By tuning γ, β, and ρ, the framework creates sparse or dense, assortative or disassortative networks, making it ideal for stress-testing new GNNs.

Stress-Test Node & Link Models with One Click: Meet HypNF

Abstract and 1. Introduction

  2. Related work

  3. HypNF Model

    3.1 The S1/H2 model

    3.2 The bipartite-S1/H2 model

    3.3 Assigning labels to nodes

  4. HypNF benchmarking framework

  5. Experiments

    5.1 Parameter Space

    5.2 Machine learning models

  6. Results

  7. Conclusion, Acknowledgments and Disclosure of Funding, and References


A. Empirical validation of HypNF

B. Degree distribution and clustering control in HypNF

C. Hyperparameters of the machine learning models

D. Fluctuations in the performance of machine learning models

E. Homophily in the synthetic networks

F. Exploring the parameters’ space

3 HypNF Model


3.1 The S1/H2 model

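As a hedged point of reference, the sketch below follows the standard S1 formulation (Serrano, Krioukov and Boguñá): each node receives a hidden degree κ drawn from a power law with exponent γ and an angle θ on a circle of radius R = N/2π, and a pair of nodes connects with probability 1/(1 + χ^β), where χ = RΔθ/(μ κ_i κ_j) and μ fixes the target average degree. The function name, defaults, and target average degree are illustrative; this is not the authors' implementation.

```python
import numpy as np

def sample_s1_network(n, gamma=2.7, beta=2.0, mean_degree=10.0, kappa_0=1.0, seed=0):
    """Minimal sketch of the standard S1 generative rule (illustrative, not the authors' code)."""
    rng = np.random.default_rng(seed)

    # Hidden degrees from a power law P(kappa) ~ kappa**(-gamma) with kappa >= kappa_0.
    kappa = kappa_0 * (1.0 - rng.random(n)) ** (-1.0 / (gamma - 1.0))
    # Angular (similarity) coordinates on a circle of radius R = n / (2*pi).
    theta = rng.uniform(0.0, 2.0 * np.pi, size=n)
    R = n / (2.0 * np.pi)

    # mu fixes the expected average degree for beta > 1 (standard S1 normalization).
    mu = beta * np.sin(np.pi / beta) / (2.0 * np.pi * mean_degree)

    # Pairwise angular separations along the circle.
    d_theta = np.abs(theta[:, None] - theta[None, :])
    d_theta = np.minimum(d_theta, 2.0 * np.pi - d_theta)

    # Connection probability p_ij = 1 / (1 + chi_ij**beta), chi_ij = R*d_theta/(mu*kappa_i*kappa_j).
    chi = R * d_theta / (mu * np.outer(kappa, kappa))
    p = 1.0 / (1.0 + chi ** beta)

    # Sample each pair once (upper triangle) and symmetrize; no self-loops.
    upper = np.triu(rng.random((n, n)) < p, k=1)
    adjacency = (upper | upper.T).astype(int)
    return adjacency, kappa, theta
```

Lowering β toward 1 weakens the coupling to angular distance and hence the clustering, while γ closer to 2 yields more heterogeneous degrees; this is the knob-turning referred to above. In the standard S1-to-H2 mapping, the same connection probabilities arise from hyperbolic distances after mapping each hidden degree to a radial coordinate r_i = R_H2 − 2 ln(κ_i/κ_0).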

3.2 The bipartite-S1/H2 model

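As a hedged sketch of the bipartite extension, assuming it mirrors the unipartite rule: features live on the same circle as the nodes, each with its own hidden degree, and an independent inverse temperature β_b governs how sharply angular proximity determines which features a node carries. A node i would then connect to a feature f with probability

$$
p_{if} = \frac{1}{1 + \chi_{if}^{\,\beta_b}},
\qquad
\chi_{if} = \frac{R\,\Delta\theta_{if}}{\mu_b\,\kappa_i\,\kappa_f},
$$

where Δθ_if is the angular separation, κ_i and κ_f are the hidden degrees of the node and the feature, and μ_b sets the expected number of features per node. Because both layers share the angular coordinates, β and β_b jointly control the correlation between topology and node features (cf. Figure 2).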

3.3 Assigning labels to nodes


4 HypNF benchmarking framework

The HypNF benchmarking framework depicted in Fig. 1 combines the S1/H2 and bipartite-S1/H2 models within a unified similarity space. Additionally, it incorporates a method for label assignment. This integration facilitates the creation of networks exhibiting a wide range of structural properties and varying degrees of correlation between nodes and their features. Specifically, our framework allows us to control the following properties:


Leveraging the HypNF model with varying parameters, our benchmarking framework generates diverse graph-structured data. This allows graph machine learning models to be evaluated on networks with different connectivity patterns and different correlations between topology and node features. For tasks such as NC and LP, the framework facilitates fair model comparisons, helping to assess a novel GNN against state-of-the-art architectures and providing insights into the data's impact on performance.
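To make the intended workflow concrete, below is a minimal, hypothetical sketch of such a benchmark sweep. The parameter symbols mirror those used in the text (γ, β, β_b, NL, α), but the grid values, generate_network, and evaluate_model are placeholders standing in for the HypNF sampler and for the graph machine learning models under comparison; they are not part of the framework itself.

```python
import itertools
import numpy as np

# Hypothetical parameter grid; the symbols mirror the text, the values are illustrative.
GRID = {
    "gamma":    [2.1, 2.7, 3.5],  # exponent of the degree distribution
    "beta":     [1.5, 3.0],       # clustering / similarity coupling in the network layer
    "beta_b":   [1.5, 3.0],       # counterpart of beta in the node-feature (bipartite) layer
    "n_labels": [6],              # NL, number of node labels
    "alpha":    [10.0],           # strength of label homophily
}

def generate_network(params, seed):
    """Placeholder for the HypNF generator: random data with the right shapes."""
    rng = np.random.default_rng(seed)
    n = 500
    upper = np.triu(rng.random((n, n)) < 0.02, k=1)
    adjacency = (upper | upper.T).astype(int)
    features = rng.random((n, 32))
    labels = rng.integers(0, params["n_labels"], size=n)
    return adjacency, features, labels

def evaluate_model(adjacency, features, labels, seed):
    """Placeholder for training and scoring a GNN on node classification."""
    rng = np.random.default_rng(seed)
    return {"nc_accuracy": float(rng.uniform(0.4, 0.9))}

results = []
for values in itertools.product(*GRID.values()):
    params = dict(zip(GRID, values))
    # Average over several random seeds to smooth out sampling noise.
    scores = [
        evaluate_model(*generate_network(params, seed), seed=seed)["nc_accuracy"]
        for seed in range(5)
    ]
    results.append({**params, "nc_accuracy_mean": float(np.mean(scores))})
```

Keeping the sweep explicit in this way is what allows performance differences between models to be traced back to specific structural properties of the generated data.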

Figure 2: The impact of topology-feature correlation, controlled by β and βb, on the performance of graph machine learning models in two tasks: (a) node classification and (b) link prediction. We set NL = 6 and α = 10 for the NC task. Each box ranges from the first quartile to the third quartile, with a horizontal line at the median; the whiskers extend from the quartiles to the minimum and maximum. The results are averaged over all other parameters.


:::info Authors:

(1) Roya Aliakbarisani, Universitat de Barcelona & UBICS (roya_aliakbarisani@ub.edu), contributed equally;

(2) Robert Jankowski, Universitat de Barcelona & UBICS (robert.jankowski@ub.edu), contributed equally;

(3) M. Ángeles Serrano, Universitat de Barcelona, UBICS & ICREA (marian.serrano@ub.edu);

(4) Marián Boguñá, Universitat de Barcelona & UBICS (marian.boguna@ub.edu).

:::


:::info This paper is available on arXiv under a CC BY 4.0 Deed (Attribution 4.0 International) license.

:::

