TY - JOUR
T1 - Degree-based stratification of nodes in Graph Neural Networks
AU - Ali, Ameen
AU - Wolf, Lior
AU - Cevikalp, Hakan
N1 - Publisher Copyright:
© 2023 A. Ali, L. Wolf & H. Cevikalp.
PY - 2023
Y1 - 2023
N2 - Despite much research, Graph Neural Networks (GNNs) still do not display the favorable scaling properties of other deep neural networks such as Convolutional Neural Networks and Transformers. Previous work has identified issues such as oversmoothing of the latent representation and has suggested solutions such as skip connections and sophisticated normalization schemes. Here, we propose a different approach that is based on a stratification of the graph nodes. We provide motivation that the nodes in a graph can be stratified into those with a low degree and those with a high degree, and that the two groups are likely to behave differently. Based on this motivation, we modify the Graph Neural Network (GNN) architecture so that the weight matrices are learned, separately, for the nodes in each group. This simple-to-implement modification seems to improve performance across datasets and GNN methods. To verify that this increase in performance is not only due to the added capacity, we also perform the same modification for random splits of the nodes, which does not lead to any improvement.
KW - graph neural networks
KW - message passing
KW - node degree
UR - http://www.scopus.com/inward/record.url?scp=85189635571&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85189635571
SN - 2640-3498
VL - 222
SP - 15
EP - 27
JO - Proceedings of Machine Learning Research
JF - Proceedings of Machine Learning Research
T2 - 15th Asian Conference on Machine Learning, ACML 2023
Y2 - 11 November 2023 through 14 November 2023
ER -