TY - JOUR
T1 - Graph Kernel Neural Networks
AU - Cosmo, Luca
AU - Minello, Giorgia
AU - Bicciato, Alessandro
AU - Bronstein, Michael M.
AU - Rodolà, Emanuele
AU - Rossi, Luca
AU - Torsello, Andrea
N1 - Publisher Copyright:
© 2012 IEEE.
PY - 2025
Y1 - 2025
N2 - The convolution operator at the core of many modern neural architectures can effectively be seen as performing a dot product between an input matrix and a filter. While this is readily applicable to data such as images, which can be represented as regular grids in Euclidean space, extending the convolution operator to work on graphs proves more challenging, due to their irregular structure. In this article, we propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain. This allows us to define an entirely structural model that does not require computing the embedding of the input graph. Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability in terms of the structural masks learned during training, similarly to the convolutional masks in traditional convolutional neural networks (CNNs). We perform an extensive ablation study to investigate the impact of the model hyperparameters and show that our model achieves competitive performance on standard graph classification and regression datasets.
AB - The convolution operator at the core of many modern neural architectures can effectively be seen as performing a dot product between an input matrix and a filter. While this is readily applicable to data such as images, which can be represented as regular grids in Euclidean space, extending the convolution operator to work on graphs proves more challenging, due to their irregular structure. In this article, we propose to use graph kernels, i.e., kernel functions that compute an inner product on graphs, to extend the standard convolution operator to the graph domain. This allows us to define an entirely structural model that does not require computing the embedding of the input graph. Our architecture allows plugging in any type of graph kernel and has the added benefit of providing some interpretability in terms of the structural masks learned during training, similarly to the convolutional masks in traditional convolutional neural networks (CNNs). We perform an extensive ablation study to investigate the impact of the model hyperparameters and show that our model achieves competitive performance on standard graph classification and regression datasets.
KW - Deep learning
KW - graph kernel
KW - graph neural network (GNN)
UR - http://www.scopus.com/inward/record.url?scp=105002580018&partnerID=8YFLogxK
U2 - 10.1109/TNNLS.2024.3400850
DO - 10.1109/TNNLS.2024.3400850
M3 - Article
C2 - 38814768
AN - SCOPUS:105002580018
SN - 2162-237X
VL - 36
SP - 6257
EP - 6270
JO - IEEE Transactions on Neural Networks and Learning Systems
JF - IEEE Transactions on Neural Networks and Learning Systems
IS - 4
ER -