TY - JOUR
T1 - Graph-in-Graph (GiG)
T2 - Learning interpretable latent graphs in non-Euclidean domain for biological and healthcare applications
AU - Zaripova, Kamilia
AU - Cosmo, Luca
AU - Kazi, Anees
AU - Ahmadi, Seyed Ahmad
AU - Bronstein, Michael M.
AU - Navab, Nassir
N1 - Publisher Copyright:
© 2023 Elsevier B.V.
PY - 2023/8
Y1 - 2023/8
N2 - Graphs are a powerful tool for representing and analyzing unstructured, non-Euclidean data ubiquitous in the healthcare domain. Two prominent examples are molecule property prediction and brain connectome analysis. Importantly, recent works have shown that considering relationships between input data samples has a positive regularizing effect on the downstream task in healthcare applications. These relationships are naturally modeled by a (possibly unknown) graph structure between input samples. In this work, we propose Graph-in-Graph (GiG), a neural network architecture for protein classification and brain imaging applications that exploits the graph representation of the input data samples and their latent relations. We assume an initially unknown latent-graph structure between graph-valued input data and propose to learn a parametric model for message passing within and across input graph samples, end-to-end along with the latent structure connecting the input graphs. Further, we introduce a Node Degree Distribution Loss (NDDL) that regularizes the structure of the predicted latent relationships. This regularization can significantly improve performance on the downstream task. Moreover, the obtained latent graph can represent patient population models or networks of molecule clusters, providing a level of interpretability and knowledge discovery in the input domain, which is of particular value in healthcare.
KW - Graph deep learning
KW - Knowledge discovery
UR - http://www.scopus.com/inward/record.url?scp=85162812901&partnerID=8YFLogxK
DO - 10.1016/j.media.2023.102839
M3 - Article
C2 - 37263109
AN - SCOPUS:85162812901
SN - 1361-8415
VL - 88
JO - Medical Image Analysis
JF - Medical Image Analysis
M1 - 102839
ER -