Publication Details
Diana Gomes, Frederik Ruelens, Kyriakos Efthymiadis, Peter Vrancx

I Can't Believe It's Not Better Workshop: Understanding Deep Learning Through Empirical Falsification

Contribution To Book Anthology


Graph neural networks (GNNs) are commonly applied to graph data, but their performance is often poorly understood. It is easy to find examples in which a GNN fails to learn useful graph representations, but generally hard to explain why. In this work, we analyse the effectiveness of graph representations learned by shallow (2-layer) GNNs on input graphs with different structural properties and feature information. We expand on the failure cases by decoupling the impact of structural and feature information on the learning process. Our results indicate that the implicit architectural assumptions of GNNs are tightly related to the structural properties of the input graph and may impair their learning ability. When these assumptions do not match the input graph, GNNs can often be outperformed by structure-agnostic methods such as a multi-layer perceptron (MLP).
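To make the comparison concrete, below is a minimal sketch (not the paper's implementation; the graph, dimensions, and weights are illustrative) of the two model families being contrasted: a 2-layer graph convolutional network, which aggregates over the normalised adjacency at every layer, versus an MLP of matching width that ignores graph structure entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def gcn_layer(A_hat, X, W):
    # One graph-convolution step: aggregate neighbour features, then project (ReLU).
    return np.maximum(A_hat @ X @ W, 0.0)

def two_layer_gcn(A, X, W1, W2):
    # Symmetrically normalised adjacency with self-loops (Kipf & Welling style):
    # A_hat = D^{-1/2} (A + I) D^{-1/2}
    A_tilde = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_tilde.sum(axis=1))
    A_hat = A_tilde * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    return gcn_layer(A_hat, gcn_layer(A_hat, X, W1), W2)

def two_layer_mlp(X, W1, W2):
    # Structure-agnostic baseline: identical widths, no adjacency term.
    return np.maximum(np.maximum(X @ W1, 0.0) @ W2, 0.0)

# Toy input: 5 nodes on a ring, 4-dimensional node features, 2 hidden units.
n, f, h = 5, 4, 2
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
X = rng.normal(size=(n, f))
W1, W2 = rng.normal(size=(f, h)), rng.normal(size=(h, h))

Z_gnn = two_layer_gcn(A, X, W1, W2)  # node embeddings mixing 2-hop structure
Z_mlp = two_layer_mlp(X, W1, W2)     # node embeddings from features alone
print(Z_gnn.shape, Z_mlp.shape)      # both (5, 2)
```

The only difference between the two models is the `A_hat @` aggregation; when the graph structure carries no signal relevant to the task, that aggregation can only blur the feature information, which is the regime in which the MLP baseline wins.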