The Surprising Power of Graph Neural Networks with Random Node Initialization

Ralph Abboud, İsmail İlkan Ceylan, Martin Grohe, and Thomas Lukasiewicz


Graph neural networks (GNNs) are effective models for representation learning on graph-structured data. However, standard GNNs are limited in their expressive power, as they cannot distinguish graphs beyond the capability of the Weisfeiler-Leman (1-WL) graph isomorphism heuristic. Nonetheless, GNNs have shown promising performance when enhanced with random node initialization (RNI), where the idea is to train and run the models with randomized initial node features. In this paper, we analyze the expressive power of GNNs with RNI, and pose the following question: are GNNs with RNI more expressive than GNNs? We prove that this is indeed the case, by showing that GNNs with RNI are universal, a first such result for GNNs not relying on computationally demanding higher-order properties. We then empirically analyze the effect of RNI on GNNs, based on carefully constructed datasets. Our empirical findings support the superior performance of GNNs with RNI over standard GNNs.
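The core idea of RNI described above can be sketched in a few lines: concatenate freshly drawn random dimensions onto each node's features before message passing, so that structurally identical nodes receive distinct representations. The following is a minimal illustrative sketch in NumPy; the function names (`random_node_init`, `gnn_layer`) and the mean-aggregation layer are our own simplifications, not the paper's implementation.

```python
import numpy as np

def random_node_init(x, num_random_dims=4, rng=None):
    """Augment node features with random initial dimensions (RNI sketch).

    x: (num_nodes, num_features) array of original node features.
    Returns a (num_nodes, num_features + num_random_dims) array whose
    extra dimensions are drawn fresh on every call, so even nodes with
    identical input features become distinguishable.
    """
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.standard_normal((x.shape[0], num_random_dims))
    return np.concatenate([x, noise], axis=1)

def gnn_layer(adj, h, w):
    """One message-passing step with mean aggregation over neighbors."""
    deg = adj.sum(axis=1, keepdims=True).clip(min=1.0)
    return np.tanh((adj @ h) / deg @ w)

# Tiny example: a 4-node cycle where every node has identical input
# features; without RNI, a 1-WL-bounded GNN could never tell them apart.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
x = np.ones((4, 2))
h = random_node_init(x, num_random_dims=4, rng=np.random.default_rng(0))
w = np.random.default_rng(1).standard_normal((h.shape[1], 8))
out = gnn_layer(adj, h, w)
```

Because the random features break the symmetry between nodes, the rows of `h` differ even though all rows of `x` are identical; this is precisely what lifts the model past the 1-WL barrier, at the cost of making the network only approximately invariant (a trade-off the paper studies empirically).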

Book Title
Proceedings of the 30th International Joint Conference on Artificial Intelligence, IJCAI 2021, August 21–26, 2021