Improving the Robustness of GraphSAINT via Stability Training

Keywords: Graph Neural Networks (GNNs), Training Stability, Normalization Techniques, GNN Tricks, Link Prediction

Abstract

The field of Graph Neural Networks (GNNs) has developed rapidly in recent years, owing to GNNs' strong capability to represent data in non-Euclidean spaces, such as graph data. However, as dataset scales continue to grow, sampling is commonly introduced to make GNNs scalable, which in turn causes instability during training. For example, when the Graph SAmpling based INductive learning meThod (GraphSAINT) is applied to the link prediction task, training may fail to converge with a probability ranging from 0.1 to 0.4. This paper proposes improved GraphSAINTs that introduce two normalization techniques and one GNN trick into the traditional GraphSAINT to solve the training stability problem and obtain more robust training results. The improved GraphSAINTs successfully eliminate instability during training and improve the robustness of the traditional model. In addition, they accelerate the convergence of the training procedure and generally achieve higher prediction accuracy than the traditional GraphSAINT. We validate the improved methods through experiments on the citation dataset of the Open Graph Benchmark (OGB).
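The abstract describes inserting normalization into GraphSAINT training but does not reproduce the implementation here. As a minimal sketch of the underlying idea, the following PyTorch Geometric snippet places a normalization layer between the message-passing layers of a sampled-subgraph encoder. The module name StabilizedGNN, the choice of SAGEConv as the backbone, the specific pair of normalizations (batch vs. layer), and all layer sizes are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal sketch, assuming PyTorch and PyTorch Geometric are installed.
# StabilizedGNN, the SAGEConv backbone, and all sizes are illustrative
# assumptions, not the paper's exact architecture.
import torch
import torch.nn.functional as F
from torch_geometric.nn import SAGEConv


class StabilizedGNN(torch.nn.Module):
    """GNN encoder with a normalization layer between message-passing
    layers, the kind of change used to stabilize sampled-subgraph training."""

    def __init__(self, in_dim: int, hid_dim: int, out_dim: int, norm: str = "batch"):
        super().__init__()
        self.conv1 = SAGEConv(in_dim, hid_dim)
        self.conv2 = SAGEConv(hid_dim, out_dim)
        # Two normalization options, mirroring the "two normalization
        # techniques" the abstract mentions (which two exactly is our
        # assumption here).
        self.norm = (torch.nn.BatchNorm1d(hid_dim) if norm == "batch"
                     else torch.nn.LayerNorm(hid_dim))

    def forward(self, x: torch.Tensor, edge_index: torch.Tensor) -> torch.Tensor:
        x = self.conv1(x, edge_index)
        x = self.norm(x)  # keeps activation statistics comparable across sampled subgraphs
        x = F.relu(x)
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)
```

In a GraphSAINT pipeline, each training step runs the encoder on a freshly sampled subgraph (e.g., from PyTorch Geometric's GraphSAINTRandomWalkSampler), so the normalization layer sees a different node population at every step; that is exactly where unnormalized activations can drift and destabilize training.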

Published
2021-12-02
How to Cite
[1] Y. Wang, H. Chi, and Q. Hao, "Improving the Robustness of GraphSAINT via Stability Training," paradigmplus, vol. 2, no. 3, pp. 1–13, Dec. 2021.