When Does Self-Supervision Help Graph Convolutional Networks?

Part of the Proceedings of the International Conference on Machine Learning (ICML 2020)


Authors

Yuning You, Tianlong Chen, Zhangyang Wang, Yang Shen

Abstract

Self-supervision, as an emerging learning technique, has been employed to train convolutional neural networks (CNNs) for more transferable, generalizable, and robust representation learning of image data. Its introduction to graph convolutional networks (GCNs), which operate on graph data, has however rarely been explored. In this study, we report the first systematic exploration and assessment of incorporating self-supervision into GCNs. We first elaborate on three mechanisms for incorporating self-supervision into GCNs, analyze the limitations of pretraining & finetuning and self-training, and proceed to focus on multi-task learning. Moreover, we design three novel self-supervised learning tasks for GCNs with both theoretical rationales and numerical comparisons. Lastly, we further integrate multi-task self-supervision into graph adversarial training. Our results show that, with properly designed task forms and incorporation mechanisms, self-supervision benefits GCNs in gaining greater generalizability and robustness.
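To make the multi-task incorporation mechanism described in the abstract concrete, the sketch below shows one way a shared GCN encoder could be trained jointly on a target task (semi-supervised node classification) and an auxiliary self-supervised pretext task. This is a minimal, hypothetical illustration rather than the authors' released code: the pretext task used here (masked feature reconstruction) is a stand-in for the paper's proposed tasks, and all module names, the loss weight `alpha`, and the toy data are illustrative assumptions.

```python
# Minimal sketch (NOT the authors' implementation) of multi-task
# self-supervision for a GCN: a shared encoder is optimized on a joint
# objective  loss = loss_target + alpha * loss_pretext.
# Pretext task here: masked feature reconstruction (illustrative choice).

import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph convolution: H' = ReLU(A_hat @ H @ W)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        return F.relu(a_hat @ self.lin(x))


class MultiTaskGCN(nn.Module):
    """Shared GCN encoder with a target-task head and a pretext-task head."""
    def __init__(self, in_dim, hid_dim, n_classes):
        super().__init__()
        self.enc = GCNLayer(in_dim, hid_dim)           # shared representation
        self.cls_head = nn.Linear(hid_dim, n_classes)  # node classification
        self.ssl_head = nn.Linear(hid_dim, in_dim)     # feature reconstruction

    def forward(self, x, a_hat):
        h = self.enc(x, a_hat)
        return self.cls_head(h), self.ssl_head(h)


def normalize_adj(adj):
    """Symmetric normalization A_hat = D^{-1/2} (A + I) D^{-1/2} (dense, toy-scale)."""
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)


# Toy graph: 6 nodes, 8 features, 3 classes, only 3 nodes labeled.
x = torch.randn(6, 8)
adj = (torch.rand(6, 6) > 0.6).float()
adj = ((adj + adj.t()) > 0).float()
a_hat = normalize_adj(adj)
y = torch.randint(0, 3, (6,))
labeled = torch.tensor([0, 2, 4])

model = MultiTaskGCN(8, 16, 3)
opt = torch.optim.Adam(model.parameters(), lr=0.01)
alpha = 0.5  # pretext-loss weight (hypothetical value)

for step in range(100):
    # Pretext: mask a random subset of input features and reconstruct them.
    mask = torch.rand_like(x) < 0.15
    x_masked = x.masked_fill(mask, 0.0)

    logits, recon = model(x_masked, a_hat)
    loss_sup = F.cross_entropy(logits[labeled], y[labeled])     # target task
    loss_ssl = (F.mse_loss(recon[mask], x[mask])                # pretext task
                if mask.any() else torch.zeros(()))
    loss = loss_sup + alpha * loss_ssl                          # joint objective

    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because both heads share the encoder, gradients from the pretext loss regularize the representation used by the target task, which is the intuition behind preferring multi-task learning over separate pretraining & finetuning in the abstract.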