GAT and graph contrastive learning are two of the most widely used techniques in graph representation learning today. NCLA, the neighbor-supervised graph contrastive learning method on learnable graph augmentation introduced in this article, can be viewed as applying node-level (node-to-node) graph contrastive learning to contrast the different heads of a multi-head GAT. Experiments show that when node labels are very limited, the self-supervised NCLA achieves higher accuracy than the semi-supervised multi-head GAT, which indicates that graph contrastive learning indeed helps to learn higher-quality node embeddings.

The main contributions of NCLA are as follows:

1) Existing graph contrastive learning methods typically rely on hand-crafted graph augmentation strategies, which require manually choosing suitable augmentation parameters for each graph dataset and thus severely limit the efficiency and generalization ability of graph contrastive learning. NCLA instead uses a multi-head graph attention mechanism to learn the graph augmentation end to end, so it adapts automatically to different graph datasets and generalizes better. Moreover, compared with hand-crafted augmentation, NCLA's attention-based learnable augmentation is safer: it generates augmented views with sufficient diversity without destroying information relevant to the downstream task (a minimal sketch of this per-head view generation follows this list).

2) Existing graph contrastive learning methods usually adopt contrastive losses proposed in computer vision directly, neglecting the topological structure of the graph, so the learned node embeddings may contradict the homophily assumption of graphs. NCLA designs a neighbor-supervised contrastive loss tailored to graph-structured data, using the topology as the supervision signal that defines positive and negative samples (see the loss sketch after this list).

3) In the standard graph contrastive learning paradigm, graph augmentation and node embedding learning are carried out in two separate stages and may require bi-level optimization. In NCLA, graph augmentation and node embeddings are learned jointly, end to end (see the training-loop sketch below).

4) Experiments show that when node labels are extremely scarce, the node classification accuracy of self-supervised NCLA consistently surpasses state-of-the-art self-supervised and semi-supervised graph contrastive learning methods, and even some semi-supervised graph neural networks.
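To make point 1) concrete, below is a minimal PyTorch sketch of a multi-head GAT layer that keeps each head's output separate, so that each head can serve as one learned augmented view. This is an illustrative reconstruction of the idea described above, not NCLA's reference implementation; all names and shapes (e.g. MultiHeadGATLayer, out_dim) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadGATLayer(nn.Module):
    """Multi-head GAT layer whose heads are kept separate, so each
    head's output can act as one learned, augmented view of the graph."""

    def __init__(self, in_dim: int, out_dim: int, num_heads: int):
        super().__init__()
        self.W = nn.Parameter(torch.empty(num_heads, in_dim, out_dim))
        self.a_src = nn.Parameter(torch.empty(num_heads, out_dim))
        self.a_dst = nn.Parameter(torch.empty(num_heads, out_dim))
        for p in (self.W, self.a_src, self.a_dst):
            nn.init.xavier_uniform_(p)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (N, in_dim); adj: (N, N) dense adjacency with self-loops.
        h = torch.einsum('nf,hfo->hno', x, self.W)       # (H, N, out_dim)
        # Standard GAT logits: e_ij = LeakyReLU(a_src.h_i + a_dst.h_j).
        src = torch.einsum('hno,ho->hn', h, self.a_src)  # (H, N)
        dst = torch.einsum('hno,ho->hn', h, self.a_dst)  # (H, N)
        e = F.leaky_relu(src.unsqueeze(2) + dst.unsqueeze(1), 0.2)
        # Softmax over each node's neighbors only: these attention weights
        # are the learned "augmented" edge weights, one topology per head.
        e = e.masked_fill(adj.unsqueeze(0) == 0, float('-inf'))
        alpha = torch.softmax(e, dim=-1)                 # (H, N, N)
        return torch.einsum('hij,hjo->hio', alpha, h)    # one view per head
```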
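For point 2), the sketch below shows a neighbor-supervised InfoNCE-style loss in which the adjacency matrix, rather than random augmentation alone, defines the positives: a node's positives are itself in the other view plus its one-hop neighbors in both views, and all non-neighbors are negatives. This is a simplified two-view illustration of the idea under the homophily assumption, assuming dense embeddings z1, z2 and a dense adjacency adj; the exact form of NCLA's loss may differ.

```python
import torch
import torch.nn.functional as F

def neighbor_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor,
                              adj: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE-style loss where the topology supplies the labels:
    node i's positives are itself in the other view plus its neighbors
    in both views; every non-neighbor is a negative."""
    n = z1.size(0)
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    eye = torch.eye(n, dtype=torch.bool, device=adj.device)
    pos = adj.bool() | eye                    # self + one-hop neighbors
    inter = torch.exp(z1 @ z2.t() / tau)      # cross-view similarities
    intra = torch.exp(z1 @ z1.t() / tau)      # within-view similarities
    # Numerator: all positive pairs (cross-view self + neighbors, and
    # within-view neighbors). Denominator: every pair except the trivial
    # within-view self-similarity.
    num = (inter * pos).sum(1) + (intra * (pos & ~eye)).sum(1)
    den = inter.sum(1) + intra.sum(1) - intra.diagonal()
    return -torch.log(num / den).mean()
```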
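Finally, for point 3), a hypothetical end-to-end training loop that reuses the two sketches above: the attention heads generate the views and the neighbor-contrastive loss trains them jointly in a single stage, with no separate augmentation step and no bi-level optimization. The random feat/adj data and the all-pairs head contrast are purely illustrative choices.

```python
import itertools
import torch

# Toy data purely for illustration: random features and a sparse,
# symmetric adjacency with self-loops.
N, F_in = 100, 32
feat = torch.randn(N, F_in)
adj = (torch.rand(N, N) < 0.05).float()
adj = ((adj + adj.t()).clamp(max=1) + torch.eye(N)).clamp(max=1)

model = MultiHeadGATLayer(in_dim=F_in, out_dim=64, num_heads=4)
opt = torch.optim.Adam(model.parameters(), lr=5e-4)
for epoch in range(200):
    views = model(feat, adj)                 # (H, N, 64): one view per head
    # Contrast every pair of heads; the augmentation (attention weights)
    # and the node embeddings are optimized jointly through this one loss.
    loss = sum(neighbor_contrastive_loss(views[i], views[j], adj)
               for i, j in itertools.combinations(range(views.size(0)), 2))
    opt.zero_grad()
    loss.backward()
    opt.step()
```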