【Survey Column】DeepGNN: How Graph Neural Networks Go Deep
In scientific research, methodologically one should "see the forest before the trees." AI research today is thriving and its techniques evolve by the day; for practitioners, systematically mapping out this vast forest of knowledge is the only way to keep hold of the trends. To that end, we curate outstanding survey articles from home and abroad in this "Survey Column." Stay tuned.
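Deep GNN models such as GCNII [4] can be run directly in CogDL [1]; for example, node classification on Cora from the command line: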
python scripts/train.py --dataset cora --model gcnii --task node_classification
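The same experiment can also be launched through CogDL's Python API: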
from cogdl.experiment import experiment
experiment(task="node_classification", model="gcnii", dataset="cora")
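Beyond the built-in models, CogDL's layer modules can be composed into a custom deep GNN. The sketch below stacks 100 GCN layers, each wrapped first in a pre-activation residual connection and then in a reversible connection in the style of RevGNN [7]. A reversible block [8] splits the features into two halves and computes y1 = x1 + F(x2), y2 = x2 + G(y1); since the inputs can be reconstructed as x2 = y2 - G(y1) and x1 = y1 - F(x2), intermediate activations need not be cached during backpropagation, which is what makes GNNs with up to 1000 layers trainable [7].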
import torch.nn as nn

from cogdl.models import BaseModel
from cogdl.layers import GCNLayer, ResGNNLayer, RevGNNLayer
from cogdl.datasets import build_dataset_from_name

data = build_dataset_from_name("cora").data

# Build a deep GNN from CogDL's GCNLayer / ResGNNLayer / RevGNNLayer.
class RevGCN(BaseModel):
    def __init__(self, in_feats, out_feats, nhidden, nlayer):
        super(RevGCN, self).__init__()
        self.input_fc = nn.Linear(in_feats, nhidden)
        self.output_fc = nn.Linear(nhidden, out_feats)
        self.layers = nn.ModuleList()
        for i in range(nlayer):
            # Hidden layers operate on nhidden-dimensional features;
            # input_fc and output_fc handle the dimension changes.
            gcn_layer = GCNLayer(nhidden, nhidden)
            # Wrap it with a pre-activation residual connection.
            resgcn_layer = ResGNNLayer(gcn_layer)
            # Wrap it with a reversible connection.
            revgcn_layer = RevGNNLayer(resgcn_layer)
            self.layers.append(revgcn_layer)

    def forward(self, graph):
        x = graph.x
        h = self.input_fc(x)
        for layer in self.layers:
            h = layer(graph, h)
        return self.output_fc(h)

deepgnn = RevGCN(data.num_features, data.num_classes, 64, nlayer=100)
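For completeness, here is a minimal training sketch. It assumes, as in PyG-style Planetoid datasets, that CogDL's Cora Graph object exposes y and train_mask; treat it as illustrative glue code rather than CogDL's canonical training loop.

import torch
import torch.nn.functional as F

optimizer = torch.optim.Adam(deepgnn.parameters(), lr=0.01, weight_decay=5e-4)
deepgnn.train()
for epoch in range(100):
    optimizer.zero_grad()
    logits = deepgnn(data)  # full-graph forward pass
    # Assumption: data.train_mask / data.y follow the usual Planetoid layout.
    loss = F.cross_entropy(logits[data.train_mask], data.y[data.train_mask])
    loss.backward()
    optimizer.step()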
References
[1] Cen, Yukuo, et al. "CogDL: An extensive toolkit for deep learning on graphs." arXiv preprint arXiv:2103.00959 (2021).
[2] Xu, Keyulu, et al. "Representation learning on graphs with jumping knowledge networks." ICML'18.
[3] Klicpera, Johannes, Aleksandar Bojchevski, and Stephan Günnemann. "Predict then propagate: Graph neural networks meet personalized pagerank." ICLR'19.
[4] Chen, Ming, et al. "Simple and deep graph convolutional networks." ICML'20.
[5] Li, Guohao, et al. "DeeperGCN: All you need to train deeper GCNs." arXiv preprint arXiv:2006.07739 (2020).
[6] Li, Guohao, et al. "DeepGCNs: Can GCNs go as deep as CNNs?" ICCV'19.
[7] Li, Guohao, et al. "Training Graph Neural Networks with 1000 Layers." ICML'21.
[8] Gomez, Aidan N., et al. "The reversible residual network: Backpropagation without storing activations." NeurIPS'17.
[9] Lv, Qingsong, et al. "Are we really making much progress? Revisiting, benchmarking, and refining heterogeneous graph neural networks." KDD'21.
[10] Feng, Wenzheng, et al. "Graph Random Neural Network for Semi-Supervised Learning on Graphs." NeurIPS'20.
[11] Qiu, Jiezhong, et al. "LightNE: A Lightweight Graph Processing System for Network Embedding." SIGMOD'21.
[12] Zou, Xu, et al. "TDGIA: Effective Injection Attacks on Graph Neural Networks." KDD'21.
[13] Huang, Tinglin, et al. "MixGCF: An Improved Training Method for Graph Neural Network-based Recommender Systems." KDD'21.