[源头活水] Graph: No Matter How Poor the Performance, Still No Pre-Training? Self-Supervised Learning Is the Real Deal!
"How does the channel stay so clear? Because fresh water flows in from its source." Learning from cutting-edge work and drawing inspiration from other research fields gives us a clearer understanding of the essence of our own research problems, and is an inexhaustible source of self-improvement. To that end, we curate selected paper-reading notes in the "源头活水" column to help you read the research literature broadly and deeply. Stay tuned.
Link: https://zhuanlan.zhihu.com/p/150456349
Pre-training on graph data has recently attracted wide attention. The relevant papers are listed below; a toy sketch of the contrastive recipe several of them share follows the list.
Strategies for Pre-training Graph Neural Networks. ICLR 2020.
GPT-GNN: Generative Pre-Training of Graph Neural Networks. KDD 2020.
Pre-Training Graph Neural Networks for Generic Structural Feature Extraction. 2020.
Self-supervised Learning: Generative or Contrastive. 2020.
Gaining insight into SARS-CoV-2 infection and COVID-19 severity using self-supervised edge features and Graph Neural Networks. ICML 2020.
When Does Self-Supervision Help Graph Convolutional Networks? ICML 2020.
Multi-Stage Self-Supervised Learning for Graph Convolutional Networks on Graphs with Few Labeled Nodes. AAAI 2020.
GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training. KDD 2020.
Self-Supervised Graph Representation Learning via Global Context Prediction. 2020.
Contrastive Multi-View Representation Learning on Graphs. 2020.
Self-supervised Training of Graph Convolutional Networks. 2020.
Self-supervised Learning on Graphs: Deep Insights and New Directions. 2020.
GRAPH-BERT: Only Attention is Needed for Learning Graph Representations. 2020.
Graph Neural Distance Metric Learning with GRAPH-BERT. 2020.
Segmented GRAPH-BERT for Graph Instance Modeling. 2020.
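Several of the papers above (e.g. GCC, Contrastive Multi-View Representation Learning on Graphs) share the same basic recipe: encode two augmented views of a graph with a shared GNN and pull matching node (or graph) embeddings together with a contrastive loss. Below is a minimal, self-contained sketch of that recipe in PyTorch. The encoder, the edge-dropout augmentation, and the InfoNCE loss here are generic illustrations written for this note, not code from any of the papers.

```python
import torch
import torch.nn.functional as F

def normalize_adj(adj):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    a = adj + torch.eye(adj.size(0))
    d_inv_sqrt = a.sum(dim=1).pow(-0.5)
    return d_inv_sqrt.unsqueeze(1) * a * d_inv_sqrt.unsqueeze(0)

class TinyGCN(torch.nn.Module):
    # Two-layer GCN encoder on dense adjacency matrices (toy scale only).
    def __init__(self, in_dim, hid_dim, out_dim):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, out_dim)

    def forward(self, x, adj_norm):
        h = F.relu(adj_norm @ self.lin1(x))
        return adj_norm @ self.lin2(h)

def edge_dropout(adj, p=0.2):
    # Graph augmentation: randomly drop edges, keeping the graph symmetric.
    keep = torch.triu((torch.rand_like(adj) > p).float(), diagonal=1)
    keep = keep + keep.t()
    return adj * keep

def info_nce(z1, z2, tau=0.5):
    # InfoNCE: node i in view 1 is the positive for node i in view 2;
    # all other nodes in view 2 act as negatives.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

# Toy pre-training step on a random graph.
n_nodes, feat_dim = 8, 16
x = torch.randn(n_nodes, feat_dim)
adj = torch.triu((torch.rand(n_nodes, n_nodes) > 0.7).float(), diagonal=1)
adj = adj + adj.t()  # symmetric adjacency, no self-loops

encoder = TinyGCN(feat_dim, 32, 32)
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

opt.zero_grad()
z1 = encoder(x, normalize_adj(edge_dropout(adj)))  # view 1
z2 = encoder(x, normalize_adj(edge_dropout(adj)))  # view 2
loss = info_nce(z1, z2)
loss.backward()
opt.step()
print(f"contrastive pre-training loss: {loss.item():.4f}")
```

In practice one would swap the dense toy GCN for a library encoder (e.g. from PyTorch Geometric) and, after pre-training, fine-tune the encoder on a downstream task, which is the setting most of the papers above study.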
References:
https://zhuanlan.zhihu.com/p/124663407
https://zhuanlan.zhihu.com/p/112086408
https://archwalker.github.io/blog/2019/08/08/GNN-Pretraining-1.html
https://zhuanlan.zhihu.com/p/149222809
https://zhuanlan.zhihu.com/p/150281704
https://zhuanlan.zhihu.com/p/150237915
https://zhuanlan.zhihu.com/p/150112070
This article is intended for academic exchange only; it does not imply that this account endorses its views or takes responsibility for the authenticity of its content. Copyright remains with the original author; in case of infringement, please notify us for removal.