
【专知荟萃 01】Deep Learning Knowledge Resources: The Complete Collection (Getting Started / Advanced / Papers / Code / Data / Surveys / Domain Experts, etc.) (PDF download included)

2017-11-01 · 专知 Content Team · 专知

Tap "专知" above to follow us and get more AI knowledge!


[Introduction] Topic collections are one of 专知's core features, providing users with systematic knowledge-learning services in the AI field. Each topic collection gathers and organizes the best (Awesome) resources on a topic from across the web, so that AI practitioners can learn conveniently and solve problems at work. Built on 专知's AI topic knowledge tree, the collections are produced by professional human editors assisted by algorithmic tools, and are kept up to date. If you are interested in creating topic collections, you are welcome to join the 专知 AI Creators Program and build them with us! Today 专知 presents the first topic collection: the complete set of deep learning resources (getting started / advanced / papers / code / data / experts, etc.). To view it, visit www.zhuanzhi.ai, or follow the WeChat public account and reply "专知" to enter 专知, then search for the topic "深度学习" (deep learning). Feel free to forward and share! A PDF download link is also provided; see the end of this article.


Get to know 专知: a new way of acquiring knowledge!


专知 Collection - Deep Learning - Table of Contents

  • Getting Started

  • Advanced Papers

  • Deep Belief Network (DBN) (Milestone of Deep Learning Eve)

  • ImageNet Evolution (Deep Learning broke out from here)

  • Speech Recognition Evolution

  • Model

  • Optimization

  • Unsupervised Learning / Deep Generative Model

  • RNN / Sequence-to-Sequence Model

  • Neural Turing Machine

  • Deep Reinforcement Learning

  • Deep Transfer Learning / Lifelong Learning / especially for RL

  • One Shot Deep Learning

  • NLP (Natural Language Processing)

  • Object Detection

  • Visual Tracking

  • Image Caption

  • Machine Translation

  • Robotics

  • Object Segmentation

  • Surveys

  • Tutorial

  • Video Tutorials

  • Courses

  • Videos and Lectures

  • Code

  • Domain Experts

  • Important Websites

  • Free Online Books

  • Datasets


Deep Learning

Getting Started

  1. 《一天搞懂深度学习》 (Understanding Deep Learning in One Day), a 300-page slide deck by Hung-yi Lee (李宏毅), National Taiwan University. Slides: [https://www.slideshare.net/tw_dsconf/ss-62245351] Video: [https://www.youtube.com/watch?v=ZrEsLwCjdxY] [https://pan.baidu.com/s/1i4DhD7R]

  2. Deep Learning study-notes series, parts 1-8 (in Chinese) [http://blog.csdn.net/zouxy09/article/details/8775360]

  3. 深层学习为何要"Deep" (Why deep learning has to be "deep"), parts 1 and 2 (in Chinese) [https://zhuanlan.zhihu.com/p/22888385?refer=YJango] [https://zhuanlan.zhihu.com/p/24245040]

  4. 《神经网络与深度学习》 (Neural Networks and Deep Learning), by Xipeng Qiu (邱锡鹏), Chinese-language book, 2017 [https://nndl.github.io/]

  5. 《Neural Networks and Deep Learning》 by Michael Nielsen, Aug 2017. Original: [http://neuralnetworksanddeeplearning.com/index.html] Chinese translation: [http://www.liuxiao.org/wp-content/uploads/2016/10/nndl-ebook.pdf] Source code: [https://github.com/mnielsen/neural-networks-and-deep-learning]


Advanced Papers

Deep Belief Network (DBN) (Milestone of Deep Learning Eve)

  1. Hinton, Geoffrey E., Simon Osindero, and Yee-Whye Teh. "A fast learning algorithm for deep belief nets." Neural computation 18.7 (2006): 1527-1554. http://www.cs.toronto.edu/~hinton/absps/ncfast.pdf

  2. Hinton, Geoffrey E., and Ruslan R. Salakhutdinov. "Reducing the dimensionality of data with neural networks." Science 313.5786 (2006): 504-507. http://www.cs.toronto.edu/~hinton/science.pdf


ImageNet Evolution (Deep Learning broke out from here)

  1. Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. "Imagenet classification with deep convolutional neural networks." Advances in neural information processing systems. 2012. http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf

  2. Simonyan, Karen, and Andrew Zisserman. "Very deep convolutional networks for large-scale image recognition." arXiv preprint arXiv:1409.1556 (2014). https://arxiv.org/pdf/1409.1556.pdf

  3. Szegedy, Christian, et al. "Going deeper with convolutions." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2015. http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf

  4. He, Kaiming, et al. "Deep residual learning for image recognition." arXiv preprint arXiv:1512.03385 (2015). https://arxiv.org/pdf/1512.03385.pdf


Speech Recognition Evolution

  1. Hinton, Geoffrey, et al. "Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups." IEEE Signal Processing Magazine 29.6 (2012): 82-97. http://cs224d.stanford.edu/papers/maas_paper.pdf

  2. Graves, Alex, Abdel-rahman Mohamed, and Geoffrey Hinton. "Speech recognition with deep recurrent neural networks." 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013. http://arxiv.org/pdf/1303.5778.pdf

  3. Graves, Alex, and Navdeep Jaitly. "Towards End-To-End Speech Recognition with Recurrent Neural Networks." ICML. Vol. 14. 2014. [http://www.jmlr.org/proceedings/papers/v32/graves14.pdf]

  4. Sak, Haşim, et al. "Fast and accurate recurrent neural network acoustic models for speech recognition." arXiv preprint arXiv:1507.06947 (2015). [http://arxiv.org/pdf/1507.06947] (Google Speech Recognition System)

  5. Amodei, Dario, et al. "Deep speech 2: End-to-end speech recognition in english and mandarin." arXiv preprint arXiv:1512.02595 (2015). https://arxiv.org/pdf/1512.02595.pdf

  6. W. Xiong, J. Droppo, X. Huang, F. Seide, M. Seltzer, A. Stolcke, D. Yu, G. Zweig. "Achieving Human Parity in Conversational Speech Recognition." arXiv preprint arXiv:1610.05256 (2016). [https://arxiv.org/pdf/1610.05256v1] (State-of-the-art in speech recognition, Microsoft)


Model

  1. Hinton, Geoffrey E., et al. "Improving neural networks by preventing co-adaptation of feature detectors." arXiv preprint arXiv:1207.0580 (2012). https://arxiv.org/pdf/1207.0580.pdf

  2. Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." Journal of Machine Learning Research 15.1 (2014): 1929-1958. [http://www.jmlr.org/papers/volume15/srivastava14a.old/source/srivastava14a.pdf]

  3. Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." arXiv preprint arXiv:1502.03167 (2015). [http://arxiv.org/pdf/1502.03167] (An outstanding work in 2015)

  4. Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer normalization." arXiv preprint arXiv:1607.06450 (2016). [https://arxiv.org/pdf/1607.06450.pdf] (Update of Batch Normalization)

  5. Courbariaux, Matthieu, et al. "Binarized Neural Networks: Training Neural Networks with Weights and Activations Constrained to+ 1 or−1." https://pdfs.semanticscholar.org/f832/b16cb367802609d91d400085eb87d630212a.pdf

  6. Jaderberg, Max, et al. "Decoupled neural interfaces using synthetic gradients." arXiv preprint arXiv:1608.05343 (2016). [https://arxiv.org/pdf/1608.05343] (Innovation of training method, amazing work)

  7. Chen, Tianqi, Ian Goodfellow, and Jonathon Shlens. "Net2net: Accelerating learning via knowledge transfer." arXiv preprint arXiv:1511.05641 (2015). [https://arxiv.org/abs/1511.05641] (Modify previously trained network to reduce training epochs)

  8. Wei, Tao, et al. "Network Morphism." arXiv preprint arXiv:1603.01670 (2016). [https://arxiv.org/abs/1603.01670] (Modify previously trained network to reduce training epochs)


Optimization

  1. Sutskever, Ilya, et al. "On the importance of initialization and momentum in deep learning." ICML (3) 28 (2013): 1139-1147. http://www.jmlr.org/proceedings/papers/v28/sutskever13.pdf

  2. Kingma, Diederik, and Jimmy Ba. "Adam: A method for stochastic optimization." arXiv preprint arXiv:1412.6980 (2014). [http://arxiv.org/pdf/1412.6980] (Maybe used most often currently)

  3. Andrychowicz, Marcin, et al. "Learning to learn by gradient descent by gradient descent." arXiv preprint arXiv:1606.04474 (2016). [https://arxiv.org/pdf/1606.04474] (Neural optimizer, amazing work)

  4. Han, Song, Huizi Mao, and William J. Dally. "Deep compression: Compressing deep neural network with pruning, trained quantization and huffman coding." CoRR, abs/1510.00149 2 (2015). https://pdfs.semanticscholar.org/5b6c/9dda1d88095fa4aac1507348e498a1f2e863.pdf

  5. Iandola, Forrest N., et al. "SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <1MB model size." arXiv preprint arXiv:1602.07360 (2016). [http://arxiv.org/pdf/1602.07360] (Also a new direction to optimize NN; DeePhi Tech startup)
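
For readers who want to see how a few of the techniques cataloged in the Model and Optimization lists above fit together in practice, here is a minimal, illustrative sketch (not from the original article) combining Dropout, Batch Normalization, and the Adam optimizer in Keras; the layer sizes and data below are made up purely for demonstration.

```python
# Minimal Keras sketch combining Dropout (Srivastava et al., 2014),
# Batch Normalization (Ioffe & Szegedy, 2015) and Adam (Kingma & Ba, 2014).
# Sizes and data are arbitrary stand-ins, for illustration only.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization
from keras.optimizers import Adam

model = Sequential([
    Dense(256, activation='relu', input_shape=(784,)),
    BatchNormalization(),            # normalize activations of the previous layer
    Dropout(0.5),                    # randomly drop half the units during training
    Dense(10, activation='softmax'),
])
model.compile(optimizer=Adam(lr=1e-3),
              loss='categorical_crossentropy',
              metrics=['accuracy'])

# Random stand-in data so the snippet runs end to end.
x = np.random.rand(128, 784).astype('float32')
y = np.eye(10)[np.random.randint(0, 10, 128)]
model.fit(x, y, epochs=1, batch_size=32)
```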


Unsupervised Learning / Deep Generative Model

  1. Le, Quoc V. "Building high-level features using large scale unsupervised learning." 2013 IEEE international conference on acoustics, speech and signal processing. IEEE, 2013. [http://arxiv.org/pdf/1112.6209.pdf] (Milestone; Andrew Ng, Google Brain Project, "Cat")

  2. Kingma, Diederik P., and Max Welling. "Auto-encoding variational bayes." arXiv preprint arXiv:1312.6114 (2013). [http://arxiv.org/pdf/1312.6114] (VAE)

  3. Goodfellow, Ian, et al. "Generative adversarial nets." Advances in Neural Information Processing Systems. 2014. http://papers.nips.cc/paper/5423-generative-adversarial-nets.pdf

  4. Radford, Alec, Luke Metz, and Soumith Chintala. "Unsupervised representation learning with deep convolutional generative adversarial networks." arXiv preprint arXiv:1511.06434 (2015). [http://arxiv.org/pdf/1511.06434] (DCGAN)

  5. Gregor, Karol, et al. "DRAW: A recurrent neural network for image generation." arXiv preprint arXiv:1502.04623 (2015). http://jmlr.org/proceedings/papers/v37/gregor15.pdf

  6. Oord, Aaron van den, Nal Kalchbrenner, and Koray Kavukcuoglu. "Pixel recurrent neural networks." arXiv preprint arXiv:1601.06759 (2016). [http://arxiv.org/pdf/1601.06759] (PixelRNN)

  7. Oord, Aaron van den, et al. "Conditional image generation with PixelCNN decoders." arXiv preprint arXiv:1606.05328 (2016). [https://arxiv.org/pdf/1606.05328] (PixelCNN)


RNN / Sequence-to-Sequence Model

  1. Graves, Alex. "Generating sequences with recurrent neural networks." arXiv preprint arXiv:1308.0850 (2013). [http://arxiv.org/pdf/1308.0850] (LSTM; very nice generation results, shows the power of RNNs)

  2. Cho, Kyunghyun, et al. "Learning phrase representations using RNN encoder-decoder for statistical machine translation." arXiv preprint arXiv:1406.1078 (2014). [http://arxiv.org/pdf/1406.1078] (First Seq-to-Seq paper)

  3. Sutskever, Ilya, Oriol Vinyals, and Quoc V. Le. "Sequence to sequence learning with neural networks." Advances in neural information processing systems. 2014. https://arxiv.org/pdf/1409.3215.pdf

  4. Bahdanau, Dzmitry, KyungHyun Cho, and Yoshua Bengio. "Neural Machine Translation by Jointly Learning to Align and Translate." arXiv preprint arXiv:1409.0473 (2014). [https://arxiv.org/pdf/1409.0473v7.pdf]

  5. Vinyals, Oriol, and Quoc Le. "A neural conversational model." arXiv preprint arXiv:1506.05869 (2015). [http://arxiv.org/pdf/1506.05869.pdf] (Seq-to-Seq on chatbot)


Neural Turing Machine

  1. Graves, Alex, Greg Wayne, and Ivo Danihelka. "Neural turing machines." arXiv preprint arXiv:1410.5401 (2014). http://arxiv.org/pdf/1410.5401.pdf

  2. Zaremba, Wojciech, and Ilya Sutskever. "Reinforcement learning neural Turing machines." arXiv preprint arXiv:1505.00521 362 (2015). [https://pdfs.semanticscholar.org/f10e/071292d593fef939e6ef4a59baf0bb3a6c2b.pdf]

  3. Weston, Jason, Sumit Chopra, and Antoine Bordes. "Memory networks." arXiv preprint arXiv:1410.3916 (2014). [http://arxiv.org/pdf/1410.3916]

  4. Sukhbaatar, Sainbayar, Jason Weston, and Rob Fergus. "End-to-end memory networks." Advances in neural information processing systems. 2015. [http://papers.nips.cc/paper/5846-end-to-end-memory-networks.pdf]

  5. Vinyals, Oriol, Meire Fortunato, and Navdeep Jaitly. "Pointer networks." Advances in Neural Information Processing Systems. 2015. [http://papers.nips.cc/paper/5866-pointer-networks.pdf]

  6. Graves, Alex, et al. "Hybrid computing using a neural network with dynamic external memory." Nature (2016). https://www.dropbox.com/s/0a40xi702grx3dq/2016-graves.pdf


Deep Reinforcement Learning

  1. Mnih, Volodymyr, et al. "Playing atari with deep reinforcement learning." arXiv preprint arXiv:1312.5602 (2013). [http://arxiv.org/pdf/1312.5602.pdf] (First paper named deep reinforcement learning)

  2. Mnih, Volodymyr, et al. "Human-level control through deep reinforcement learning." Nature 518.7540 (2015): 529-533. https://storage.googleapis.com/deepmind-data/assets/papers/DeepMindNature14236Paper.pdf

  3. Wang, Ziyu, Nando de Freitas, and Marc Lanctot. "Dueling network architectures for deep reinforcement learning." arXiv preprint arXiv:1511.06581 (2015). [http://arxiv.org/pdf/1511.06581] (ICLR best paper, great idea)

  4. Mnih, Volodymyr, et al. "Asynchronous methods for deep reinforcement learning." arXiv preprint arXiv:1602.01783 (2016). [http://arxiv.org/pdf/1602.01783] (State-of-the-art method)

  5. Lillicrap, Timothy P., et al. "Continuous control with deep reinforcement learning." arXiv preprint arXiv:1509.02971 (2015). [http://arxiv.org/pdf/1509.02971] (DDPG)

  6. Gu, Shixiang, et al. "Continuous Deep Q-Learning with Model-based Acceleration." arXiv preprint arXiv:1603.00748 (2016). [http://arxiv.org/pdf/1603.00748] (NAF)

  7. Schulman, John, et al. "Trust region policy optimization." CoRR, abs/1502.05477 (2015). http://www.jmlr.org/proceedings/papers/v37/schulman15.pdf

  8. Silver, David, et al. "Mastering the game of Go with deep neural networks and tree search." Nature 529.7587 (2016): 484-489. http://willamette.edu/~levenick/cs448/goNature.pdf



Deep Transfer Learning / Lifelong Learning / especially for RL

  1. Bengio, Yoshua. "Deep Learning of Representations for Unsupervised and Transfer Learning." ICML Unsupervised and Transfer Learning 27 (2012): 17-36. http://www.jmlr.org/proceedings/papers/v27/bengio12a/bengio12a.pdf

  2. Silver, Daniel L., Qiang Yang, and Lianghao Li. "Lifelong Machine Learning Systems: Beyond Learning Algorithms." AAAI Spring Symposium: Lifelong Machine Learning. 2013. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.696.7800&rep=rep1&type=pdf

  3. Hinton, Geoffrey, Oriol Vinyals, and Jeff Dean. "Distilling the knowledge in a neural network." arXiv preprint arXiv:1503.02531 (2015). [http://arxiv.org/pdf/1503.02531] (Godfather's work)

  4. Rusu, Andrei A., et al. "Policy distillation." arXiv preprint arXiv:1511.06295 (2015). [http://arxiv.org/pdf/1511.06295] (RL domain)

  5. Parisotto, Emilio, Jimmy Lei Ba, and Ruslan Salakhutdinov. "Actor-mimic: Deep multitask and transfer reinforcement learning." arXiv preprint arXiv:1511.06342 (2015). [http://arxiv.org/pdf/1511.06342] (RL domain)

  6. Rusu, Andrei A., et al. "Progressive neural networks." arXiv preprint arXiv:1606.04671 (2016). [https://arxiv.org/pdf/1606.04671] (Outstanding work, a novel idea)


One Shot Deep Learning

  1. Lake, Brenden M., Ruslan Salakhutdinov, and Joshua B. Tenenbaum. "Human-level concept learning through probabilistic program induction." Science 350.6266 (2015): 1332-1338. http://clm.utexas.edu/compjclub/wp-content/uploads/2016/02/lake2015.pdf

  2. Koch, Gregory, Richard Zemel, and Ruslan Salakhutdinov. "Siamese Neural Networks for One-shot Image Recognition."(2015) [http://www.cs.utoronto.ca/~gkoch/files/msc-thesis.pdf]

  3. Santoro, Adam, et al. "One-shot Learning with Memory-Augmented Neural Networks." arXiv preprint arXiv:1605.06065 (2016). [http://arxiv.org/pdf/1605.06065] (A basic step to one-shot learning)

  4. Vinyals, Oriol, et al. "Matching Networks for One Shot Learning." arXiv preprint arXiv:1606.04080 (2016). [https://arxiv.org/pdf/1606.04080]

  5. Hariharan, Bharath, and Ross Girshick. "Low-shot visual object recognition." arXiv preprint arXiv:1606.02819 (2016). [http://arxiv.org/pdf/1606.02819] (A step to large data)


NLP (Natural Language Processing)

  1. Antoine Bordes, et al. "Joint Learning of Words and Meaning Representations for Open-Text Semantic Parsing." AISTATS(2012) [https://www.hds.utc.fr/~bordesan/dokuwiki/lib/exe/fetch.php?id=en%3Apubli&cache=cache&media=en:bordes12aistats.pdf]

  2. Mikolov, et al. "Distributed representations of words and phrases and their compositionality." ANIPS(2013): 3111-3119 http://papers.nips.cc/paper/5021-distributed-representations-of-words-and-phrases-and-their-compositionality.pdf

  3. Sutskever, et al. "Sequence to sequence learning with neural networks." ANIPS(2014) [http://papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural-networks.pdf]

  4. Ankit Kumar, et al. "Ask Me Anything: Dynamic Memory Networks for Natural Language Processing." arXiv preprint arXiv:1506.07285 (2015) [https://arxiv.org/abs/1506.07285]

  5. Yoon Kim, et al. "Character-Aware Neural Language Models." NIPS(2015) arXiv preprint arXiv:1508.06615 (2015) [https://arxiv.org/abs/1508.06615]

  6. Jason Weston, et al. "Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks." arXiv preprint arXiv:1502.05698 (2015) [https://arxiv.org/abs/1502.05698] (bAbI tasks)

  7. Karl Moritz Hermann, et al. "Teaching Machines to Read and Comprehend." arXiv preprint arXiv:1506.03340 (2015) [https://arxiv.org/abs/1506.03340] (CNN/DailyMail cloze-style questions)

  8. Alexis Conneau, et al. "Very Deep Convolutional Networks for Natural Language Processing." arXiv preprint arXiv:1606.01781 (2016) [https://arxiv.org/abs/1606.01781] (state-of-the-art in text classification)

  9. Armand Joulin, et al. "Bag of Tricks for Efficient Text Classification." arXiv preprint arXiv:1607.01759 (2016) [https://arxiv.org/abs/1607.01759] (slightly worse than state-of-the-art, but a lot faster)



Object Detection

  1. Szegedy, Christian, Alexander Toshev, and Dumitru Erhan. "Deep neural networks for object detection." Advances in Neural Information Processing Systems. 2013. [http://papers.nips.cc/paper/5207-deep-neural-networks-for-object-detection.pdf]

  2. Girshick, Ross, et al. "Rich feature hierarchies for accurate object detection and semantic segmentation." Proceedings of the IEEE conference on computer vision and pattern recognition. 2014. http://www.cv-foundation.org/openaccess/content_cvpr_2014/papers/Girshick_Rich_Feature_Hierarchies_2014_CVPR_paper.pdf

  3. He, Kaiming, et al. "Spatial pyramid pooling in deep convolutional networks for visual recognition." European Conference on Computer Vision. Springer International Publishing, 2014. [http://arxiv.org/pdf/1406.4729] (SPPNet)

  4. Girshick, Ross. "Fast r-cnn." Proceedings of the IEEE International Conference on Computer Vision. 2015. [https://pdfs.semanticscholar.org/8f67/64a59f0d17081f2a2a9d06f4ed1cdea1a0ad.pdf]

  5. Ren, Shaoqing, et al. "Faster R-CNN: Towards real-time object detection with region proposal networks." Advances in neural information processing systems. 2015. [http://papers.nips.cc/paper/5638-analysis-of-variational-bayesian-latent-dirichlet-allocation-weaker-sparsity-than-map.pdf]

  6. Redmon, Joseph, et al. "You only look once: Unified, real-time object detection." arXiv preprint arXiv:1506.02640 (2015). http://homes.cs.washington.edu/~ali/papers/YOLO.pdf

  7. Liu, Wei, et al. "SSD: Single Shot MultiBox Detector." arXiv preprint arXiv:1512.02325 (2015). [http://arxiv.org/pdf/1512.02325]

  8. Dai, Jifeng, et al. "R-FCN: Object Detection via Region-based Fully Convolutional Networks." arXiv preprint arXiv:1605.06409 (2016). [https://arxiv.org/abs/1605.06409]

  9. He, Gkioxari, et al. "Mask R-CNN." ICCV 2017 Best Paper (2017). [https://arxiv.org/abs/1703.06870]



Visual Tracking

  1. Wang, Naiyan, and Dit-Yan Yeung. "Learning a deep compact image representation for visual tracking." Advances in neural information processing systems. 2013. http://papers.nips.cc/paper/5192-learning-a-deep-compact-image-representation-for-visual-tracking.pdf

  2. Wang, Naiyan, et al. "Transferring rich feature hierarchies for robust visual tracking." arXiv preprint arXiv:1501.04587 (2015). [http://arxiv.org/pdf/1501.04587] (SO-DLT)

  3. Wang, Lijun, et al. "Visual tracking with fully convolutional networks." Proceedings of the IEEE International Conference on Computer Vision. 2015. http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/Wang_Visual_Tracking_With_ICCV_2015_paper.pdf

  4. Held, David, Sebastian Thrun, and Silvio Savarese. "Learning to Track at 100 FPS with Deep Regression Networks." arXiv preprint arXiv:1604.01802 (2016). [http://arxiv.org/pdf/1604.01802] (GOTURN; really fast for a deep learning method, but still behind non-deep-learning trackers)

  5. Bertinetto, Luca, et al. "Fully-Convolutional Siamese Networks for Object Tracking." arXiv preprint arXiv:1606.09549 (2016). [https://arxiv.org/pdf/1606.09549] (SiameseFC, new state-of-the-art for real-time object tracking)

  6. Martin Danelljan, Andreas Robinson, Fahad Khan, Michael Felsberg. "Beyond Correlation Filters: Learning Continuous Convolution Operators for Visual Tracking." ECCV (2016) http://www.cvl.isy.liu.se/research/objrec/visualtracking/conttrack/C-COT_ECCV16.pdf

  7. Nam, Hyeonseob, Mooyeol Baek, and Bohyung Han. "Modeling and Propagating CNNs in a Tree Structure for Visual Tracking." arXiv preprint arXiv:1608.07242 (2016). [https://arxiv.org/pdf/1608.07242] (VOT2016 winner, TCNN)


Image Caption

  1. Farhadi, Ali, et al. "Every picture tells a story: Generating sentences from images". In Computer Vision - ECCV 2010. Springer Berlin Heidelberg: 15-29, 2010. [https://www.cs.cmu.edu/~afarhadi/papers/sentence.pdf]

  2. Kulkarni, Girish, et al. "Baby talk: Understanding and generating image descriptions". In Proceedings of the 24th CVPR, 2011. [http://tamaraberg.com/papers/generation_cvpr11.pdf]

  3. Vinyals, Oriol, et al. "Show and tell: A neural image caption generator". In arXiv preprint arXiv:1411.4555, 2014. [https://arxiv.org/pdf/1411.4555.pdf]

  4. Donahue, Jeff, et al. "Long-term recurrent convolutional networks for visual recognition and description". In arXiv preprint arXiv:1411.4389 ,2014. [https://arxiv.org/pdf/1411.4389.pdf]

  5. Karpathy, Andrej, and Li Fei-Fei. "Deep visual-semantic alignments for generating image descriptions". In arXiv preprint arXiv:1412.2306, 2014. [https://cs.stanford.edu/people/karpathy/cvpr2015.pdf]

  6. Karpathy, Andrej, Armand Joulin, and Fei Fei F. Li. "Deep fragment embeddings for bidirectional image sentence mapping". In Advances in neural information processing systems, 2014. [https://arxiv.org/pdf/1406.5679v1.pdf]

  7. Fang, Hao, et al. "From captions to visual concepts and back". In arXiv preprint arXiv:1411.4952, 2014. [https://arxiv.org/pdf/1411.4952v3.pdf]

  8. Chen, Xinlei, and C. Lawrence Zitnick. "Learning a recurrent visual representation for image caption generation". In arXiv preprint arXiv:1411.5654, 2014. [https://arxiv.org/pdf/1411.5654v1.pdf]

  9. Mao, Junhua, et al. "Deep captioning with multimodal recurrent neural networks (m-rnn)". In arXiv preprint arXiv:1412.6632, 2014. [https://arxiv.org/pdf/1412.6632v5.pdf]

  10. Xu, Kelvin, et al. "Show, attend and tell: Neural image caption generation with visual attention". In arXiv preprint arXiv:1502.03044, 2015. [https://arxiv.org/pdf/1502.03044v3.pdf]


Machine Translation

  1. Luong, Minh-Thang, et al. "Addressing the rare word problem in neural machine translation." arXiv preprint arXiv:1410.8206 (2014). [http://arxiv.org/pdf/1410.8206]

  2. Sennrich, et al. "Neural Machine Translation of Rare Words with Subword Units". In arXiv preprint arXiv:1508.07909, 2015. [https://arxiv.org/pdf/1508.07909.pdf]

  3. Luong, Minh-Thang, Hieu Pham, and Christopher D. Manning. "Effective approaches to attention-based neural machine translation." arXiv preprint arXiv:1508.04025 (2015). [http://arxiv.org/pdf/1508.04025]

  4. Chung, et al. "A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation". In arXiv preprint arXiv:1603.06147, 2016. [https://arxiv.org/pdf/1603.06147.pdf]

  5. Lee, et al. "Fully Character-Level Neural Machine Translation without Explicit Segmentation". In arXiv preprint arXiv:1610.03017, 2016. [https://arxiv.org/pdf/1610.03017.pdf]

  6. Wu, Schuster, Chen, Le, et al. "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". In arXiv preprint arXiv:1609.08144v2, 2016. https://arxiv.org/pdf/1609.08144v2.pdf



Robotics

  1. Koutník, Jan, et al. "Evolving large-scale neural networks for vision-based reinforcement learning." Proceedings of the 15th annual conference on Genetic and evolutionary computation. ACM, 2013. [http://repository.supsi.ch/4550/1/koutnik2013gecco.pdf]

  2. Levine, Sergey, et al. "End-to-end training of deep visuomotor policies." Journal of Machine Learning Research 17.39 (2016): 1-40. [http://www.jmlr.org/papers/volume17/15-522/15-522.pdf]

  3. Pinto, Lerrel, and Abhinav Gupta. "Supersizing self-supervision: Learning to grasp from 50k tries and 700 robot hours." arXiv preprint arXiv:1509.06825 (2015). [http://arxiv.org/pdf/1509.06825]

  4. Levine, Sergey, et al. "Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection." arXiv preprint arXiv:1603.02199 (2016). [http://arxiv.org/pdf/1603.02199]

  5. Zhu, Yuke, et al. "Target-driven Visual Navigation in Indoor Scenes using Deep Reinforcement Learning." arXiv preprint arXiv:1609.05143 (2016). [https://arxiv.org/pdf/1609.05143]

  6. Yahya, Ali, et al. "Collective Robot Reinforcement Learning with Distributed Asynchronous Guided Policy Search." arXiv preprint arXiv:1610.00673 (2016). [https://arxiv.org/pdf/1610.00673]

  7. Gu, Shixiang, et al. "Deep Reinforcement Learning for Robotic Manipulation." arXiv preprint arXiv:1610.00633 (2016). [https://arxiv.org/pdf/1610.00633]

  8. A Rusu, M Vecerik, Thomas Rothörl, N Heess, R Pascanu, R Hadsell."Sim-to-Real Robot Learning from Pixels with Progressive Nets." arXiv preprint arXiv:1610.04286 (2016). [https://arxiv.org/pdf/1610.04286.pdf]

  9. Mirowski, Piotr, et al. "Learning to navigate in complex environments." arXiv preprint arXiv:1611.03673 (2016). [https://arxiv.org/pdf/1611.03673]


Object Segmentation

  1. J. Long, E. Shelhamer, and T. Darrell, “Fully convolutional networks for semantic segmentation.” in CVPR, 2015. [https://arxiv.org/pdf/1411.4038v2.pdf]

  2. L.-C. Chen, G. Papandreou, I. Kokkinos, K. Murphy, and A. L. Yuille. "Semantic image segmentation with deep convolutional nets and fully connected crfs." In ICLR, 2015. [https://arxiv.org/pdf/1606.00915v1.pdf]

  3. Pinheiro, P.O., Collobert, R., Dollar, P. "Learning to segment object candidates." In: NIPS. 2015. [https://arxiv.org/pdf/1506.06204v2.pdf]

  4. Dai, J., He, K., Sun, J. "Instance-aware semantic segmentation via multi-task network cascades." in CVPR. 2016 [https://arxiv.org/pdf/1512.04412v1.pdf]

  5. Dai, J., He, K., Sun, J. "Instance-sensitive Fully Convolutional Networks." arXiv preprint arXiv:1603.08678 (2016). [https://arxiv.org/pdf/1603.08678v1.pdf]



Surveys

  1. LeCun, Yann, Yoshua Bengio, and Geoffrey Hinton. "Deep learning." Nature 521.7553 (2015): 436-444. (Three Giants' Survey)

  2. Representation Learning: A Review and New Perspectives, Yoshua Bengio, Aaron Courville, Pascal Vincent, arXiv, 2012.



Tutorial

  1. UFLDL Tutorial 1[http://deeplearning.stanford.edu/wiki/index.php/UFLDL_Tutorial]

  2. UFLDL Tutorial 2[http://ufldl.stanford.edu/tutorial/supervised/LinearRegression/]

  3. Deep Learning for NLP (without Magic)[http://www.socher.org/index.php/DeepLearningTutorial/DeepLearningTutorial]

  4. A Deep Learning Tutorial: From Perceptrons to Deep Networks[http://www.toptal.com/machine-learning/an-introduction-to-deep-learning-from-perceptrons-to-deep-networks]

  5. Deep Learning from the Bottom up[http://www.metacademy.org/roadmaps/rgrosse/deep_learning]

  6. Theano Tutorial[http://deeplearning.net/tutorial/deeplearning.pdf]

  7. Neural Networks for Matlab[http://uk.mathworks.com/help/pdf_doc/nnet/nnet_ug.pdf]

  8. Using convolutional neural nets to detect facial keypoints tutorial[http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/]

  9. PyTorch Tutorials[http://pytorch.org/tutorials/]

  10. The Best Machine Learning Tutorials On The Web[https://github.com/josephmisiti/machine-learning-module]

  11. VGG Convolutional Neural Networks Practical[http://www.robots.ox.ac.uk/~vgg/practicals/cnn/index.html]

  12. TensorFlow tutorials[https://github.com/nlintz/TensorFlow-Tutorials]

  13. More TensorFlow tutorials[https://github.com/pkmital/tensorflow_tutorials]

  14. TensorFlow Python Notebooks[https://github.com/aymericdamien/TensorFlow-Examples]

  15. Keras and Lasagne Deep Learning Tutorials[https://github.com/Vict0rSch/deep_learning]

  16. Classification on raw time series in TensorFlow with a LSTM RNN[https://github.com/guillaume-chevalier/LSTM-Human-Activity-Recognition]

  17. Using convolutional neural nets to detect facial keypoints tutorial[http://danielnouri.org/notes/2014/12/17/using-convolutional-neural-nets-to-detect-facial-keypoints-tutorial/]

  18. TensorFlow-World[https://github.com/astorfi/TensorFlow-World]

  19. Deep Learning NIPS'2015 Tutorial, by Geoff Hinton, Yoshua Bengio & Yann LeCun (co-presented by the three giants of deep learning) [http://www.iro.umontreal.ca/~bengioy/talks/DL-Tutorial-NIPS2015.pdf]


Video Tutorials

Courses

  1. Machine Learning - Stanford [https://class.coursera.org/ml-005] by Andrew Ng in Coursera (2010-2014)

  2. Machine Learning - Caltech [http://work.caltech.edu/lectures.html] by Yaser Abu-Mostafa (2012-2014)

  3. Machine Learning - Carnegie Mellon [http://www.cs.cmu.edu/~tom/10701_sp11/lectures.shtml] by Tom Mitchell (Spring 2011)

  4. Neural Networks for Machine Learning [https://class.coursera.org/neuralnets-2012-001] by Geoffrey Hinton in Coursera (2012)

  5. Neural networks class[https://www.youtube.com/playlist?list=PL6Xpj9I5qXYEcOhn7TqghAJ6NAPrNmUBH] by Hugo Larochelle from Université de Sherbrooke (2013)

  6. Deep Learning Course[http://cilvr.cs.nyu.edu/doku.php?id=deeplearning:slides:start] by CILVR lab @ NYU (2014)

  7. A.I - Berkeley[https://courses.edx.org/courses/BerkeleyX/CS188x_1/1T2013/courseware/] by Dan Klein and Pieter Abbeel (2013)

  8. A.I - MIT[http://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-034-artificial-intelligence-fall-2010/lecture-videos/] by Patrick Henry Winston (2010)

  9. Vision and learning - computers and brains [http://web.mit.edu/course/other/i2course/www/vision_and_learning_fall_2013.html] by Shimon Ullman, Tomaso Poggio, Ethan Meyers @ MIT (2013)

  10. Convolutional Neural Networks for Visual Recognition - Stanford[http://vision.stanford.edu/teaching/cs231n/syllabus_winter2015.html] by Fei-Fei Li, Andrej Karpathy (2015)

  11. Convolutional Neural Networks for Visual Recognition - Stanford[http://vision.stanford.edu/teaching/cs231n/syllabus.html] by Fei-Fei Li, Andrej Karpathy (2016)

  12. Deep Learning for Natural Language Processing - Stanford[http://cs224d.stanford.edu/]

  13. Neural Networks - usherbrooke[http://info.usherbrooke.ca/hlarochelle/neural_networks/content.html]

  14. Machine Learning - Oxford [https://www.cs.ox.ac.uk/people/nando.defreitas/machinelearning/]

  15. Deep Learning - Nvidia [https://developer.nvidia.com/deep-learning-courses]

  16. Graduate Summer School: Deep Learning, Feature Learning [https://www.youtube.com/playlist?list=PLHyI3Fbmv0SdzMHAy0aN59oYnLy5vyyTA] by Geoffrey Hinton, Yoshua Bengio, Yann LeCun, Andrew Ng, Nando de Freitas and several others @ IPAM, UCLA (2012)

  17. Deep Learning - Udacity/Google[https://www.udacity.com/course/deep-learning--ud730] by Vincent Vanhoucke and Arpan Chakraborty (2016)


Videos and Lectures

  1. How To Create A Mind[https://www.youtube.com/watch?v=RIkxVci-R4k] By Ray Kurzweil

  2. Deep Learning, Self-Taught Learning and Unsupervised Feature Learning[https://www.youtube.com/watch?v=n1ViNeWhC24] By Andrew Ng

  3. Recent Developments in Deep Learning [https://www.youtube.com/watch?v=vShMxxqtDDs&index=3&list=PL78U8qQHXgrhP9aZraxTT5-X1RccTcUYT] By Geoff Hinton

  4. The Unreasonable Effectiveness of Deep Learning[https://www.youtube.com/watch?v=sc-KbuZqGkI] by Yann LeCun

  5. Deep Learning of Representations [https://www.youtube.com/watch?v=4xsVFLnHC_0] by Yoshua Bengio

  6. Principles of Hierarchical Temporal Memory[https://www.youtube.com/watch?v=6ufPpZDmPKA] by Jeff Hawkins

  7. Machine Learning Discussion Group - Deep Learning w/ Stanford AI Lab [https://www.youtube.com/watch?v=2QJi0ArLq7s&list=PL78U8qQHXgrhP9aZraxTT5-X1RccTcUYT] by Adam Coates

  8. Making Sense of the World with Deep Learning[http://vimeo.com/80821560] By Adam Coates

  9. Demystifying Unsupervised Feature Learning [https://www.youtube.com/watch?v=wZfVBwOO0-k] By Adam Coates

  10. Visual Perception with Deep Learning [https://www.youtube.com/watch?v=3boKlkPBckA] By Yann LeCun

  11. The Next Generation of Neural Networks[https://www.youtube.com/watch?v=AyzOUbkUf3M] By Geoffrey Hinton at GoogleTechTalks

  12. The wonderful and terrifying implications of computers that can learn[http://www.ted.com/talks/jeremy_howard_the_wonderful_and_terrifying_implications_of_computers_that_can_learn] By Jeremy Howard at TEDxBrussels

  13. Unsupervised Deep Learning - Stanford[http://web.stanford.edu/class/cs294a/handouts.html] by Andrew Ng in Stanford (2011)

  14. Natural Language Processing[http://web.stanford.edu/class/cs224n/handouts/] By Chris Manning in Stanford

  15. A beginners Guide to Deep Neural Networks[http://googleresearch.blogspot.com/2015/09/a-beginners-guide-to-deep-neural.html] By Natalie Hammel and Lorraine Yurshansky

  16. Deep Learning: Intelligence from Big Data [https://www.youtube.com/watch?v=czLI3oLDe8M] by Steve Jurvetson (and panel) at VLAB in Stanford

  17. Introduction to Artificial Neural Networks and Deep Learning[https://www.youtube.com/watch?v=FoO8qDB8gUU] by Leo Isikdogan at Motorola Mobility HQ

  18. NIPS 2016 lecture and workshop videos [https://nips.cc/Conferences/2016/Schedule] - NIPS 2016


Code

  1. Caffe[http://caffe.berkeleyvision.org/]

  2. Torch7[http://torch.ch/]

  3. Theano[http://deeplearning.net/software/theano/]

  4. cuda-convnet[https://code.google.com/p/cuda-convnet2/]

  5. convnetjs[https://github.com/karpathy/convnetjs]

  6. Ccv[http://libccv.org/doc/doc-convnet/]

  7. NuPIC[http://numenta.org/nupic.html]

  8. DeepLearning4J[http://deeplearning4j.org/]

  9. Brain[https://github.com/harthur/brain]

  10. DeepLearnToolbox[https://github.com/rasmusbergpalm/DeepLearnToolbox]

  11. Deepnet[https://github.com/nitishsrivastava/deepnet]

  12. Deeppy[https://github.com/andersbll/deeppy]

  13. JavaNN[https://github.com/ivan-vasilev/neuralnetworks]

  14. hebel[https://github.com/hannes-brt/hebel]

  15. Mocha.jl[https://github.com/pluskid/Mocha.jl]

  16. OpenDL[https://github.com/guoding83128/OpenDL]

  17. cuDNN[https://developer.nvidia.com/cuDNN]

  18. MGL[http://melisgl.github.io/mgl-pax-world/mgl-manual.html]

  19. Knet.jl[https://github.com/denizyuret/Knet.jl]

  20. Nvidia DIGITS - a web app based on Caffe[https://github.com/NVIDIA/DIGITS]

  21. Neon - Python based Deep Learning Framework[https://github.com/NervanaSystems/neon]

  22. Keras - Theano based Deep Learning Library[http://keras.io]

  23. Chainer - A flexible framework of neural networks for deep learning[http://chainer.org/]

  24. RNNLM Toolkit[http://rnnlm.org/]

  25. RNNLIB - A recurrent neural network library[http://sourceforge.net/p/rnnl/wiki/Home/]

  26. char-rnn[https://github.com/karpathy/char-rnn]

  27. MatConvNet: CNNs for MATLAB[https://github.com/vlfeat/matconvnet]

  28. Minerva - a fast and flexible tool for deep learning on multi-GPU[https://github.com/dmlc/minerva]

  29. Brainstorm - Fast, flexible and fun neural networks.[https://github.com/IDSIA/brainstorm]

  30. TensorFlow - Open source software library for numerical computation using data flow graphs[https://github.com/tensorflow/tensorflow]

  31. DMTK - Microsoft Distributed Machine Learning Toolkit[https://github.com/Microsoft/DMTK]

  32. Scikit Flow - Simplified interface for TensorFlow [mimicking Scikit Learn][https://github.com/google/skflow]

  33. MXnet - Lightweight, Portable, Flexible Distributed/Mobile Deep Learning framework[https://github.com/dmlc/mxnet/]

  34. Veles - Samsung Distributed machine learning platform[https://github.com/Samsung/veles]

  35. Marvin - A Minimalist GPU-only N-Dimensional ConvNets Framework[https://github.com/PrincetonVision/marvin]

  36. Apache SINGA - A General Distributed Deep Learning Platform[http://singa.incubator.apache.org/]

  37. DSSTNE - Amazon's library for building Deep Learning models[https://github.com/amznlabs/amazon-dsstne]

  38. SyntaxNet - Google's syntactic parser - A TensorFlow dependency library[https://github.com/tensorflow/models/tree/master/syntaxnet]

  39. mlpack - A scalable Machine Learning library[http://mlpack.org/]

  40. Torchnet - Torch based Deep Learning Library[https://github.com/torchnet/torchnet]

  41. Paddle - PArallel Distributed Deep LEarning by Baidu[https://github.com/baidu/paddle]

  42. NeuPy - Theano based Python library for ANN and Deep Learning[http://neupy.com]

  43. Lasagne - a lightweight library to build and train neural networks in Theano[https://github.com/Lasagne/Lasagne]

  44. nolearn - wrappers and abstractions around existing neural network libraries, most notably Lasagne[https://github.com/dnouri/nolearn]

  45. Sonnet - a library for constructing neural networks by Google's DeepMind[https://github.com/deepmind/sonnet]

  46. PyTorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration[https://github.com/pytorch/pytorch]

  47. CNTK - Microsoft Cognitive Toolkit[https://github.com/Microsoft/CNTK]
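
To give a feel for how the frameworks listed above are used, here is a minimal, illustrative PyTorch sketch (PyTorch is item 46); the network, sizes, and random data are invented for demonstration and are not part of the original list.

```python
# Minimal PyTorch sketch: define a tiny network and run one training step.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyNet(nn.Module):
    def __init__(self):
        super(TinyNet, self).__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        return self.fc2(F.relu(self.fc1(x)))

model = TinyNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
criterion = nn.CrossEntropyLoss()

x = torch.randn(32, 784)            # random stand-in for a batch of inputs
y = torch.randint(0, 10, (32,))     # random stand-in class labels
loss = criterion(model(x), y)

optimizer.zero_grad()
loss.backward()
optimizer.step()
print(loss.item())
```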



Domain Experts

  1. Andrej Karpathy [http://cs.stanford.edu/~karpathy/]

  2. Andrew M. Saxe [http://www.stanford.edu/~asaxe/]

  3. Andrew Ng [http://www.cs.stanford.edu/people/ang/]

  4. Andrew W. Senior [http://research.google.com/pubs/author37792.html]

  5. Andriy Mnih [http://www.gatsby.ucl.ac.uk/~amnih/]

  6. Ayse Naz Erkan [http://www.cs.nyu.edu/~naz/]

  7. Benjamin Schrauwen [http://reslab.elis.ugent.be/benjamin]

  8. Bernardete Ribeiro [https://www.cisuc.uc.pt/people/show/2020]

  9. Bo David Chen [http://vision.caltech.edu/~bchen3/Site/Bo_David_Chen.html]

  10. Boureau Y-Lan [http://cs.nyu.edu/~ylan/]

  11. Brian Kingsbury [http://researcher.watson.ibm.com/researcher/view.php?person=us-bedk]

  12. Christopher Manning [http://nlp.stanford.edu/~manning/]

  13. Clement Farabet [http://www.clement.farabet.net/]

  14. Dan Claudiu Cireșan [http://www.idsia.ch/~ciresan/]

  15. David Reichert [http://serre-lab.clps.brown.edu/person/david-reichert/]

  16. Derek Rose [http://mil.engr.utk.edu/nmil/member/5.html]

  17. Dong Yu [http://research.microsoft.com/en-us/people/dongyu/default.aspx]

  18. Drausin Wulsin [http://www.seas.upenn.edu/~wulsin/]

  19. Erik M. Schmidt [http://music.ece.drexel.edu/people/eschmidt]

  20. Eugenio Culurciello [https://engineering.purdue.edu/BME/People/viewPersonById?resource_id=71333]

  21. Frank Seide [http://research.microsoft.com/en-us/people/fseide/]

  22. Galen Andrew [http://homes.cs.washington.edu/~galen/]

  23. Geoffrey Hinton [http://www.cs.toronto.edu/~hinton/]

  24. George Dahl [http://www.cs.toronto.edu/~gdahl/]

  25. Graham Taylor [http://www.uoguelph.ca/~gwtaylor/]

  26. Grégoire Montavon [http://gregoire.montavon.name/]

  27. Guido Francisco Montúfar [http://personal-homepages.mis.mpg.de/montufar/]

  28. Guillaume Desjardins [http://brainlogging.wordpress.com/]

  29. Hannes Schulz [http://www.ais.uni-bonn.de/~schulz/]

  30. Hélène Paugam-Moisy [http://www.lri.fr/~hpaugam/]

  31. Honglak Lee [http://web.eecs.umich.edu/~honglak/]

  32. Hugo Larochelle [http://www.dmi.usherb.ca/~larocheh/index_en.html]

  33. Ilya Sutskever [http://www.cs.toronto.edu/~ilya/]

  34. Itamar Arel [http://mil.engr.utk.edu/nmil/member/2.html]

  35. James Martens [http://www.cs.toronto.edu/~jmartens/]

  36. Jason Morton [http://www.jasonmorton.com/]

  37. Jason Weston [http://www.thespermwhale.com/jaseweston/]

  38. Jeff Dean [http://research.google.com/pubs/jeff.html]

  39. Jiquan Mgiam [http://cs.stanford.edu/~jngiam/]

  40. Joseph Turian [http://www-etud.iro.umontreal.ca/~turian/]

  41. Joshua Matthew Susskind [http://aclab.ca/users/josh/index.html]

  42. Jürgen Schmidhuber [http://www.idsia.ch/~juergen/]

  43. Justin A. Blanco [https://sites.google.com/site/blancousna/]

  44. Koray Kavukcuoglu [http://koray.kavukcuoglu.org/]

  45. KyungHyun Cho [http://users.ics.aalto.fi/kcho/]

  46. Li Deng [http://research.microsoft.com/en-us/people/deng/]

  47. Lucas Theis [http://www.kyb.tuebingen.mpg.de/nc/employee/details/lucas.html]

  48. Ludovic Arnold [http://ludovicarnold.altervista.org/home/]

  49. Marc'Aurelio Ranzato [http://www.cs.nyu.edu/~ranzato/]

  50. Martin Längkvist [http://aass.oru.se/~mlt/]

  51. Misha Denil [http://mdenil.com/]

  52. Mohammad Norouzi [http://www.cs.toronto.edu/~norouzi/]

  53. Nando de Freitas [http://www.cs.ubc.ca/~nando/]

  54. Navdeep Jaitly [http://www.cs.utoronto.ca/~ndjaitly/]

  55. Nicolas Le Roux [http://nicolas.le-roux.name/]

  56. Nitish Srivastava [http://www.cs.toronto.edu/~nitish/]

  57. Noel Lopes [https://www.cisuc.uc.pt/people/show/2028]

  58. Oriol Vinyals [http://www.cs.berkeley.edu/~vinyals/]

  59. Pascal Vincent [http://www.iro.umontreal.ca/~vincentp]

  60. Patrick Nguyen [https://sites.google.com/site/drpngx/]

  61. Pedro Domingos [http://homes.cs.washington.edu/~pedrod/]

  62. Peggy Series [http://homepages.inf.ed.ac.uk/pseries/]

  63. Pierre Sermanet [http://cs.nyu.edu/~sermanet]

  64. Piotr Mirowski [http://www.cs.nyu.edu/~mirowski/]

  65. Quoc V. Le [http://ai.stanford.edu/~quocle/]

  66. Reinhold Scherer [http://bci.tugraz.at/scherer/]

  67. Richard Socher [http://www.socher.org/]

  68. Rob Fergus [http://cs.nyu.edu/~fergus/pmwiki/pmwiki.php]

  69. Robert Coop [http://mil.engr.utk.edu/nmil/member/19.html]

  70. Robert Gens [http://homes.cs.washington.edu/~rcg/]

  71. Roger Grosse [http://people.csail.mit.edu/rgrosse/]

  72. Ronan Collobert [http://ronan.collobert.com/]

  73. Ruslan Salakhutdinov [http://www.utstat.toronto.edu/~rsalakhu/]

  74. Sebastian Gerwinn [http://www.kyb.tuebingen.mpg.de/nc/employee/details/sgerwinn.html]

  75. Stéphane Mallat [http://www.cmap.polytechnique.fr/~mallat/]

  76. Sven Behnke [http://www.ais.uni-bonn.de/behnke/]

  77. Tapani Raiko [http://users.ics.aalto.fi/praiko/]

  78. Tara Sainath [https://sites.google.com/site/tsainath/]

  79. Tijmen Tieleman [http://www.cs.toronto.edu/~tijmen/]

  80. Tom Karnowski [http://mil.engr.utk.edu/nmil/member/36.html]

  81. Tomáš Mikolov [https://research.facebook.com/tomas-mikolov]

  82. Ueli Meier [http://www.idsia.ch/~meier/]

  83. Vincent Vanhoucke [http://vincent.vanhoucke.com]

  84. Volodymyr Mnih [http://www.cs.toronto.edu/~vmnih/]

  85. Yann LeCun [http://yann.lecun.com/]

  86. Yichuan Tang [http://www.cs.toronto.edu/~tang/]

  87. Yoshua Bengio [http://www.iro.umontreal.ca/~bengioy/yoshua_en/index.html]

  88. Yotaro Kubo [http://yota.ro/]

  89. Youzhi [Will] Zou [http://ai.stanford.edu/~wzou]

  90. Fei-Fei Li [http://vision.stanford.edu/feifeili]

  91. Ian Goodfellow [https://research.google.com/pubs/105214.html]

  92. Robert Laganière [http://www.site.uottawa.ca/~laganier/]


Important Websites

  1. deeplearning.net[http://deeplearning.net/]

  2. deeplearning.stanford.edu[http://deeplearning.stanford.edu/]

  3. nlp.stanford.edu[http://nlp.stanford.edu/]

  4. ai-junkie.com[http://www.ai-junkie.com/ann/evolved/nnt1.html]

  5. cs.brown.edu/research/ai[http://cs.brown.edu/research/ai/]

  6. eecs.umich.edu/ai[http://www.eecs.umich.edu/ai/]

  7. cs.utexas.edu/users/ai-lab[http://www.cs.utexas.edu/users/ai-lab/]

  8. cs.washington.edu/research/ai[http://www.cs.washington.edu/research/ai/]

  9. aiai.ed.ac.uk[http://www.aiai.ed.ac.uk/]

  10. www-aig.jpl.nasa.gov[http://www-aig.jpl.nasa.gov/]

  11. csail.mit.edu[http://www.csail.mit.edu/]

  12. cgi.cse.unsw.edu.au/~aishare[http://cgi.cse.unsw.edu.au/~aishare/]

  13. cs.rochester.edu/research/ai[http://www.cs.rochester.edu/research/ai/]

  14. ai.sri.com[http://www.ai.sri.com/]

  15. isi.edu/AI/isd.htm[http://www.isi.edu/AI/isd.htm]

  16. nrl.navy.mil/itd/aic[http://www.nrl.navy.mil/itd/aic/]

  17. hips.seas.harvard.edu[http://hips.seas.harvard.edu/]

  18. AI Weekly[http://aiweekly.co]

  19. stat.ucla.edu[http://www.stat.ucla.edu/~junhua.mao/m-RNN.html]

  20. deeplearning.cs.toronto.edu[http://deeplearning.cs.toronto.edu/i2t]

  21. jeffdonahue.com/lrcn/[http://jeffdonahue.com/lrcn/]

  22. visualqa.org[http://www.visualqa.org/]

  23. www.mpi-inf.mpg.de/departments/computer-vision...[https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/]

  24. Deep Learning News[http://news.startup.ml/]

  25. Machine Learning is Fun! Adam Geitgey's Blog[https://medium.com/@ageitgey/]


Free Online Books

  1. Deep Learning[http://www.iro.umontreal.ca/~bengioy/dlbook/] by Yoshua Bengio, Ian Goodfellow and Aaron Courville [05/07/2015] Chinese translation: [https://github.com/exacity/deeplearningbook-chinese]

  2. Neural Networks and Deep Learning[http://neuralnetworksanddeeplearning.com/] by Michael Nielsen [Dec 2014]

  3. Deep Learning[http://research.microsoft.com/pubs/209355/DeepLearning-NowPublishing-Vol7-SIG-039.pdf] by Microsoft Research [2013]

  4. Deep Learning Tutorial[http://deeplearning.net/tutorial/deeplearning.pdf] by LISA lab, University of Montreal [Jan 6 2015]

  5. neuraltalk[https://github.com/karpathy/neuraltalk] by Andrej Karpathy : numpy-based RNN/LSTM implementation

  6. An introduction to genetic algorithms[https://svn-d1.mpi-inf.mpg.de/AG1/MultiCoreLab/papers/ebook-fuzzy-mitchell-99.pdf]

  7. Artificial Intelligence: A Modern Approach[http://aima.cs.berkeley.edu/]

  8. Deep Learning in Neural Networks: An Overview[http://arxiv.org/pdf/1404.7828v4.pdf]


Datasets

  1. MNIST[http://yann.lecun.com/exdb/mnist/] Handwritten digits

  2. Google House Numbers[http://ufldl.stanford.edu/housenumbers/] from street view

  3. CIFAR-10 and CIFAR-100[http://www.cs.toronto.edu/~kriz/cifar.html]

  4. IMAGENET[http://www.image-net.org/]

  5. Tiny Images[http://groups.csail.mit.edu/vision/TinyImages/] 80 million tiny images

  6. Flickr Data[https://yahooresearch.tumblr.com/post/89783581601/one-hundred-million-creative-commons-flickr-images] 100 Million Yahoo dataset

  7. Berkeley Segmentation Dataset 500[http://www.eecs.berkeley.edu/Research/Projects/CS/vision/bsds/]

  8. UC Irvine Machine Learning Repository[http://archive.ics.uci.edu/ml/]

  9. Flickr 8k[http://nlp.cs.illinois.edu/HockenmaierGroup/Framing_Image_Description/KCCA.html]

  10. Flickr 30k[http://shannon.cs.illinois.edu/DenotationGraph/]

  11. Microsoft COCO[http://mscoco.org/home/]

  12. VQA[http://www.visualqa.org/]

  13. Image QA[http://www.cs.toronto.edu/~mren/imageqa/data/cocoqa/]

  14. AT&T Laboratories Cambridge face database[http://www.uk.research.att.com/facedatabase.html]

  15. AVHRR Pathfinder[http://xtreme.gsfc.nasa.gov]

  16. Air Freight[http://www.anc.ed.ac.uk/~amos/afreightdata.html] - The Air Freight data set is a ray-traced image sequence along with ground truth segmentation based on textural characteristics. [455 images + GT, each 160x120 pixels]. [Formats: PNG]

  17. Amsterdam Library of Object Images[http://www.science.uva.nl/~aloi/] - ALOI is a color image collection of one-thousand small objects, recorded for scientific purposes. In order to capture the sensory variation in object recordings, we systematically varied viewing angle, illumination angle, and illumination color for each object, and additionally captured wide-baseline stereo images. We recorded over a hundred images of each object, yielding a total of 110,250 images for the collection. [Formats: png]

  18. Annotated face, hand, cardiac & meat images[http://www.imm.dtu.dk/~aam/] - Most images & annotations are supplemented by various ASM/AAM analyses using the AAM-API. [Formats: bmp,asf]

  19. Image Analysis and Computer Graphics[http://www.imm.dtu.dk/image/]

  20. Brown University Stimuli[http://www.cog.brown.edu/~tarr/stimuli.html] - A variety of datasets including geons, objects, and "greebles". Good for testing recognition algorithms. [Formats: pict]

  21. CAVIAR video sequences of mall and public space behavior[http://homepages.inf.ed.ac.uk/rbf/CAVIARDATA1/] - 90K video frames in 90 sequences of various human activities, with XML ground truth of detection and behavior classification [Formats: MPEG2 & JPEG]

  22. Machine Vision Unit[http://www.ipab.inf.ed.ac.uk/mvu/]

  23. CCITT Fax standard images[http://www.cs.waikato.ac.nz/~singlis/ccitt.html] - 8 images [Formats: gif]

  24. CMU CIL's Stereo Data with Ground Truth[cil-ster.html] - 3 sets of 11 images, including color tiff images with spectroradiometry [Formats: gif, tiff]

  25. CMU PIE Database[http://www.ri.cmu.edu/projects/project_418.html] - A database of 41,368 face images of 68 people captured under 13 poses, 43 illumination conditions, and with 4 different expressions.

  26. CMU VASC Image Database[http://www.ius.cs.cmu.edu/idb/] - Images, sequences, stereo pairs [thousands of images] [Formats: Sun Rasterimage]

  27. Caltech Image Database[http://www.vision.caltech.edu/html-files/archive.html] - about 20 images - mostly top-down views of small objects and toys. [Formats: GIF]

  28. Columbia-Utrecht Reflectance and Texture Database[http://www.cs.columbia.edu/CAVE/curet/] - Texture and reflectance measurements for over 60 samples of 3D texture, observed with over 200 different combinations of viewing and illumination directions. [Formats: bmp]

  29. Computational Colour Constancy Data[http://www.cs.sfu.ca/~colour/data/index.html] - A dataset oriented towards computational color constancy, but useful for computer vision in general. It includes synthetic data, camera sensor data, and over 700 images. [Formats: tiff]

  30. Computational Vision Lab[http://www.cs.sfu.ca/~colour/]

  31. Content-based image retrieval database[http://www.cs.washington.edu/research/imagedatabase/groundtruth/] - 11 sets of color images for testing algorithms for content-based retrieval. Most sets have a description file with names of objects in each image. [Formats: jpg]

  32. Efficient Content-based Retrieval Group[http://www.cs.washington.edu/research/imagedatabase/]

  33. Densely Sampled View Spheres[http://ls7-www.cs.uni-dortmund.de/~peters/pages/research/modeladaptsys/modeladaptsys_vba_rov.html] - Densely sampled view spheres - upper half of the view sphere of two toy objects with 2500 images each. [Formats: tiff]

  34. Computer Science VII [Graphical Systems][http://ls7-www.cs.uni-dortmund.de/]

  35. Digital Embryos[https://web-beta.archive.org/web/20011216051535/vision.psych.umn.edu/www/kersten-lab/demos/digitalembryo.html] - Digital embryos are novel objects which may be used to develop and test object recognition systems. They have an organic appearance. [Formats: various formats are available on request]

  36. University of Minnesota Vision Lab[http://vision.psych.umn.edu/www/kersten-lab/kersten-lab.html]

  37. El Salvador Atlas of Gastrointestinal VideoEndoscopy[http://www.gastrointestinalatlas.com] - High-resolution images and videos from gastrointestinal video endoscopy studies. [Formats: jpg, mpg, gif]

  38. FG-NET Facial Aging Database[http://sting.cycollege.ac.cy/~alanitis/fgnetaging/index.htm] - Database contains 1002 face images showing subjects at different ages. [Formats: jpg]

  39. FVC2000 Fingerprint Databases[http://bias.csr.unibo.it/fvc2000/] - FVC2000 is the First International Competition for Fingerprint Verification Algorithms. Four fingerprint databases constitute the FVC2000 benchmark [3520 fingerprints in all].

  40. Biometric Systems Lab[http://bias.csr.unibo.it/research/biolab] - University of Bologna

  41. Face and Gesture images and image sequences[http://www.fg-net.org] - Several image datasets of faces and gestures that are ground truth annotated for benchmarking

  42. German Fingerspelling Database[http://www-i6.informatik.rwth-aachen.de/~dreuw/database.html] - The database contains 35 gestures and consists of 1400 image sequences that contain gestures of 20 different persons recorded under non-uniform daylight lighting conditions. [Formats: mpg,jpg]

  43. Language Processing and Pattern Recognition[http://www-i6.informatik.rwth-aachen.de/]



Special note:

Due to space limits, the full collection cannot be shown here. For the complete version, visit www.zhuanzhi.ai or tap "Read the original" (阅读原文), then search for the "深度学习" (deep learning) topic at the top of the page to access the complete 专知 deep learning collection and related materials.


In addition, please follow the 专知 WeChat public account (scan the 专知 QR code at the bottom, or tap the blue "专知" link at the top), and

  • reply "dl" to the account to receive the PDF download link for the 专知 deep learning collection.


For more 专知 collections, see:

[Curated Collection] Machine Learning & Deep Learning Resources, Part 1 (Papers / Tutorials / Code / Books / Data / Courses, etc.)

[GAN Collection] Generative Adversarial Networks Resources (Papers / Code / Tutorials / Videos / Articles, etc.)

[Featured] ICCV 2017 talk by Ian Goodfellow, the father of GANs at Google: the principles and applications of generative adversarial networks

[AlphaGo Zero Core Technology] Deep Reinforcement Learning Resources (Papers / Code / Tutorials / Videos / Articles, etc.)


Feel free to forward this to your WeChat groups and Moments to share professional AI knowledge!

Scan the assistant's QR code to join the 专知 AI group for discussion and sharing.

For more machine learning and artificial intelligence resources, visit www.zhuanzhi.ai or tap "Read the original".

-END-

Welcome to 专知

专知, a new way of acquiring knowledge! Currently focused on artificial intelligence, it provides AI practitioners with professional and trustworthy knowledge-distribution services, including topic customization, topic link paths, and search and discovery, helping you find the knowledge you need quickly and well.


How to use >> Visit www.zhuanzhi.ai, or tap "Read the original" at the bottom of this article, to access 专知.


专知 Team, Institute of Automation, Chinese Academy of Sciences

© 2017 专知

专 · 知

Follow our public account for the latest news, techniques, algorithms, and in-depth content about 专知 and artificial intelligence. Scan the QR code below to follow our WeChat public account.


Tap "Read the original" to use 专知

