[1] F. Petroni, T. Rocktäschel, S. Riedel, P. Lewis, A. Bakhtin, Y. Wu, and A. Miller, “Language models as knowledge bases?” in Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019, pp. 2463–2473.
[2] Y. Zhang, A. Warstadt, X. Li, and S. Bowman, “When do you need billions of words of pretraining data?” in Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), 2021, pp. 1112–1125.
[3] F. Petroni, P. Lewis, A. Piktus, T. Rocktäschel, Y. Wu, A. H. Miller, and S. Riedel, “How context affects language models’ factual predictions,” in Automated Knowledge Base Construction, 2020.
[4] Z. Jiang, F. F. Xu, J. Araki, and G. Neubig, “How can we know what language models know?” Transactions of the Association for Computational Linguistics, vol. 8, 2020.
[5] K. Guu, K. Lee, Z. Tung, P. Pasupat, and M.-W. Chang, “REALM: Retrieval-augmented language model pre-training,” in Proceedings of the 37th International Conference on Machine Learning (ICML), 2020.
[6] M. Ghazvininejad, C. Brockett, M.-W. Chang, B. Dolan, J. Gao, W.-t. Yih, and M. Galley, “A knowledge-grounded neural conversation model,” in Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence and Thirtieth Innovative Applications of Artificial Intelligence Conference and Eighth AAAI Symposium on Educational Advances in Artificial Intelligence, 2018, pp. 5110–5117.
[7] Z. Li, C. Niu, F. Meng, Y. Feng, Q. Li, and J. Zhou, “Incremental transformer with deliberation decoder for document grounded conversations,” in Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 2019, pp. 12–21.
[8] K. Shuster, S. Poff, M. Chen, D. Kiela, and J. Weston, “Retrieval augmentation reduces hallucination in conversation,” in Findings of the Association for Computational Linguistics: EMNLP 2021, 2021, pp. 3784–3803.
[9] J. Jung, B. Son, and S. Lyu, “AttnIO: Knowledge graph exploration with in-and-out attention flow for knowledge-grounded dialogue,” in Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), 2020, pp. 3484–3497.