
[Academic Talk] What's So Hard About Natural Language Understanding?

南大NLP 2022-04-24
Due to a last-minute schedule change, the academic talk by Associate Professor Alan Ritter of Georgia Tech has been postponed. Originally scheduled for 8:30–9:30 a.m. on June 9, 2021, it will now take place 8:30–9:30 a.m. on June 16, 2021. The meeting link is unchanged; all faculty and students are welcome to attend!

Tencent Meeting link: https://meeting.tencent.com/s/K5MfxbPITUuG
Meeting ID: 636 269 952
Password: 123321


Speaker



Alan Ritter is an associate professor in the School of Interactive Computing at Georgia Tech. His research interests include natural language processing, information extraction, and machine learning. He completed his Ph.D. at the University of Washington and was a postdoctoral fellow in the Machine Learning Department at Carnegie Mellon. 
His research aims to solve challenging technical problems that can help machines learn to read vast quantities of text with minimal supervision. In a recent project, covered by WIRED [1], his group built a system that reads millions of tweets for mentions of new software vulnerabilities. He is the recipient of an NSF CAREER award and an Amazon Research Award. 
[1] https://www.wired.com/story/machine-learning-tweets-critical-security-flaws/


Abstract


In recent years, advances in speech recognition and machine translation (MT) have led to wide adoption, for example, by helping people issue voice commands to their phone and talk with people who do not speak the same language. These advances are possible due to the use of neural network methods on large, high-quality datasets. However, computers still struggle to understand the meaning of language. 
In this talk, I will present two efforts to scale up natural language understanding, drawing inspiration from recent successes in speech and MT. First, I will describe an effort to extract structured knowledge from text, without relying on human labels. Our approach combines the benefits of structured learning and neural networks, accurately predicting latent relation mentions given only indirect supervision from a knowledge base. In extensive experiments, we demonstrate that the combination of structured inference, missing data modeling, and end-to-end learned representations leads to state-of-the-art results on a minimally supervised relation extraction task. 
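The "indirect supervision from a knowledge base" mentioned above is commonly realized through distant supervision: sentences mentioning an entity pair that the knowledge base links with a relation are weakly labeled with that relation. The following is a minimal illustrative sketch of that labeling step only, with a hypothetical toy knowledge base and sentences; it is not the structured-inference model described in the talk.

```python
# Minimal sketch of distant-supervision labeling for relation extraction.
# Hypothetical toy knowledge base: (head entity, tail entity) -> relation.
kb = {
    ("Alan Ritter", "Georgia Tech"): "works_at",
    ("Seattle", "Washington"): "located_in",
}

sentences = [
    "Alan Ritter is a professor at Georgia Tech .",
    "Seattle is the largest city in Washington .",
    "Alan Ritter gave a talk yesterday .",
]

def distant_label(sentence, kb):
    """Weakly label a sentence with every KB relation whose
    head and tail entities both appear in the text."""
    labels = []
    for (head, tail), relation in kb.items():
        if head in sentence and tail in sentence:
            labels.append((head, tail, relation))
    return labels

# Build weakly labeled training pairs; sentences matching no KB
# entity pair get an empty label list (possible false negatives).
training_data = [(s, distant_label(s, kb)) for s in sentences]
```

Such weak labels are noisy (a sentence may mention both entities without expressing the relation, and KB gaps create false negatives), which is why the talk's combination of structured inference and missing-data modeling matters.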
In the second part of the talk, I will discuss conversational agents that are learned from scratch in a purely data-driven way. To address the challenge of dull responses, which are common in neural dialogue, I will present several strategies that maximize the long-term success of a conversation.

