[Academic Lecture] What's So Hard About Natural Language Understanding?

南大NLP 2022-04-24
Due to a last-minute schedule change, the academic lecture by Associate Professor Alan Ritter of Georgia Tech has been postponed. Originally scheduled for 8:30-9:30 a.m. on June 9, 2021, it will now take place 8:30-9:30 a.m. on June 16, 2021. The meeting link is unchanged; all faculty and students are welcome to attend!

Tencent Meeting link: https://meeting.tencent.com/s/K5MfxbPITUuG
Meeting ID: 636 269 952
Meeting password: 123321


Speaker



Alan Ritter is an associate professor in the School of Interactive Computing at Georgia Tech. His research interests include natural language processing, information extraction, and machine learning. He completed his Ph.D. at the University of Washington and was a postdoctoral fellow in the Machine Learning Department at Carnegie Mellon. 
His research aims to solve challenging technical problems that can help machines learn to read vast quantities of text with minimal supervision. In a recent project, covered by WIRED [1], his group built a system that reads millions of tweets for mentions of new software vulnerabilities. He is the recipient of an NSF CAREER award and an Amazon Research Award.
[1] https://www.wired.com/story/machine-learning-tweets-critical-security-flaws/


Abstract


In recent years, advances in speech recognition and machine translation (MT) have led to wide adoption, for example, by helping people issue voice commands to their phone and talk with people who do not speak the same language. These advances are possible due to the use of neural network methods on large, high-quality datasets. However, computers still struggle to understand the meaning of language. 
In this talk, I will present two efforts to scale up natural language understanding, drawing inspiration from recent successes in speech and MT. First, I will describe an effort to extract structured knowledge from text, without relying on human labels. Our approach combines the benefits of structured learning and neural networks, accurately predicting latent relation mentions given only indirect supervision from a knowledge base. In extensive experiments, we demonstrate that the combination of structured inference, missing data modeling, and end-to-end learned representations leads to state-of-the-art results on a minimally supervised relation extraction task. 
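To make the idea of "indirect supervision from a knowledge base" concrete, the toy sketch below illustrates classic distant supervision: any sentence that mentions both entities of a KB triple is treated as a noisy positive example for that triple's relation. This is only an illustrative assumption about the setup, not the speaker's system; the KB facts, sentences, and function names are made up for the example.

```python
# Minimal sketch of distant supervision for relation extraction.
# Sentences mentioning both entities of a KB triple are treated as
# (noisy) positive examples of that relation. All data here is
# illustrative, not from the talk.

from collections import defaultdict

# Hypothetical knowledge-base facts: (subject, relation, object)
kb_triples = [
    ("OpenSSL", "has_vulnerability", "CVE-2014-0160"),
    ("Barack Obama", "born_in", "Honolulu"),
]

# Hypothetical raw sentences to align against the KB.
sentences = [
    "The Heartbleed bug, tracked as CVE-2014-0160, affects OpenSSL.",
    "Barack Obama was born in Honolulu, Hawaii.",
    "Honolulu hosted a conference attended by Barack Obama.",  # likely a false positive
]

def distantly_label(sentences, kb_triples):
    """Return noisy training examples: every sentence containing both
    entities of a KB triple is labeled with that triple's relation."""
    examples = defaultdict(list)
    for sent in sentences:
        for subj, rel, obj in kb_triples:
            if subj in sent and obj in sent:
                examples[rel].append((subj, obj, sent))
    return examples

if __name__ == "__main__":
    for rel, mentions in distantly_label(sentences, kb_triples).items():
        for subj, obj, sent in mentions:
            print(f"{rel}({subj}, {obj}) <- {sent}")
```

Note how the third sentence is labeled born_in even though it does not express that relation; this kind of label noise is exactly why the abstract emphasizes modeling latent relation mentions and missing data rather than trusting the KB alignment directly.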
In the second part of the talk, I will discuss conversational agents that are learned from scratch in a purely data-driven way. To address the challenge of dull responses, which are common in neural dialogue, I will present several strategies that maximize the long-term success of a conversation.
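As background on the "dull response" problem mentioned above, one widely used mitigation (not necessarily the strategy presented in the talk) is to rerank candidate replies with a maximum-mutual-information-style score that penalizes responses a context-free language model finds likely. The sketch below uses made-up candidates and hand-picked log-probabilities purely to show the scoring rule.

```python
# Toy illustration of MMI-style reranking to discourage generic replies.
# The candidates and log-probabilities are invented for this example;
# in practice they would come from a seq2seq model and a language model.

def mmi_score(log_p_response_given_context, log_p_response, lam=0.5):
    """score = log p(r | context) - lam * log p(r).
    Generic replies ("I don't know.") have high log p(r), so they are penalized."""
    return log_p_response_given_context - lam * log_p_response

# Hypothetical candidates mapped to (log p(r|c), log p(r)).
candidates = {
    "I don't know.":             (-2.0, -1.0),  # likely, but generic
    "Try rebooting the router.": (-3.0, -6.0),  # less likely, but specific
}

best = max(candidates, key=lambda r: mmi_score(*candidates[r]))
print("Selected response:", best)  # the specific reply wins after reranking
```

Reranking of this kind addresses dullness at decoding time; the strategies the abstract alludes to for "long-term success of a conversation" instead shape what the model is optimized for, which is a complementary approach.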

