
DiDi X BDD | WAD 2019 Pre-conference Seminar is coming!


The CVPR 2019 Workshop on Autonomous Driving (WAD) aims to gather researchers and engineers from academia and industry to discuss the latest advances in perception for autonomous driving. This pre-conference seminar will feature the presentation of current computer vision and AI research activities at UC Berkeley and DiDi.

Agenda 

Time: 3:30-5:30 pm, April 25
Venue: DiDi, Beijing, China
Language: English

15:30-17:30   WAD 2019 Pre-conference Seminar


  • Updates on CVPR 2019 WAD Challenge

  • Adaptive and Curious Deep Learning for Perception, Action, and Explanation, Prof. Trevor Darrell, Director, Berkeley DeepDrive

  • Towards a Better Journey: AI Empowers DiDi In-Vehicle Big Data, Dr. Jieping Ye, Head of DiDi AI Labs, VP of Didi Chuxing & Dr. Zhengping Che, Senior Research Scientist, DiDi AI Labs

  • When AI meets AV (Autonomous Vehicles), Dr. Ching-Yao Chan, Co-Director, Berkeley DeepDrive

  • Towards Human-Level Recognition via Contextual, Dynamic, and Predictive Representations, Dr. Fisher Yu, Post-Doctoral Researcher, UC Berkeley


Given the limited capacity of the venue, we may not be able to accommodate all registrants. Upon successful registration, you will be notified of the logistics. We look forward to seeing you at DiDi!


Scan the QR Code below or hit the bottom-left "Read more" tab to sign up now!  




Adaptive and Curious Deep Learning for Perception, Action and Explanation  

Speaker

Prof. Trevor Darrell



Bio

Prof. Darrell is on the faculty of the CS and EE Divisions of the EECS Department at UC Berkeley. He leads Berkeley’s DeepDrive (BDD) Industrial Consortia, is co-Director of the Berkeley Artificial Intelligence Research (BAIR) lab, and is Faculty Director of PATH at UC Berkeley. Darrell’s group develops algorithms for large-scale perceptual learning, including object and activity recognition and detection, for a variety of applications including autonomous vehicles, media search, and multimodal interaction with robots and mobile devices. His areas of interest include computer vision, machine learning, natural language processing, and perception-based human-computer interfaces. Prof. Darrell previously led the vision group at the International Computer Science Institute in Berkeley and was on the faculty of the MIT EECS department from 1999 to 2008, where he directed the Vision Interface Group. He was a member of the research staff at Interval Research Corporation from 1996 to 1999, and received his S.M. and Ph.D. degrees from MIT in 1992 and 1996, respectively. He obtained his B.S.E. degree from the University of Pennsylvania in 1988.

Abstract

Learning of layered or "deep" representations has provided significant advances in computer vision in recent years, but has traditionally been limited to fully supervised settings with very large amounts of training data, where the model lacked interpretability. New results in adversarial adaptive representation learning show how such methods can also excel when learning across modalities and domains, and further can be trained or constrained to provide natural language explanations or multimodal visualizations to their users.  I'll present recent compositional network models that learn instance-specific network structures to solve individual tasks, and models for self-supervised policy learning that leverage intrinsic rewards defined using curiosity.
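As background for the curiosity-driven policy learning mentioned above, here is a minimal illustrative sketch (our own example, not code from the talk) of one common formulation: the intrinsic "curiosity" reward is the prediction error of a learned forward dynamics model in feature space, so the agent is rewarded for visiting transitions its model cannot yet predict. The module names and dimensions below are hypothetical.

```python
# Minimal sketch of a curiosity-style intrinsic reward (illustrative only).
# The reward is the forward model's prediction error in a learned feature space.
import torch
import torch.nn as nn

obs_dim, feat_dim, act_dim = 16, 32, 4  # toy sizes, chosen for illustration

encoder = nn.Linear(obs_dim, feat_dim)                   # observation -> features
forward_model = nn.Linear(feat_dim + act_dim, feat_dim)  # (features, action) -> next features

def intrinsic_reward(obs, action, next_obs):
    """Curiosity bonus: how badly the forward model predicts the next features."""
    phi = encoder(obs)
    phi_next = encoder(next_obs)
    pred_next = forward_model(torch.cat([phi, action], dim=-1))
    # Per-sample squared error; typically added to the task reward during policy learning.
    return 0.5 * ((pred_next - phi_next.detach()) ** 2).sum(dim=-1)

obs = torch.randn(8, obs_dim)
action = torch.randn(8, act_dim)
next_obs = torch.randn(8, obs_dim)
print(intrinsic_reward(obs, action, next_obs).shape)  # torch.Size([8])
```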


Towards a Better Journey: AI Empowers DiDi In-Vehicle Big Data 

Speakers

Dr. Jieping Ye




Dr. Zhengping Che


Bio

Dr. Jieping Ye is Head of DiDi AI Labs and a VP of Didi Chuxing. He is also an associate professor at the University of Michigan, Ann Arbor. His research interests include big data, machine learning, and data mining, with applications in transportation and biomedicine. He has served as a Senior Program Committee member, Area Chair, or Program Committee Vice Chair of many conferences, including NIPS, ICML, KDD, IJCAI, AAAI, ICDM, and SDM. He serves as an Associate Editor of Data Mining and Knowledge Discovery, IEEE Transactions on Knowledge and Data Engineering, and IEEE Transactions on Pattern Analysis and Machine Intelligence. He won the NSF CAREER Award in 2010. His papers were selected for the Outstanding Student Paper Award at ICML 2004, the KDD Best Research Paper Runner-Up in 2013, and the KDD Best Student Paper Award in 2014.




Dr. Zhengping Che is a senior research scientist at DiDi AI Labs. He received his Ph.D. degree in Computer Science from the University of Southern California. Before that, he received his B.E. degree in Computer Science from the Pilot CS Class (Yao Class), Tsinghua University. His current research interests lie in a wide variety of applications of machine learning, deep learning, and data mining on temporal data and vision data.

Abstract

As the world's leading transportation platform, DiDi has processed a massive amount of data. DiDi AI Labs aims to develop cutting-edge AI technologies and solutions to improve travel safety and experience for millions of people in China and across the world. In this talk, we will discuss some recent progress and explorations in computer vision and intelligent driving at DiDi AI Labs. We will present our work on utilizing and integrating front-facing and in-vehicle dashcam videos, GPS/IMU records, and mapping services to build effective solutions to practical computer vision problems, including traffic scenario perception and understanding, driving behavior modeling, and safety factor assessment and analysis, and on applying big data and artificial intelligence technologies to smart transportation, driving safety, and intelligent vehicles, towards the goal of a safe and pleasurable journey for everyone.


When AI meets AV (Autonomous Vehicles)

Speaker

Dr. Ching-Yao Chan



Bio

Dr. Ching-Yao Chan is Co-Director, along with Prof. Trevor Darrell and Prof. Kurt Keutzer, of Berkeley DeepDrive. Dr. Chan leads research projects on machine learning applications in autonomous driving and manages the BDD infrastructure projects in data and experimental vehicles. He is also a Researcher and a Program Manager at California PATH (Partners for Advanced Transportation Technology), a pioneering organization that has spearheaded research on intelligent transportation systems since 1986. At PATH, Dr. Chan leads research projects in vehicle automation, advanced vehicular technologies, human factors, and traffic systems. Dr. Chan has three decades of research experience in a broad range of automotive and transportation systems, spanning driver-assistance and automated driving systems, sensing and wireless communication technologies, and highway network safety assessment.

Abstract

The speaker will present his perspectives on the latest developments in machine learning and its applications to automated driving. His talk may include the following highlights:

  • Overview of Research Activities at Berkeley DeepDrive (BDD)

  • Discussions of Use Cases of Machine Learning for Autonomous Driving

  • Challenges and Prospects of AI and AV


Towards Human-Level Recognition via Contextual, Dynamic, and Predictive Representations

Speaker

Dr. Fisher Yu


Bio

Dr. Fisher Yu is a postdoctoral researcher at UC Berkeley. He received his Ph.D. degree from Princeton University and his bachelor's degree from the University of Michigan, Ann Arbor. His research interests lie in image representation learning and interactive data processing systems. His work focuses on seeking connections between computer vision problems and building unified image representation frameworks. Through the lens of image representation, he also studies high-level understanding of dynamic 3D scenes. He serves as a reviewer for major conferences and journals in computer vision, machine learning, and robotics, and has led the organization of multiple CVPR workshops. More information about his work can be found on his homepage: https://www.yf.io.

Abstract

Existing state-of-the-art computer vision models usually specialize in single domains or tasks, while human-level recognition is contextual across diverse scales and tasks. This specialization isolates different vision tasks and hinders the deployment of robust and effective vision systems. In this talk, I will discuss contextual image representations for different scales and tasks through the lens of pixel-level prediction. These connections, built by the study of dilated convolutions and deep layer aggregation, can interpret convolutional network behaviors and lead to model frameworks applicable to a wide range of tasks. Beyond context, I will argue that image representations should also be dynamic and predictive. I will illustrate the case with input-dependent dynamic networks, which lead to new insights into the relationship between zero-shot/few-shot learning and network pruning, and with semantic predictive control, which utilizes prediction for better driving policy learning. To conclude, I will discuss ongoing system and algorithm investigations that couple representation learning with real-world interaction to build intelligent agents that can continuously learn from and interact with the world.
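As a small piece of background on the dilated convolutions mentioned above, the sketch below (our own illustration, not the speaker's code) shows the property they exploit for pixel-level prediction: increasing the dilation rate enlarges a convolution's receptive field without adding parameters or reducing spatial resolution.

```python
# Illustrative sketch: dilation enlarges the receptive field of a 3x3 convolution
# while keeping the parameter count and the output resolution unchanged.
import torch
import torch.nn as nn

x = torch.randn(1, 3, 64, 64)  # dummy image-like input

conv_standard = nn.Conv2d(3, 16, kernel_size=3, padding=1, dilation=1)  # 3x3 receptive field
conv_dilated = nn.Conv2d(3, 16, kernel_size=3, padding=2, dilation=2)   # 5x5 receptive field

print(conv_standard(x).shape)  # torch.Size([1, 16, 64, 64])
print(conv_dilated(x).shape)   # torch.Size([1, 16, 64, 64]) -- resolution preserved
print(sum(p.numel() for p in conv_standard.parameters()),
      sum(p.numel() for p in conv_dilated.parameters()))  # same parameter count
```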

Related Reading

DiDi X BDD Launch the CVPR 2019 WAD Autonomous Driving Challenge




Editor | 熊书乔

