
[Live Stream] 2021 Young Scholars Forum of the HUST School of Computer Science and the Third "TIME · Young Faculty" Salon

蔻享学术 2022-07-02





To strengthen academic exchange and collaboration between young scholars of our school and those of the School of Computing at the National University of Singapore, and to further advance the development of the computer science and technology discipline, the School of Computer Science and Technology at Huazhong University of Science and Technology will hold the "2021 Joint Symposium on Hot Topics On Computer System Research" (the joint young scholars forum of the HUST School of Computer Science and the NUS School of Computing, and the third session of the "TIME · Young Faculty" salon) online on December 23, 2021, from 8:30 to 11:30 a.m. The event is co-organized by the school's Young Faculty Association.
The forum aims to provide a platform for young computer science scholars at home and abroad to exchange ideas and present their work. Through invited talks, discussions, and Q&A sessions centered on frontier topics in computer systems research, it seeks to promote exchange among young scholars, strengthen international collaboration in computer science, and support the development of young faculty.

Live Stream Information


Forum Title

2021 Young Scholars Forum of the HUST School of Computer Science and the Third "TIME · Young Faculty" Salon

Time

December 23, 2021, 8:30-11:30 a.m.

Co-organizer

Young Faculty Association of the School of Computer Science and Technology


Speaker Introductions


Talk 1



Bingsheng He (National University of Singapore, Dean's Chair Associate Professor)


Topic: Parallel Graph Processing Systems on Heterogeneous Architectures

Bio: Bingsheng He is a Dean's Chair Associate Professor and Vice-Dean of the School of Computing, NUS. His current research interests include cloud computing, database systems, and high performance computing. He is an ACM Distinguished Member and has served on the editorial boards of international journals including IEEE Transactions on Cloud Computing, IEEE Transactions on Parallel and Distributed Systems, IEEE Transactions on Knowledge and Data Engineering, the Springer Journal of Distributed and Parallel Databases, and ACM Computing Surveys.

Abstract: Graphs are de facto data structures for many data processing applications, and their volume is ever growing. Many graph processing tasks are computation intensive and/or memory intensive. Therefore, we have witnessed a significant amount of effort in accelerating graph processing tasks with heterogeneous architectures such as GPUs, FPGAs, and even ASICs. In this talk, we will first review the literature on large graph processing systems on heterogeneous architectures. Next, we demonstrate the significant performance impact of hardware-software co-design on designing high-performance graph computation systems and applications. Finally, we outline the research agenda on challenges and opportunities in the system and application development of future graph processing.
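For a flavor of the workloads these systems target, here is a minimal Python sketch of level-synchronous BFS, the frontier-at-a-time pattern that GPU graph frameworks typically parallelize (the graph and vertex numbering are invented for illustration):

```python
from collections import defaultdict

def bfs_levels(edges, source):
    """Level-synchronous BFS: each iteration expands one whole frontier.
    All vertices in a frontier can be processed concurrently, which is
    what GPU/FPGA graph systems exploit."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    level = {source: 0}
    frontier = [source]
    depth = 0
    while frontier:
        depth += 1
        next_frontier = []
        for u in frontier:          # on a GPU: one thread per frontier vertex
            for v in adj[u]:
                if v not in level:  # on hardware this needs an atomic visit flag
                    level[v] = depth
                    next_frontier.append(v)
        frontier = next_frontier
    return level

print(bfs_levels([(0, 1), (0, 2), (1, 3), (2, 3), (3, 4)], 0))
# {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}
```

The irregular, data-dependent memory accesses in the inner loop are exactly why hardware-software co-design matters for graph workloads.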


Talk 2



Yuchong Hu (Huazhong University of Science and Technology, Professor, School of Computer Science and Technology)

Topic: Exploiting Combined Locality for Wide-Stripe Erasure Coding in Distributed Storage


Bio: Yuchong Hu is a Professor in the School of Computer Science and Technology at the Huazhong University of Science and Technology (HUST). His research mainly focuses on designing and implementing storage systems with both high performance and dependability based on erasure coding and deduplication techniques; these systems include cloud storage, big data storage, in-memory key-value stores, backup storage, and blockchain storage.

Abstract: Erasure coding is a low-cost redundancy mechanism for distributed storage systems that stores stripes of data and parity chunks. Wide stripes have recently been proposed to suppress the fraction of parity chunks in a stripe and achieve extreme storage savings. However, wide stripes aggravate the repair penalty, and existing repair-efficient approaches for erasure coding cannot effectively address them. In this paper, we propose combined locality, the first mechanism that systematically addresses the wide-stripe repair problem by combining parity locality and topology locality. We further augment combined locality with efficient encoding and update schemes. Experiments on Amazon EC2 show that combined locality reduces the single-chunk repair time by up to 90.5% compared to state-of-the-art locality-based approaches, with a redundancy as low as 1.063×.
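For intuition on why wide stripes save storage but hurt repair, here is a small Python sketch (the code parameters are illustrative, not the paper's actual configuration):

```python
def redundancy(k, m):
    """Redundancy of a (k + m, k) erasure code: total chunks / data chunks."""
    return (k + m) / k

# A conventional stripe, e.g. Reed-Solomon with k=6 data and m=3 parity chunks:
print(f"RS(6,3):   {redundancy(6, 3):.4f}x, repair reads k = 6 chunks")
# A wide stripe keeps m small relative to a much larger k, approaching 1x
# redundancy -- but a plain Reed-Solomon repair must now read k = 128 chunks,
# which is the repair penalty that combined locality targets.
print(f"RS(128,8): {redundancy(128, 8):.4f}x, repair reads k = 128 chunks")
```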


Talk 3


Yang You (National University of Singapore, Presidential Young Professor, School of Computing)

Topic: Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training

Bio: Yang You is a Presidential Young Professor in the School of Computing, NUS. His research interests include parallel/distributed algorithms, high performance computing, and machine learning. His current research focuses on scaling up deep neural network training on distributed systems and supercomputers. In 2017, his team broke the world record for ImageNet training speed; in 2019, his team broke the world record for BERT training speed. He has won the IPDPS 2015 Best Paper Award (0.8%), the ICPP 2018 Best Paper Award (0.3%), the ACM/IEEE George Michael HPC Fellowship, the IEEE CS TCHPC Early Career Researchers Award for Excellence in High Performance Computing, a Siebel Scholarship, and the Lotfi A. Zadeh Prize. He was also named to the Forbes 30 Under 30 Asia list.

Abstract: The Transformer architecture has improved the performance of deep learning models in domains such as computer vision and natural language processing. Better performance, however, has come with larger model sizes, which strain the memory wall of current accelerator hardware such as GPUs. There is thus an urgent demand to train models in a distributed environment; yet distributed training, especially model parallelism, often requires domain expertise in computer systems and architecture.
In this talk, we will introduce Colossal-AI, a unified parallel training system designed to seamlessly integrate different paradigms of parallelization, including data parallelism, pipeline parallelism, multiple forms of tensor parallelism, and sequence parallelism.
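To make the most familiar of these paradigms concrete, here is a minimal NumPy sketch of one data-parallel SGD step (a generic illustration, not Colossal-AI's API): each worker computes a gradient on its shard of the batch, and the gradients are then averaged, which is the role an all-reduce plays in a real system.

```python
import numpy as np

def data_parallel_step(w, x, y, n_workers, lr=0.1):
    """One data-parallel SGD step for least-squares regression: shard the
    batch across workers, compute per-shard gradients, then average them
    before a single shared weight update."""
    shards_x = np.array_split(x, n_workers)
    shards_y = np.array_split(y, n_workers)
    grads = [2 * xs.T @ (xs @ w - ys) / len(xs)       # one entry per worker
             for xs, ys in zip(shards_x, shards_y)]
    return w - lr * np.mean(grads, axis=0)            # the "all-reduce" + update

w0 = np.zeros(2)
x = np.array([[1., 0.], [0., 1.], [1., 1.], [2., 1.]])
y = np.array([1., 2., 3., 4.])
# With equal-sized shards, 1 worker and 2 workers give identical updates:
print(np.allclose(data_parallel_step(w0, x, y, 1),
                  data_parallel_step(w0, x, y, 2)))  # True
```

Model, pipeline, tensor, and sequence parallelism instead split the network itself (layers, weight matrices, or the sequence dimension) across devices, which is where the systems expertise mentioned above comes in.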


Talk 4


Jialin Li (National University of Singapore, Assistant Professor, School of Computing)

Topic: Co-Designing Distributed Systems with Programmable Networking Hardware

Bio: Jialin Li is an Assistant Professor in the School of Computing, NUS. He finished his Ph.D. at the University of Washington in 2019 and received his B.S.E. in Computer Engineering from the University of Michigan in 2012. As part of his dissertation work, he built practical distributed systems that offer both strong semantics and high performance by co-designing them with new-generation programmable hardware. He is a recipient of Best Paper Awards at OSDI and NSDI.

Abstract: Software processing on CPUs has become the performance bottleneck of many large-scale distributed systems deployed in data centers. In this talk, we will introduce a new approach to designing distributed systems in data centers that tackles this challenge: co-designing distributed systems with the data center network. Specifically, my work has taken advantage of new-generation programmable switches in data centers to build novel network-level primitives with near-zero processing overhead. We then leverage these primitives to enable more efficient protocol and system designs. I will describe several systems we built that demonstrate the benefit of this approach, and will end the talk with our most recent work on accelerating permissioned blockchain systems using this co-design approach.


Talk 5


Liangfeng Cheng (Ph.D. student)

Topic: LogECMem: Coupling Erasure-Coded In-Memory Key-Value Stores with Parity Logging

Bio: Liangfeng Cheng is a Ph.D. student at Huazhong University of Science and Technology (HUST), advised by Prof. Yuchong Hu. He received his B.Eng. degree from HUST in 2017 and entered its combined master's and doctoral program in 2019. His research mainly focuses on computer architecture and cloud storage, including erasure coding, in-memory key-value stores, and data deduplication.

Abstract: In-memory key-value stores are often used to speed up big data workloads on modern HPC clusters. To maintain their high availability, erasure coding has recently been adopted as a low-cost redundancy scheme instead of replication. Existing erasure-coded update schemes, however, have either low performance or high memory overhead. In this paper, we propose a novel parity-logging-based architecture, HybridPL, which creates a hybrid of in-place updates (for data and XOR parity chunks) and log-based updates (for the remaining parity chunks), so as to balance update performance and memory cost while maintaining efficient single-failure repairs. We realize HybridPL as an in-memory key-value store called LogECMem and further design efficient repair schemes for multiple failures. We prototype LogECMem and conduct experiments on different workloads, showing that it achieves better update performance than existing erasure-coded update schemes with low memory overhead, while maintaining high basic I/O and repair performance.
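A toy Python sketch of the hybrid idea, heavily simplified from the HybridPL design described above (the class and its fields are invented for illustration): the XOR parity is updated in place on every write, while the deltas destined for the remaining parities are merely appended to a log to be merged lazily.

```python
class HybridUpdateSketch:
    """Toy hybrid update scheme: in-place XOR parity + logged parity deltas."""

    def __init__(self, data):
        self.data = list(data)             # data chunks (ints as stand-ins)
        self.xor_parity = 0
        for d in data:
            self.xor_parity ^= d           # parity maintained in place
        self.parity_log = []               # deltas for the remaining parities

    def update(self, i, new_value):
        old = self.data[i]
        self.data[i] = new_value
        # In-place XOR update: fold out the old value, fold in the new one.
        self.xor_parity ^= old ^ new_value
        # Remaining parities: append the delta instead of rewriting them now.
        self.parity_log.append((i, old ^ new_value))

s = HybridUpdateSketch([5, 3, 7])
s.update(1, 9)
print(s.xor_parity == 5 ^ 9 ^ 7)  # True: single-failure repair stays cheap
```

The in-place XOR parity keeps single-failure repair fast, while logging the other parity deltas avoids the memory cost of full in-place parity copies, which is the balance the abstract describes.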

Talk 6


Yu Huang (Ph.D. student)

Topic: Accelerating Graph Convolutional Networks Using Crossbar-based Processing-In-Memory Architectures

Bio: Yu Huang is a CS Ph.D. student at Huazhong University of Science and Technology, supervised by Prof. Xiaofei Liao and Prof. Long Zheng. His research interests focus on computer architecture and systems, especially processing-in-memory and graph processing. He received his B.S. degree from Huazhong University of Science and Technology in 2016.

Abstract: Graph convolutional networks (GCNs) are promising for enabling machine learning on graphs. GCNs exhibit mixed computational kernels, involving regular neural-network-like computing and irregular graph-analytics-like processing. Existing GCN accelerators follow a divide-and-conquer philosophy, architecting two separate types of hardware to accelerate these two types of GCN kernels, respectively. This hybrid architecture improves intra-kernel efficiency but pays little attention to inter-kernel interactions that affect overall efficiency. In this work, we present a new GCN accelerator, ReFlip, with three key innovations in architecture design, algorithm mappings, and practical implementation. First, ReFlip leverages PIM-featured crossbar architectures to build a unified architecture that supports both types of GCN kernels simultaneously. Second, ReFlip adopts novel algorithm mappings that maximize the performance gains of the unified architecture by exploiting the massive crossbar-structured parallelism. Third, ReFlip assembles software/hardware co-optimizations to process real-world graphs efficiently. Results show that ReFlip significantly outperforms state-of-the-art CPU, GPU, and accelerator solutions in both performance and energy efficiency.
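The two kernel types are easy to see in a single GCN layer, sketched here in NumPy (the matrices are random stand-ins): the aggregation A @ X is sparse and graph-dependent (graph-analytics-like), while the transformation by W is a regular dense multiply (neural-network-like).

```python
import numpy as np

A = np.array([[0, 1, 1],
              [1, 0, 0],
              [1, 0, 0]], dtype=float)   # adjacency: irregular, graph-shaped kernel
X = np.random.rand(3, 4)                 # node features
W = np.random.rand(4, 2)                 # layer weights: regular dense kernel

H = np.maximum(A @ X @ W, 0)             # aggregate, transform, ReLU
print(H.shape)  # (3, 2): one output embedding per node
```

Prior accelerators map these two multiplies onto two separate hardware engines; the talk's point is that a crossbar-based PIM substrate can serve both.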

Talk 7


Yang Zhong (Ph.D. student)

Topic: Efficient Algorithms for Set/String Similarity Search

Bio: Yang Zhong is a Ph.D. student in the School of Computer Science and Technology, Huazhong University of Science and Technology. His research interests are data management and data science, with a focus on developing novel algorithms for set/string similarity search. He has developed adaptive algorithms for top-k set similarity joins and efficient algorithms for string similarity search under edit distance. He received his Master's degree from Nanyang Technological University.

Abstract: Set/string similarity search is one of the essential operations in data processing, with a broad range of applications including data cleaning, near-duplicate object detection, and data integration. The top-k set similarity join is a variant of the threshold-based set similarity join that avoids the problem of setting an appropriate threshold beforehand. The state-of-the-art solution for top-k set similarity joins disregards the effect of the so-called step size, the number of elements accessed in each iteration of the algorithm. We propose an adaptive algorithm that automatically adjusts the step size, reaping the benefits of large step sizes while reducing redundant computation. We also study the threshold string similarity search problem under edit distance. Previous proposals suffer from huge space consumption while achieving only acceptable efficiency, especially for long strings. To eliminate this issue, we propose a simple and small index called minIL: we first adopt a minhash family to capture pivot characters and construct sketch representations for strings, and then develop a succinct multi-level inverted index to search the sketches with low space cost and high efficiency.
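For reference, here is what a brute-force top-k set similarity search looks like in Python (a naive baseline, not the talk's adaptive algorithm, which instead walks inverted lists incrementally with a tuned step size):

```python
def jaccard(a, b):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    a, b = set(a), set(b)
    return len(a & b) / len(a | b)

def topk_similar(query, collection, k):
    """Naive top-k set similarity search: score every candidate, keep the
    best k. The adaptive-step-size approach avoids scoring everything."""
    return sorted(collection, key=lambda s: jaccard(query, s), reverse=True)[:k]

sets = [{1, 2, 3}, {2, 3, 4}, {7, 8}, {1, 2, 3, 4}]
print(topk_similar({1, 2, 3}, sets, 2))  # [{1, 2, 3}, {1, 2, 3, 4}]
```

The gap between this exhaustive baseline and inverted-index methods is exactly where the step-size trade-off described above appears: larger steps amortize list accesses, but risk redundant scoring.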


Editor: Huang Qi

蔻享学术 (Koushare) is a leading one-stop scientific resource sharing platform in China. Drawing on the research strength of first-class research institutes, universities, and enterprises at home and abroad, it focuses on frontier science and aims to improve the research and innovation environment, disseminate and serve science, and promote interdisciplinary integration, building a shared platform of high-quality academic resources.



Copyright notice: Reproduction or excerpting by any media without authorization is strictly prohibited, as is reposting on any platform other than WeChat.

Original articles are first published on 蔻享学术 and represent only the authors' views, not the position of the platform.

To request permission to repost, please leave a message in the backend of the "蔻享学术" official WeChat account.

