
Weekly Activity Preview (March 6 - March 12)

Contents:

  1. Quantum error correction: Foundation and landscape (王东升)


  2. On convergence analysis of an IEQ-based numerical scheme for hydrodynamical Q-tensor model (岳钰堃)


  3. Stabilization and accuracy enhancement using artificial neural networks for reduced order models in flow problems (Ramon Codina) 

  4. Stable commutator length and linear programming (陈绿洲)


  5. Learnable Sparsity and Weak Supervision for Data-Efficient, Transparent, and Compact Neural Models (Gonçalo Correia) 

  6. Greedy Algorithm and Projection Pursuit Regression (夏应存)


  7. Workshop on Modeling & Simulations for Complex System 

  8. Second-Round Announcement of the 20th Symposium on Numerical Methods for Fluid Mechanics (WeChat official account: 中国数学会计算数学分会)


  9. Announcement of the 13th Annual Conference on Computational Mathematics of the Chinese Mathematical Society (WeChat official account: 中国数学会计算数学分会)


  10. [CSRC Seminar] Prof. Hua Nie (2023-03-06) (WeChat official account: 北京计算科学研究中心)


  11. [CSRC Seminar] Prof. Shipeng Mao (2023-03-06) (WeChat official account: 北京计算科学研究中心)


  12. Short Course on Discrete Mathematics (WeChat official account: 东南天元)




1. Quantum error correction: Foundation and landscape


  • Speaker: 王东升 (Institute of Theoretical Physics, Chinese Academy of Sciences)

  • Time: 2023-03-06, 10:00-11:00

  • Link: Tencent Meeting ID: 872 970 146

  • Info Source:

  • http://www.amss.cas.cn/mzxsbg/202302/t20230224_6682836.html

  • Abstract:

Quantum error correction theory lies at the heart of universal quantum computing. So far, good-enough error-correcting codes have not been found or realized. In this talk, I will survey the foundations of error correction, stabilizer codes, recent developments, and their relationship with universal computing models.
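As minimal background (an illustration, not material from the talk): the three-qubit bit-flip code is the simplest stabilizer code, with stabilizer generators Z1Z2 and Z2Z3; measuring these parities yields a syndrome that locates any single X error without disturbing the encoded state. A short Python sketch of the classical syndrome lookup:

    # Syndrome decoding for the three-qubit bit-flip code (illustrative).
    # The stabilizers Z1Z2 and Z2Z3 detect X (bit-flip) errors; measuring
    # them yields a classical parity syndrome that pinpoints the error.
    import itertools

    def syndrome(err):
        # err[i] = 1 if qubit i suffered an X error; the two parities are
        # the (+1 -> 0, -1 -> 1) measurement outcomes of Z1Z2 and Z2Z3.
        return (err[0] ^ err[1], err[1] ^ err[2])

    # Every error of weight <= 1 has a distinct syndrome, so it is correctable.
    LOOKUP = {(0, 0): None, (1, 0): 0, (1, 1): 1, (0, 1): 2}

    for err in itertools.product((0, 1), repeat=3):
        if sum(err) <= 1:
            print(err, "syndrome:", syndrome(err), "-> flip qubit", LOOKUP[syndrome(err)])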





2. On convergence analysis of an IEQ-based numerical scheme for hydrodynamical Q-tensor model

  • Speaker: 岳钰堃 (Carnegie Mellon University)

  • Time: 2023-03-07 10:00-11:00

  • Venue: Tencent Meeting ID: 987-554-773  Meeting Password: 230307

  • Info Source:

  • https://ins.sjtu.edu.cn/seminars/2264

  • Abstract:

This talk focuses on a numerical approach based on the Invariant Energy Quadratization (IEQ) method for finding solutions of a hydrodynamical system. We start with a toy model, the parabolic-type Q-tensor equations, design numerical schemes that preserve the energy dissipation law at the discrete level, and analyze their properties. We then present a convergence analysis of an unconditionally energy-stable, first-order semi-discrete numerical method for the hydrodynamic Q-tensor model. This model couples a Navier-Stokes system for the fluid flow with a parabolic-type Q-tensor system governing the nematic liquid crystal director field. We prove the stability properties of the scheme and show convergence to weak solutions of the coupled fluid-crystal system. We will also give a brief overview of recent results on the existence and regularity of the Beris-Edwards system and other related models.
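For readers unfamiliar with the method, here is the IEQ idea in its simplest setting, an Allen-Cahn-type gradient flow rather than the talk's full hydrodynamic Q-tensor system (generic notation, not the speaker's scheme). An auxiliary variable makes the nonlinear energy quadratic, so a linear scheme can satisfy a discrete energy law unconditionally:

    % Free energy E(\phi) = \int \tfrac12 |\nabla\phi|^2 + F(\phi)\,dx.
    % Introduce q = \sqrt{F(\phi) + C} (C > 0 keeps the radicand positive):
    \[
      E(\phi, q) = \int \tfrac12 |\nabla\phi|^2 + q^2 \,dx - C|\Omega|,
      \qquad
      q_t = H(\phi)\,\phi_t, \quad H(\phi) := \frac{F'(\phi)}{2\sqrt{F(\phi) + C}}.
    \]
    % First-order scheme: linear terms implicit, the coefficient H explicit,
    % so each time step is a single linear solve:
    \[
      \frac{\phi^{n+1} - \phi^n}{\Delta t} = \Delta\phi^{n+1} - 2\, q^{n+1} H(\phi^n),
      \qquad
      \frac{q^{n+1} - q^n}{\Delta t} = H(\phi^n)\, \frac{\phi^{n+1} - \phi^n}{\Delta t}.
    \]
    % Testing the first equation with \phi^{n+1} - \phi^n and the second with
    % 2 q^{n+1} yields E(\phi^{n+1}, q^{n+1}) \le E(\phi^n, q^n) for any \Delta t.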





3. Stabilization and accuracy enhancement using artificial neural networks for reduced order models in flow problems


  • Speaker: Ramon Codina (Universitat Politècnica de Catalunya)

  • Time: 

    2023-03-07, 10:00 EST (UTC-05:00, US/Eastern)

  • Registration and Source Link: 


    https://na-g-roms.github.io/seminars/Ramon_Codina_2023.html

  • Abstract:

Reduced Order Models (ROM) in computational mechanics aim at solving problems approximating the solution in spaces of very low dimension. The idea is to first solve the Full Order Model (FOM) in a high-fidelity space and, from its solution, construct the basis of the ROM space. We shall concentrate on the case in which the FOM is solved by means of a Finite Element (FE) method and the ROM is obtained from a Proper Orthogonal Decomposition (POD) of a series of ’snapshots’, i.e., high-fidelity solutions obtained for example at different time instants or for different values of a parameter of the problem to be solved. This way, the ROM solution can be considered to belong to a subspace of the FOM FE space.

The Variational Multi-Scale (VMS) idea is to split the unknown into a resolvable component, in our case living in the FE space, and a remainder, called the sub-grid scale (SGS). After setting up a problem for the SGS, this problem is approximated in some way, so that the SGS can be expressed in terms of the FE solution. When the resulting expression is inserted into the equation projected onto the FE space, one ends up with a problem for the FE unknown with enhanced stability properties, as sketched below. The first purpose of this talk is to explain why the VMS strategy can be applied quite naturally to the ROM approximation when the latter is based on a FE method for flow problems. This yields a stable ROM problem.
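Schematically, under generic notation (a standard VMS summary, not necessarily the speaker's exact formulation): writing the unknown as u = u_h + u' with u_h in the resolved (FE or ROM) space, the sub-grid scale is modeled from the residual of the resolved scale,

    \[
      u' \approx \tau_K \, R(u_h), \qquad R(u_h) := f - \mathcal{L} u_h,
    \]
    % \mathcal{L} is the differential operator, f the forcing, and \tau_K an
    % algebraic, element-wise stabilization parameter. Substituting u' into
    % the equation projected onto the resolved space adds residual-based
    % terms that supply the missing stability.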

The second objective of the talk is to explain how accuracy can be improved using Artificial Neural Networks (ANN). Motivated by the structure of the stabilization terms arising from VMS, an additional correcting term is added to enhance accuracy. This term is an ANN trained with the snapshots, i.e., the high-fidelity solutions used to construct the basis of the ROM. Some numerical examples are presented showing the improvement obtained with the correcting terms.
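As a concrete illustration of the POD step (generic scaffolding on synthetic data, not the speaker's implementation; the ANN correction is only indicated in comments):

    # POD basis from a snapshot matrix via SVD (illustrative sketch).
    import numpy as np

    rng = np.random.default_rng(0)
    n_dof, n_snap, r = 2000, 50, 8            # FOM size, snapshot count, ROM dim
    # Synthetic snapshots with low-rank structure plus a small noise tail:
    S = rng.standard_normal((n_dof, 5)) @ rng.standard_normal((5, n_snap))
    S += 0.01 * rng.standard_normal(S.shape)

    U, sigma, _ = np.linalg.svd(S, full_matrices=False)
    Phi = U[:, :r]                             # POD basis, shape (n_dof, r)
    print("captured energy:", (sigma[:r]**2).sum() / (sigma**2).sum())

    # A ROM state is u ~ Phi @ a; the correction discussed in the talk would
    # add a learned closure term c(a), trained on the same snapshots:
    # u ~ Phi @ a + c(a).
    a = Phi.T @ S[:, 0]                        # project the first snapshot
    print("projection error:", np.linalg.norm(S[:, 0] - Phi @ a))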




4. Stable commutator length and linear programming


  • Speaker: 陈绿洲 (Purdue University)

  • Time: 2023-03-09, 09:00-11:00

  • Link: Tencent Meeting ID: 471 327 730

  • Info Source:

  • http://www.amss.cas.cn/mzxsbg/202302/t20230228_6686610.html

  • Abstract:

Several topological optimization problems involving surfaces can be turned into linear programming problems. In the case of stable commutator length, this was first discovered by Danny Calegari in the case of free groups. I later generalized this to a much broader class of graphs of groups. In this talk, I will explain the idea of turning such topological optimization problems into linear programming problems, and how linear programming duality can be used to obtain sharp lower bounds of stable commutator length (or similar invariants) as a replacement of Bavard's duality.
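For context, the standard definitions behind the talk (background, not new material): for g in the commutator subgroup of G, the stable commutator length and Bavard's duality read

    \[
      \operatorname{scl}(g) = \lim_{n \to \infty} \frac{\operatorname{cl}(g^n)}{n},
      \qquad
      \operatorname{scl}(g) = \sup_{\varphi} \frac{|\varphi(g)|}{2\, D(\varphi)},
    \]
    % cl is the commutator length; the supremum runs over homogeneous
    % quasimorphisms \varphi on G with defect
    % D(\varphi) = \sup_{a,b} |\varphi(ab) - \varphi(a) - \varphi(b)|.

Linear programming duality plays the role of this supremum on the computational side: a feasible dual solution certifies a lower bound on scl.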






5. Learnable Sparsity and Weak Supervision for Data-Efficient, Transparent, and Compact Neural Models


  • Speaker: Gonçalo Correia (IST and Priberam Labs)

  • Time: 2023-03-09, 1 (Europe/Lisbon)

  • Registration and Source Link: 


    https://mpml.tecnico.ulisboa.pt/seminars?id=6946


  • Abstract:

Neural network models have become ubiquitous in the Machine Learning literature. These models are compositions of differentiable building blocks that result in dense representations of the underlying data. To obtain good representations, conventional neural models require many training data points, and those representations, albeit capable of high performance on many tasks, are largely uninterpretable. These models are often overparameterized and produce representations that do not compactly represent the data. To address these issues, we find solutions in sparsity and various forms of weak supervision.

For data efficiency, we leverage transfer learning as a form of weak supervision. The proposed model performs comparably to models trained on millions of data points on a sequence-to-sequence generation task, even though we train it on only a few thousand.

For transparency, we propose a probability normalizing function that can learn its sparsity. The model learns the sparsity it needs differentiably, adapting it to the data according to the neural component's role in the overall structure. We show that the proposed model improves the interpretability of a popular neural machine translation architecture when compared to conventional probability normalizing functions.

Finally, for compactness, we uncover a way to obtain exact gradients of discrete and structured latent variable models efficiently. The discrete nodes in these models can compactly represent implicit clusters and structures in the data, but training them was often complex and prone to failure, since it required approximations that rely on sampling or relaxations. We propose to train these models with exact gradients by parameterizing discrete distributions with sparse functions, both unstructured and structured. We obtain good performance on three latent variable model applications while still achieving the practicality of the approximations mentioned above.

Through these contributions, we challenge the conventional wisdom that neural models cannot exhibit data efficiency, transparency, or compactness.
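Background on the kind of "probability normalizing function" at issue (my illustration, not the speaker's code): a well-known sparse alternative to softmax is sparsemax, the fixed-sparsity special case (alpha = 2) of the learnable entmax family. It projects scores onto the probability simplex and, unlike softmax, can output exact zeros. A minimal NumPy sketch:

    import numpy as np

    def sparsemax(z):
        # Euclidean projection of a score vector z onto the probability
        # simplex (Martins & Astudillo, 2016). The output can contain
        # exact zeros, unlike softmax.
        z = np.asarray(z, dtype=float)
        z_sorted = np.sort(z)[::-1]
        k = np.arange(1, len(z) + 1)
        cssv = np.cumsum(z_sorted)
        support = 1 + k * z_sorted > cssv      # candidate support set
        k_z = k[support][-1]                   # support size
        tau = (cssv[support][-1] - 1) / k_z    # thresholding constant
        return np.maximum(z - tau, 0.0)

    p = sparsemax([0.1, 1.2, 0.2, 1.1])
    print(p)          # [0.   0.55 0.   0.45] -- exact zeros on low scores
    print(p.sum())    # 1.0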




6. Greedy Algorithm and Projection Pursuit Regression

  • Speaker: 夏应存

  • Info Source:

  • http://www.math.zju.edu.cn/2023/0302/c74521a2722959/page.htm


7. Workshop on Modeling & Simulations for Complex System


  • Venue: Beijing Computational Science Research Center (CSRC)

  • Dates: 2023-03-25 to 2023-03-26

  • Registration Link: http://csrc-supercomputer.mikecrm.com/EJpVgVM

  • On-site Registration: 2023-03-24, 14:00-17:00

  • Info Source:

  • http://www.csrc.ac.cn/en/event/workshop/2023-03-01/114.html

  • Notes:


No registration fee will be charged for this workshop.

Accommodation:

Participants should arrange their own accommodation; the recommended hotel is the 和颐酒店 (Zhongguancun Software Park location, about a 15-minute walk from the center).

Workshop contact:

范颖, email: fanying@csrc.ac.cn, tel: 010-56981715.

Young scholars and senior graduate students are welcome to attend; lunch is provided during the workshop. There will also be a poster session, and contributions from young scholars and senior graduate students are welcome.








We only relay information;

if an event changes at the last minute,

please treat the host website as authoritative.


Contact us: sharecomptmath@foxmail.com
We look forward to having you share information with us.

