AIRS in the AIR | Machine Learning and Optimization Methods (Part 4)

In October, AIRS in the AIR invites leading scholars from China and abroad to give lectures on machine learning and optimization methods and their applications. Next Tuesday brings the closing lecture of the series; you are welcome to watch the live stream and interact with the speakers in real time.
The first speaker, Yongqiang Wang, is an Associate Professor of Electrical and Computer Engineering at Clemson University. He serves as an associate editor for journals including IEEE Transactions on Automatic Control and IEEE Transactions on Control of Network Systems, and received the Young Author Prize at the 17th IFAC World Congress.
The second speaker, Mert Gurbuzbalaban, is an Associate Professor in the Department of Management Science and Information Systems at Rutgers University. He has published many papers in journals and conferences such as SIAM J. Optim. and NeurIPS.
Register via this link: http://hdxu.cn/zJ2iQ, or join through ZOOM (https://us02web.zoom.us/meeting/register/tZIoceiuqzgjHdLy-QixX_KJbVxI3sKbuKK-) or Bilibili (http://live.bilibili.com/22587709).
Breathe fresh air and explore cutting-edge technology! AIRS proudly presents the series AIRS in the AIR. Join us online every Tuesday to explore frontier technologies, industrial applications, and development trends in artificial intelligence and robotics.
-
Executive Chair: Hongyuan Zha, X.Q. Deng Presidential Chair Professor at The Chinese University of Hong Kong, Shenzhen; Executive Dean of the School of Data Science; Director of the AIRS Center for Machine Learning and Applications
-
Host: Shi Pu, Assistant Professor, School of Data Science, The Chinese University of Hong Kong, Shenzhen; Associate Researcher, AIRS Center for Machine Learning and Applications
-
Host: Xiao Li, Assistant Professor, School of Data Science, The Chinese University of Hong Kong, Shenzhen; Associate Researcher, AIRS Center for Machine Learning and Applications
-
Yongqiang Wang, Associate Professor of Electrical and Computer Engineering, Clemson University
Talk title: Inherent Privacy for Distributed Optimization and Learning
Yongqiang Wang received dual B.S. degrees in electrical engineering & automation and computer science & technology from Xi'an Jiaotong University, Xi'an, Shaanxi, China, in 2004, and the Ph.D. degree in control science and engineering from Tsinghua University, Beijing, China, in 2009. From 2007 to 2008, he was a visiting student at the University of Duisburg-Essen, Germany. He was a project scientist at the University of California, Santa Barbara, before joining Clemson University, SC, USA, where he is currently an Associate Professor. His current research interests include distributed control, optimization, and learning, with an emphasis on privacy protection. He currently serves as an associate editor for IEEE Transactions on Automatic Control, IEEE Transactions on Control of Network Systems, and IEEE Transactions on Signal and Information Processing over Networks.
Distributed (stochastic) optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing. Since the data involved usually contain sensitive information such as users' locations, healthcare records, or financial transactions, privacy protection has become an increasingly pressing need in the implementation of distributed optimization and learning algorithms. However, existing privacy solutions usually incur heavy communication/computation overhead or sacrifice optimization/learning accuracy. We propose to judiciously embed stochasticity at the algorithmic level to enable privacy without incurring heavy communication/computation overhead or accuracy loss. Besides rigorous theoretical analysis, simulation results as well as numerical experiments on a benchmark machine learning dataset confirm the effectiveness of the proposed approach.
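To make the idea of algorithm-level stochasticity concrete, here is a minimal, hypothetical sketch of decentralized gradient descent in which each agent shares a randomly perturbed iterate with its neighbors, so no neighbor ever observes an exact state. The problem data, mixing matrix, noise schedule, and step sizes are all illustrative assumptions; this is not the construction presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: 3 agents, each holding a private least-squares objective
# f_i(x) = 0.5 * ||A_i x - b_i||^2; the network minimizes the sum of the f_i.
n_agents, dim = 3, 5
A = [rng.standard_normal((10, dim)) for _ in range(n_agents)]
x_true = rng.standard_normal(dim)
b = [Ai @ x_true for Ai in A]

# Doubly stochastic mixing matrix for a fully connected 3-agent network.
W = np.full((n_agents, n_agents), 0.25) + 0.25 * np.eye(n_agents)

x = [np.zeros(dim) for _ in range(n_agents)]
for k in range(500):
    alpha = 1e-2 / (k + 1) ** 0.6   # diminishing step size
    sigma = 1.0 / (k + 1) ** 0.8    # decaying perturbation level
    # Each agent broadcasts a randomly perturbed iterate, so neighbors never
    # see its exact state; the decaying noise is the (purely illustrative)
    # stochasticity embedded at the algorithmic level.
    shared = [xi + sigma * rng.standard_normal(dim) for xi in x]
    x = [sum(W[i, j] * shared[j] for j in range(n_agents))
         - alpha * A[i].T @ (A[i] @ x[i] - b[i])
         for i in range(n_agents)]

print("distance to optimum per agent:",
      [round(float(np.linalg.norm(xi - x_true)), 3) for xi in x])
```

Because the perturbation variance decays while the step size diminishes, the injected noise averages out and the agents still reach consensus near the optimum in this toy setup.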
-
Mert Gurbuzbalaban, Associate Professor, Department of Management Science and Information Systems, Rutgers University
Talk title: Robust and Risk-Averse Accelerated Gradient Methods
Mert Gurbuzbalaban is an associate professor at Rutgers University. Previously, he was an assistant professor at Rutgers University and a postdoctoral associate at the Laboratory for Information and Decision Systems (LIDS) at MIT. He is broadly interested in optimization and computational science driven by applications in network science and data science. He received his B.Sc. degrees in Electrical Engineering and Mathematics as valedictorian from Bogazici University, Istanbul, Turkey, the Diplôme d'ingénieur degree from École Polytechnique, France, and the M.S. and Ph.D. degrees in Mathematics from the Courant Institute of Mathematical Sciences, New York University.
Dr. Gurbuzbalaban is a recipient of the Dean's Research Award, Dean's Young Research Award, and Dean's Summer Fellowship Award at the Rutgers Business School, and a co-recipient of the Honorable Mention for the Best Paper Award at the International Conference on Machine Learning (ICML 2019). He also received the Kurt Friedrichs Prize (given by the Courant Institute of New York University for an outstanding thesis) in 2013, the Bronze Medal in the École Polytechnique Scientific Project Competition in 2006, the Nadir Orhan Bengisu Award (given by the electrical-electronics engineering department of Bogazici University to the best graduating undergraduate student) in 2005, and the Bulent Kerim Altay Award from the Electrical-Electronics Engineering Department of Middle East Technical University in 2001. He served as a special issue editor for the journal Probability in the Engineering and Informational Sciences, as a member of the INFORMS Nicholson Prize Committee in 2021, and as a track chair in Operations Research for the Institute of Industrial and Systems Engineers (IISE) Conference in 2019.
In the context of first-order algorithms subject to random gradient noise, we study the trade-offs between the convergence rate (which quantifies how fast the initial conditions are forgotten) and the "risk" of suboptimality, that is, deviations from the expected suboptimality. We focus on a general class of momentum methods (GMM), which recovers popular methods such as gradient descent (GD), accelerated gradient descent (AGD), and the heavy-ball (HB) method as special cases, depending on the choice of GMM parameters. We use the well-known risk measures "entropic risk" and "entropic value at risk" to quantify the risk of suboptimality. For strongly convex smooth minimization, we first obtain new convergence rate results for GMM with a unified theory that is applicable to both AGD and HB, improving some of the existing results for HB. We then provide explicit bounds on the entropic risk and entropic value at risk of suboptimality at a given iterate, which, via Chernoff's inequality, also yield direct bounds on the probability that the suboptimality exceeds a given threshold. Our results unveil fundamental trade-offs between the convergence rate and the risk of suboptimality. We then plug the entropic risk and convergence rate estimates we obtained into a computationally tractable optimization framework and propose the entropic risk-averse GMM (RA-GMM) and entropic risk-averse AGD (RA-AGD) methods, which can select the GMM parameters to systematically trade off the entropic value at risk against the convergence rate. We show that RA-AGD and RA-GMM lead to improved performance on quadratic optimization and logistic regression problems compared to the standard choice of parameters. To our knowledge, our work is the first to resort to coherent risk measures to design the parameters of momentum methods in a systematic manner.
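As background for the abstract, one common parametrization of GMM (used here as an illustrative assumption, not necessarily the exact form from the talk) is y_k = x_k + gamma*(x_k - x_{k-1}), x_{k+1} = x_k + beta*(x_k - x_{k-1}) - alpha*grad_f(y_k), so that GD corresponds to beta = gamma = 0, HB to gamma = 0, and AGD to beta = gamma. The sketch below runs these special cases on a noisy strongly convex quadratic; the test problem, noise model, and step-size choices are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Strongly convex quadratic test problem f(x) = 0.5 * x^T Q x, minimized at 0,
# with eigenvalues of Q spanning [mu, L] = [1, 100].
dim = 20
Q = np.diag(np.linspace(1.0, 100.0, dim))
mu, L = 1.0, 100.0

def noisy_grad(x, noise=0.1):
    """Gradient of f at x plus additive Gaussian noise."""
    return Q @ x + noise * rng.standard_normal(dim)

def gmm(alpha, beta, gamma, iters=300):
    """One common GMM parametrization:
       y_k     = x_k + gamma * (x_k - x_{k-1})
       x_{k+1} = x_k + beta * (x_k - x_{k-1}) - alpha * grad_f(y_k)"""
    x_prev = x = np.ones(dim)
    for _ in range(iters):
        y = x + gamma * (x - x_prev)
        x_prev, x = x, x + beta * (x - x_prev) - alpha * noisy_grad(y)
    return 0.5 * x @ Q @ x  # suboptimality f(x_k) - f(x*), since f(x*) = 0

kappa = L / mu
m = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)  # classical momentum value
print("GD :", gmm(1 / L, 0.0, 0.0))                                  # beta = gamma = 0
print("HB :", gmm(4 / (np.sqrt(L) + np.sqrt(mu)) ** 2, m ** 2, 0.0))  # gamma = 0
print("AGD:", gmm(1 / L, m, m))                                       # beta = gamma
```

Running the sketch makes the trade-off in the abstract tangible: the more aggressive momentum settings forget the initial condition faster but amplify the gradient noise, which is exactly what the entropic risk measures are used to control.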
| Time | Session | Speaker and Topic |
|---|---|---|
| 8:00-9:00 | Keynote | Yongqiang Wang, Clemson University: Inherent Privacy for Distributed Optimization and Learning |
| 9:00-10:00 | Keynote | Mert Gurbuzbalaban, Rutgers University: Robust and Risk-Averse Accelerated Gradient Methods |
Video Replay