Lecture Information
Date

December 4–6, 2020
Venue
CUHK-Shenzhen, ZOOM meeting
From December 4 to 6 (this Friday through Sunday), the Workshop on Optimization, Probability and Simulation (WOPS20) will be held on the campus of The Chinese University of Hong Kong, Shenzhen, with a simultaneous online ZOOM meeting. WOPS20 is an international workshop hosted by the Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS), aiming to provide a platform for leading scholars and young researchers from home and abroad to exchange ideas, explore collaborations, and discuss their latest research results.
WOPS20 has invited ten professors from renowned universities in China and abroad as well as two PhD students to give talks. Prof. Andre Milzarek, AIRS project leader and faculty member of the School of Data Science at The Chinese University of Hong Kong, Shenzhen, will chair the workshop.
Andre Milzarek
AIRS Project Leader; Assistant Professor, School of Data Science, The Chinese University of Hong Kong, Shenzhen
Opening Remarks
Prof. Andre Milzarek is an AIRS project leader and an assistant professor at the School of Data Science, The Chinese University of Hong Kong, Shenzhen. He previously served as a postdoctoral researcher at the Beijing International Center for Mathematical Research, Peking University. He received his master's degree in 2013 and his doctoral degree in 2016, both with distinction, from the Technical University of Munich under the supervision of Prof. Michael Ulbrich.
Prof. Milzarek's main research interests include nonsmooth optimization, large-scale stochastic optimization, and second-order methods and theory. He has been invited to speak at international conferences such as those of the Society for Industrial and Applied Mathematics (SIAM) and the International Symposium on Mathematical Programming (ISMP), and he has published papers in a number of journals. He was supported by the Max Weber Program of the State of Bavaria from 2010 to 2012 and received the Boya Postdoctoral Fellowship of Peking University in 2017.
Prof. Andre Milzarek will deliver the welcome and opening speech of WOPS20.
Daniel Rudolf
Junior Professor (W1) for Mathematical Statistics, Georg-August-Universität Göttingen
Spectral Gap of Slice Sampling
Advanced academic qualifications
2011 Dr. rer. nat. in Mathematics, Friedrich-Schiller-Universität Jena; Advisor: E. Novak
Postgraduate professional career
since 2016 Junior Professor (W1) for Mathematical Statistics, Georg-August-Universität Göttingen
2016 Research Associate, Friedrich-Schiller-Universität Jena
2015–2016 Substitute Professor for Mathematical Economics, Technische Universität Chemnitz
2013–2015 Research Associate, Friedrich-Schiller-Universität Jena
2012–2013 Postdoctoral Research Fellow, The University of New South Wales, Sydney
2011–2012 Postdoctoral Researcher, Friedrich-Schiller-Universität Jena
We provide results on Wasserstein contraction of simple slice sampling for approximate sampling with respect to distributions with log-concave and rotationally invariant Lebesgue densities. This yields, in particular, an explicit quantitative lower bound on the spectral gap of simple slice sampling. Moreover, this lower bound carries over to more general target distributions, depending only on the volume of the (super-)level sets of their unnormalized density. This allows us to deduce convergence results for hybrid slice sampling approaches.
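To make the object of study concrete, here is a minimal, hypothetical Python sketch of simple slice sampling (our own illustration, not code from the talk) for a log-concave, rotationally invariant target: the one-dimensional standard normal, whose level sets can be written down as explicit intervals.

```python
import math
import random

def slice_sample_std_normal(n_samples, x0=0.0, seed=0):
    """Simple slice sampler targeting the standard normal through its
    unnormalized density p(x) = exp(-x**2 / 2).  Log-concavity makes
    every (super-)level set {y : p(y) > t} an explicit interval."""
    rng = random.Random(seed)
    x, out = x0, []
    for _ in range(n_samples):
        # Auxiliary height t = u * p(x) with u ~ Uniform(0, 1]; the
        # level set is (-r, r) with r**2 = -2*log(t) = x**2 - 2*log(u).
        u = 1.0 - rng.random()
        r = math.sqrt(x * x - 2.0 * math.log(u))
        # Next state is uniform on the level set.
        x = rng.uniform(-r, r)
        out.append(x)
    return out
```

For this simple target, the chain mixes quickly, and the empirical mean and variance of a long run are close to 0 and 1.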
Viacheslav Natarovskii
Georg-August-Universität Göttingen
Elliptical Slice Sampler and its Convergence
Ernest Ryu
Assistant Professor in the Department of Mathematical Sciences at Seoul National University
Minimax Optimization in Machine Learning
Ernest Ryu is an assistant professor in the Department of Mathematical Sciences at Seoul National University. He is an affiliated faculty of the Interdisciplinary Program of Computational Science and Technology and the Graduate School of Data Science. His current research focus is on convex optimization and machine learning.
Professor Ryu received a BS degree in Physics and Electrical Engineering with honor from the California Institute of Technology in 2010, and an MS in Statistics and a PhD in Computational and Mathematical Engineering from Stanford University in 2016. In 2016, he joined the Department of Mathematics at the University of California, Los Angeles as an Assistant Adjunct Professor. In 2020, he joined the Department of Mathematical Sciences at Seoul National University as tenure-track faculty.
In this talk, we will discuss generative adversarial networks and convergence guarantees in stochastic "minimax" optimization under the assumption of convex-concavity. Specifically, we establish almost sure convergence and L2 convergence of the last iterates and present optimal accelerated complexity for minimizing gradients in the noiseless setting.
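As a small, self-contained illustration of convex-concave minimax optimization (our own example, not the specific algorithms of the talk), the extragradient method converges in the last iterate on the bilinear game f(x, y) = xy, where plain simultaneous gradient descent-ascent spirals away from the saddle point at the origin:

```python
def extragradient_bilinear(eta=0.2, iters=500, x=1.0, y=1.0):
    """Extragradient on f(x, y) = x * y: min over x, max over y.
    grad_x f = y and grad_y f = x; the saddle point is (0, 0)."""
    for _ in range(iters):
        # Half step: look ahead along the current gradients.
        xh, yh = x - eta * y, y + eta * x
        # Full step from the ORIGINAL point, using look-ahead gradients.
        x, y = x - eta * yh, y + eta * xh
    return x, y
```

On this game, each extragradient step contracts the squared distance to the origin by the factor (1 - eta**2)**2 + eta**2 < 1, so the last iterate converges linearly.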
Björn Sprungk
Technische Universität Bergakademie Freiberg
Noise-Level Robust Monte Carlo Methods for Bayesian Inference with Informative Data
The Bayesian approach to inverse problems provides a rigorous framework for the incorporation and quantification of uncertainties in measurements, parameters, and models. However, sampling from, or integrating with respect to, the resulting posterior measure can become computationally challenging. In recent years, a lot of effort has been spent on deriving dimension-independent methods and on combining efficient sampling strategies with multilevel or surrogate methods in order to reduce the computational burden of Bayesian inverse problems.
In this talk, we are interested in designing numerical methods which are robust w.r.t. the size of the observational noise, i.e., methods which behave well in the case of concentrated posterior measures. Concentration of the posterior is a highly desirable situation in practice, since it corresponds to informative or large data. However, it can also pose a significant computational challenge for numerical methods based on the prior or a reference measure. We propose to employ the Laplace approximation of the posterior as the base measure for numerical integration in this context. The Laplace approximation is a Gaussian measure centered at the maximum a posteriori (MAP) estimate, with covariance matrix depending on the Hessian of the log posterior density at the MAP estimate. We discuss convergence results for the Laplace approximation in terms of the Hellinger distance and analyze the efficiency of Monte Carlo methods based on it. In particular, we show that Laplace-based Metropolis-Hastings algorithms as well as Laplace-based importance sampling and quasi-Monte Carlo are robust w.r.t. the concentration of the posterior for large classes of posterior distributions and integrands, whereas prior-based Monte Carlo sampling methods are not.
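As a toy illustration of Laplace-based importance sampling (our own construction, not the speaker's code): for the hypothetical 1-D unnormalized log-posterior l(x) = -x**2/2 - 0.1*x**4, the MAP estimate is 0 and the Hessian of -l there is 1, so the Laplace approximation is N(0, 1), and the self-normalized importance weights reduce to exp(-0.1*x**4):

```python
import math
import random

def laplace_importance_sampling(n=50000, seed=0):
    """Self-normalized importance sampling with the Laplace
    approximation N(0, 1) as proposal for the toy target with
    unnormalized log-density l(x) = -x**2/2 - 0.1*x**4."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, 1.0) for _ in range(n)]
    # Weight = target / proposal; the Gaussian factors cancel exactly.
    ws = [math.exp(-0.1 * x ** 4) for x in xs]
    total = sum(ws)
    mean = sum(w * x for w, x in zip(ws, xs)) / total
    second = sum(w * x * x for w, x in zip(ws, xs)) / total
    return mean, second
```

The quartic term thins the tails relative to the Gaussian proposal, so the posterior mean stays near 0 while the posterior second moment drops noticeably below 1.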
Yingjie (Tom) Fei
Postdoctoral Researcher at Northwestern University
Exponential Rates in Community Detection and Reinforcement Learning
Tom is a postdoctoral researcher at Northwestern University hosted by Jorge Nocedal. Previously, he received his PhD from Cornell University. His research interests lie at the intersection of machine learning, optimization, and statistics. His work was awarded 2nd place overall (and 1st place in the US) in the 2018 INFORMS George Nicholson Student Paper Competition.
In the first part of the talk, we will introduce the community detection problem under the stochastic block model. After discussing several popular algorithms, we will present an algorithm based on semidefinite programming (SDP). We show that, despite being a relaxation, this algorithm achieves the optimal Bayes error rate in terms of distance to the target solution, which decays exponentially in the signal-to-noise ratio. Moreover, this algorithm is robust under the so-called semi-random model that frustrates many existing algorithms. Our proof involves a novel primal-dual analysis of SDP, which reveals that SDP tightly approximates the optimal (yet unimplementable) joint majority voting procedure.
In the second part of the talk, we will discuss the problem of risk-sensitive reinforcement learning under the episodic Markov decision process, in which an agent attempts to learn the optimal policy with respect to the entropic risk measure. We show that this problem admits a regret lower bound that scales exponentially in the risk sensitivity of the agent and the horizon length. We then propose an algorithm based on Q-learning and provide a regret upper bound for the algorithm that nearly matches the lower bound. Furthermore, the exponential regret is shown to represent a fundamental trade-off between the risk sensitivity of the agent and the sample complexity of the algorithm.
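For reference, the entropic risk measure mentioned above is rho_beta(X) = (1/beta) * log E[exp(beta * X)]. A tiny helper (our own illustration) makes its risk sensitivity visible on a fair coin paying 0 or 1, where beta -> 0 recovers the plain expectation 0.5:

```python
import math

def entropic_risk(values, probs, beta):
    """Entropic risk rho_beta(X) = (1/beta) * log E[exp(beta * X)] for a
    discrete random variable.  For rewards, beta > 0 is risk-seeking and
    beta < 0 is risk-averse; beta -> 0 recovers the expectation."""
    mgf = sum(p * math.exp(beta * v) for v, p in zip(values, probs))
    return math.log(mgf) / beta
```

The exponential inside the expectation is what drives the exponential dependence of the regret bounds on the risk sensitivity and the horizon.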
Huajie Qian
Alibaba DAMO Academy, Seattle
Learning Prediction Intervals for Regression: Generalization and Calibration
Huajie Qian recently obtained his Ph.D. in Operations Research from Columbia University, advised by Henry Lam. His research borrows tools from statistics and machine learning to develop data-driven methodologies for stochastic simulation and optimization that can deal with uncertainties from data in an efficient and principled way. He is currently working at Alibaba DAMO Academy.
Accurate uncertainty and error quantification is crucial for building reliable machine learning systems and for risk-based decision-making. Conventional methods, however, can suffer from over-conservativeness, inaccurate coverage, and scalability issues. To overcome these, we study a recent framework for learning prediction intervals based on empirical constrained optimization, which minimizes the average interval width while maintaining a correct coverage level. We establish statistical guarantees for this approach in relation to the complexity of our interval model. We also design a provably accurate validation procedure based on high-dimensional central limit theorems. We empirically demonstrate the strengths of our interval generation and calibration algorithms in terms of test performance compared to existing benchmarks.
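A minimal sketch of the width-minimization idea in its simplest possible form (our own illustration, far simpler than the learned interval models of the talk): among all constant intervals covering at least a 1 - alpha fraction of held-out values, take the narrowest.

```python
import math

def shortest_interval(values, coverage=0.9):
    """Width-minimizing constant prediction interval: the narrowest
    window that contains at least `coverage` of the points.  This is
    the constant-interval special case of minimizing average width
    subject to an empirical coverage constraint."""
    xs = sorted(values)
    n = len(xs)
    k = math.ceil(coverage * n)          # points the interval must contain
    best = (xs[k - 1] - xs[0], xs[0], xs[k - 1])
    for i in range(1, n - k + 1):
        width = xs[i + k - 1] - xs[i]
        if width < best[0]:
            best = (width, xs[i], xs[i + k - 1])
    return best[1], best[2]
```

On data with an outlier, the shortest 90% window discards the outlier rather than stretching to cover it, which is exactly the over-conservativeness the width objective avoids.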
Xiaowei Zhang
Assistant Professor in the Faculty of Business and Economics, University of Hong Kong
Sample and Computationally Efficient Simulation Metamodeling in High Dimensions
Xiaowei Zhang is an assistant professor in the Faculty of Business and Economics, University of Hong Kong. He received his PhD in Management Science and Engineering from Stanford University and his BS in Mathematics from Nankai University. His recent research interests include high-dimensional simulation and optimization.
Stochastic kriging has been widely employed in simulation metamodeling to predict the response surface of a complex simulation model. However, its use is limited to cases where the design space is low-dimensional, because, in general, the number of design points required for stochastic kriging to produce accurate predictions grows exponentially in the dimension of the design space. The large sample size results in both a prohibitive sample cost for running the simulation model and a severe computational challenge due to the need to invert large covariance matrices. Based on tensor Markov kernels and sparse grid experimental designs, we develop a novel methodology that dramatically alleviates the curse of dimensionality. We show that the sample complexity of the proposed methodology grows very mildly in the dimension, even under model misspecification. We also develop fast algorithms that compute stochastic kriging in its exact form without any approximation schemes. We demonstrate via extensive numerical experiments that our methodology can handle problems with a design space of hundreds of dimensions, improving both prediction accuracy and computational efficiency by a significant margin relative to typical alternative methods in practice.
Yang Liu
PhD candidate in the School of Mathematics and Physics at the University of Queensland
Stability Analysis of Newton-MR under Hessian Perturbations
Yang Liu is a PhD candidate in the School of Mathematics and Physics at the University of Queensland (UQ), supervised by Fred Roosta. He is a member of the ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS). He was awarded the UQ Research Training Scholarship for his PhD, and he also received the 2020 UQ Tutor Award and the 2020 UQ Student Publication Award.
Recently, the stability of Newton-CG under Hessian perturbations, i.e., inexact curvature information, has been extensively studied. Such stability analysis has subsequently been leveraged in designing variants of Newton-CG in which, to reduce the computational costs involving the Hessian matrix, the curvature is suitably approximated. Here, we do the same for Newton-MR, which extends Newton-CG in the same manner that MINRES extends CG, but which can be applied beyond traditional convex settings, to invex problems. Unlike the stability analysis of Newton-CG, which relies on spectrum-preserving perturbations in the sense of the Löwner partial order, our work draws on matrix perturbation theory to estimate the distance between the underlying exact and perturbed subspaces. Numerical experiments demonstrate a great degree of stability for Newton-MR, amounting to a highly efficient algorithm for large-scale problems.
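To fix ideas, here is a small hypothetical NumPy sketch of a Newton-MR-style iteration (our own illustration, not the speaker's implementation): each step applies the minimum-norm least-squares solution of H d = -g, which is what MINRES computes iteratively at scale; the pseudoinverse stands in for that solve, and we take unit steps where practical codes use a line search.

```python
import numpy as np

def newton_mr_sketch(grad, hess, x0, iters=10):
    """Newton-MR-style iteration: d is the minimum-norm solution of
    min_d ||H d + g||.  np.linalg.pinv is a small-scale stand-in for
    the iterative MINRES solve."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        d = -np.linalg.pinv(hess(x)) @ grad(x)
        x = x + d                      # unit step for simplicity
    return x
```

On a smooth strongly convex toy problem with gradient g(x) = x**3 + x (componentwise) and diagonal Hessian 3*x**2 + 1, the iteration converges rapidly to the minimizer at the origin.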
Anthony Man-Cho So
Professor in the Department of Systems Engineering and Engineering Management, The Chinese University of Hong Kong
Non-Convex Orthogonal Group Synchronization via the Generalized Power Method
Anthony Man-Cho So received his BSE degree in Computer Science from Princeton University with minors in Applied and Computational Mathematics, Engineering and Management Systems, and German Language and Culture. He then received his MSc degree in Computer Science and his PhD degree in Computer Science with a PhD minor in Mathematics from Stanford University. Dr. So joined The Chinese University of Hong Kong (CUHK) in 2007. He is now a Professor in the Department of Systems Engineering and Engineering Management. His research focuses on optimization theory and its applications in various areas of science and engineering, including computational geometry, machine learning, signal processing, and statistics.
Dr. So was appointed an Outstanding Fellow of the Faculty of Engineering at CUHK in 2019. He has received a number of research and teaching awards, including the 2018 IEEE Signal Processing Society Best Paper Award, the 2015 IEEE Signal Processing Society Signal Processing Magazine Best Paper Award, the 2014 IEEE Communications Society Asia-Pacific Outstanding Paper Award, the 2010 INFORMS Optimization Society Optimization Prize for Young Researchers, and the 2013 CUHK Vice-Chancellor's Exemplary Teaching Award.
The group synchronization problem calls for the estimation of a ground-truth vector from noisy relative transforms of its elements, where the elements come from a group and the relative transforms are computed using the binary operation of the group. Such a problem provides an abstraction of a wide range of inverse problems that arise in practice. However, in many instances, one needs to tackle a non-convex optimization formulation. It turns out that for synchronization problems over certain subgroups of the orthogonal group, a simple projected gradient-type algorithm, often referred to as the generalized power method (GPM), is quite effective at finding the ground truth when applied to their non-convex formulations. In this talk, we survey related recent results in the literature and focus in particular on techniques for analyzing the statistical and optimization performance of the GPM.
This talk covers joint works with Huikang Liu, Peng Wang, Man-Chung Yue, and Zirui Zhou.
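For the simplest such subgroup, Z2 = {-1, +1}, the GPM alternates a power-iteration step with an entrywise projection onto the group. The following hypothetical NumPy sketch (our own toy instance, not the talk's general setting) recovers a ±1 ground-truth vector from a noisy rank-one observation matrix:

```python
import numpy as np

def generalized_power_method(C, iters=20, seed=0):
    """GPM for Z2 synchronization: power step x <- C x followed by
    projection onto {-1, +1}^n (entrywise sign)."""
    rng = np.random.default_rng(seed)
    # Random initialization on the group.
    x = np.where(rng.standard_normal(C.shape[0]) >= 0, 1.0, -1.0)
    for _ in range(iters):
        x = np.where(C @ x >= 0, 1.0, -1.0)
    return x
```

On an instance C = z z^T + sigma * W with small symmetric noise, a few iterations typically recover z up to a global sign, which is the best achievable since z and -z generate the same relative transforms.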
Andi Wang
University of Bristol
Quasi-Stationary Monte Carlo Methods via Stochastic Approximation
A new class of Monte Carlo algorithms, quasi-stationary Monte Carlo (QSMC), designed for exact Bayesian inference on large datasets, was recently presented in [Pollock et al. (2020), JRSSB]. In my talk, I will describe this class of methods and the associated probabilistic concept of quasi-stationarity, which concerns the distribution of an extant stochastic population. I will present an alternative approach to QSMC based on stochastic approximation rather than interacting particle systems, which is conceptually simpler and can be significantly easier to implement, while retaining the desirable properties of exactness and scalability.
Joint work with Murray Pollock, Gareth Roberts, and David Steinsaltz.
Wenpin Tang
Assistant Professor of Columbia IEOR
Functional Inequalities of Infinite Swapping Algorithm: Theory and Application
Wenpin Tang is currently an assistant professor at Columbia IEOR. He works at the intersection of probability, statistics, machine learning, and financial technology.
Sampling Gibbs measures at low temperature is a very important task, but it is computationally very challenging. Numerical evidence suggests that the infinite-swapping algorithm (isa) is a promising method. The isa can be seen as an improvement over the very popular replica methods. We rigorously analyze the ergodic properties of the isa in the low-temperature regime, deducing Eyring-Kramers formulas for the spectral gap (or Poincaré constant) and the log-Sobolev constant. Our main result shows that the effective energy barrier can be reduced drastically using the isa compared to the classical overdamped Langevin dynamics. As a corollary, we derive a deviation inequality showing that sampling is also improved by an exponential factor. Furthermore, we analyze simulated annealing for the isa and show that the isa is again superior to the overdamped Langevin dynamics. This is joint work with Georg Menz, André Schlichting, and Tianqi Wu.
Fred Roosta
Faculty Member in the School of Mathematics and Physics at the University of Queensland
Reproducing Stein Kernel Approach for Post-Hoc Correction of Approximate Sampling Algorithms
Fred Roosta is a data science faculty member in the School of Mathematics and Physics at the University of Queensland (UQ). He is also a research scientist at the International Computer Science Institute (ICSI) in Berkeley, California. In addition, he holds an associate investigator position with the ARC Centre of Excellence for Mathematical and Statistical Frontiers (ACEMS). In 2018, he was awarded the Discovery Early Career Researcher Award (DECRA) by the Australian Research Council for his research on second-order optimization for machine learning. Prior to joining UQ, he was a postdoctoral fellow in the Department of Statistics and the Department of Computer Science at the University of California, Berkeley, where he was supervised by Professor Michael Mahoney. He obtained his PhD from the University of British Columbia in 2015 under the supervision of Professor Uri Ascher.
We discuss a post-hoc, rejection-free method for obtaining a consistent estimator of quantities related to a target distribution of interest, using samples obtained from an ergodic Markov chain with an arbitrary stationary distribution. The approach involves Stein importance sampling, based on minimization of the kernelized Stein discrepancy, and is shown to be valid under certain conditions on the mixing of the chain. To demonstrate the practical implications of the method, we show that these conditions are satisfied for a large number of unadjusted samplers. We also conduct a numerical study showing that the method is super-efficient.
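To illustrate the central quantity (our own sketch; the method itself minimizes a sample-weighted version of this discrepancy), here is a V-statistic estimate of the squared kernelized Stein discrepancy for a 1-D target, built from the target's score function and an RBF kernel. Samples from the target give a near-zero value, while samples from a shifted distribution do not:

```python
import numpy as np

def ksd_squared(xs, score, h=1.0):
    """V-statistic estimate of the squared kernelized Stein discrepancy
    for a 1-D target with score s(x) = d/dx log p(x), using the RBF
    kernel k(x, y) = exp(-(x - y)**2 / (2 h**2))."""
    x = np.asarray(xs, dtype=float)
    d = x[:, None] - x[None, :]
    k = np.exp(-d ** 2 / (2 * h ** 2))
    s = score(x)
    # Stein kernel: s(x)s(y)k + s(x) dk/dy + s(y) dk/dx + d2k/dxdy.
    u = (s[:, None] * s[None, :]
         + s[:, None] * (d / h ** 2)        # s(x) * dk/dy
         - s[None, :] * (d / h ** 2)        # s(y) * dk/dx
         + 1.0 / h ** 2 - d ** 2 / h ** 4)  # d2k/dxdy
    return float(np.mean(k * u))
```

Only the score of the target enters, so the discrepancy can be evaluated with an unnormalized density, which is what makes Stein importance sampling usable as a post-hoc correction.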
Time | Session | Speaker | Location

2020.12.04 17:00 – 17:10 | Opening Remarks | Andre Milzarek | CUHK-Shenzhen, ZOOM
2020.12.04 17:10 – 17:55 | Spectral Gap of Slice Sampling | Daniel Rudolf | CUHK-Shenzhen, ZOOM
2020.12.04 17:55 – 18:25 | Elliptical Slice Sampler and its Convergence | Viacheslav Natarovskii | CUHK-Shenzhen, ZOOM
2020.12.04 18:25 – 19:00 | Snack Break | | CUHK-Shenzhen, ZOOM
2020.12.04 19:00 – 19:45 | Minimax Optimization in Machine Learning | Ernest Ryu | CUHK-Shenzhen, ZOOM
2020.12.04 19:45 – 20:30 | Noise-Level Robust Monte Carlo Methods for Bayesian Inference with Informative Data | Björn Sprungk | CUHK-Shenzhen, ZOOM
2020.12.05 09:15 – 10:00 | Exponential Rates in Community Detection and Reinforcement Learning | Yingjie (Tom) Fei | CUHK-Shenzhen, ZOOM
2020.12.05 10:00 – 10:45 | Learning Prediction Intervals for Regression: Generalization and Calibration | Huajie Qian | CUHK-Shenzhen, ZOOM
2020.12.05 10:45 – 11:15 | Coffee Break | | CUHK-Shenzhen, ZOOM
2020.12.05 11:15 – 12:00 | Sample and Computationally Efficient Simulation Metamodeling in High Dimensions | Xiaowei Zhang | CUHK-Shenzhen, ZOOM
2020.12.05 12:00 – 14:00 | Lunch | | Pandora, Student Center, CUHK-Shenzhen
2020.12.05 14:00 – 14:45 | Stability Analysis of Newton-MR under Hessian Perturbations | Yang Liu | CUHK-Shenzhen, ZOOM
2020.12.05 14:45 – 15:30 | Non-Convex Orthogonal Group Synchronization via the Generalized Power Method | Anthony Man-Cho So | CUHK-Shenzhen, ZOOM
2020.12.05 15:30 – 16:00 | Coffee Break and Group Photo | | CUHK-Shenzhen, ZOOM
2020.12.05 16:00 – 16:45 | Quasi-Stationary Monte Carlo Methods via Stochastic Approximation | Andi Wang | CUHK-Shenzhen, ZOOM
2020.12.06 09:15 – 10:00 | Functional Inequalities of Infinite Swapping Algorithm: Theory and Application | Wenpin Tang | CUHK-Shenzhen, ZOOM
2020.12.06 10:00 – 10:45 | Reproducing Stein Kernel Approach for Post-Hoc Correction of Approximate Sampling Algorithms | Fred Roosta | CUHK-Shenzhen, ZOOM
2020.12.06 10:45 – 18:00 | Individual Meetings | | CUHK-Shenzhen, ZOOM