[Summer Short Course] Stochastic Approximation and Applications - Prof. Gang George Yin
Topic: Stochastic Approximation and Applications
Speaker: Prof. Gang George Yin, Wayne State University
Time: 9:30 am - 11:30 am, 2:00 pm - 4:00 pm, June 17, 2019
9:30 am - 11:30 am, June 18, 2019
Venue: Room 208, Cheng Dao Building
Abstract:
Stochastic approximation (SA) methods play a fundamental role in numerous applications such as reinforcement learning, neural networks, signal processing, optimization, and communications. In this tutorial, an introduction to SA approaches, techniques, and applications is given.
The course will cover the following topics: Overview, Convergence, Weak Convergence, Rates of Convergence, Efficiency, Time-varying Parameters and Tracking.
Biography:
George Yin received the B.S. degree in Mathematics from the University of Delaware in 1983, and the M.S. degree in Applied Mathematics (1984), the M.S. degree in Electrical Engineering (1987), and the Ph.D. degree in Applied Mathematics (1987), all from Brown University. He then joined the Department of Mathematics, Wayne State University, and became a full professor in 1996.
His research interests include stochastic systems theory, applied probability, stochastic processes, stochastic recursive algorithms, signal processing, and control and optimization. He has authored or coauthored over 190 refereed journal papers and 9 research books.
George Yin was elected a Fellow of IEEE in 2002 for contributions to approximation, optimization, and control of stochastic systems. He is also a Fellow of IFAC and a Fellow of SIAM.
Detailed Abstract:
Part 1 - Overview: In this part, we discuss what stochastic approximation is and what kinds of problems can be solved using stochastic approximation methods. The Robbins-Monro and Kiefer-Wolfowitz algorithms will be introduced. An overview of the analysis of such algorithms is provided, together with efficiency considerations. Several application examples will be mentioned.
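As a point of reference, here is a minimal sketch (not part of the course material) of the Robbins-Monro iteration for finding a root of an unknown function observed only through noisy measurements. The decreasing step sizes a_n = a/(n+1) and the test function are illustrative choices:

```python
import numpy as np

def robbins_monro(noisy_f, x0, n_iter=10_000, a=1.0, rng=None):
    """Robbins-Monro root finding: x_{n+1} = x_n - a_n * Y_n,
    where Y_n is a noisy observation of f(x_n) and a_n = a / (n + 1)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = x0
    for n in range(n_iter):
        y = noisy_f(x, rng)          # noisy measurement of f at the current iterate
        x = x - (a / (n + 1)) * y    # decreasing step size drives convergence
    return x

# Example: find the root of f(x) = x - 2 observed with Gaussian noise.
root = robbins_monro(lambda x, rng: (x - 2.0) + rng.normal(scale=0.5), x0=0.0)
print(root)  # should be close to 2.0
```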
Part 2 - Convergence: In this part, we highlight the main idea of convergence of the recursively defined procedures by using the associated ordinary differential equations. The methods are now standard and are termed ODE methods. We illustrate how the discrete iterates can be embedded in a continuous-time interpolation. Then, we demonstrate how the asymptotic analysis can be carried out. The key is a combined use of analytic (dynamical systems) and probabilistic arguments.
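Stated informally (the precise conditions are part of the course), the ODE connection takes the following form for a generic iteration $x_{n+1} = x_n + a_n Y_n$ whose noisy updates have mean dynamics $\bar g$:

```latex
% Interpolate the iterates on the step-size time scale
% t_n = \sum_{k=0}^{n-1} a_k, with x^0(t) = x_n for t in [t_n, t_{n+1}).
% Under standard conditions on the steps a_n and the noise,
% the interpolated process follows the limit ODE
\[
  \dot{x}(t) = \bar g\bigl(x(t)\bigr),
\]
% and the limit points of the iterates are contained in the
% limit set of this ODE.
```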
Part 3 - Weak Convergence: Weak convergence is a substantial generalization of convergence in distribution. In this part, we give an introduction to weak convergence methods. Then we describe how weak convergence methods can be used in the analysis of stochastic approximation algorithms.
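For reference, the standard definition underlying this part: a sequence of random elements converges weakly when expectations of bounded continuous functionals converge.

```latex
% Weak convergence of X_n to X (random elements of a metric space S):
\[
  X_n \Rightarrow X
  \quad\Longleftrightarrow\quad
  \mathbb{E}\, f(X_n) \to \mathbb{E}\, f(X)
  \ \ \text{for every bounded continuous } f : S \to \mathbb{R}.
\]
% For real-valued random variables this reduces to convergence in
% distribution; for process-valued sequences, such as interpolated
% stochastic approximation iterates, it is the appropriate generalization.
```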
Part 4 - Rates of Convergence: An important part of analyzing a recursive algorithm is the study of its rate of convergence. The traditional rate-of-convergence study for stochastic approximation shows that a suitably centered and scaled sequence of estimation errors is asymptotically normally distributed. Instead of looking at the discrete iteration directly, we show that a suitably scaled sequence converges weakly to a diffusion process. The scaling factor, together with the asymptotic covariance, gives the convergence rate.
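In symbols, a typical form of the classical result (stated here under unspecified standard assumptions, with x* denoting the sought root) is:

```latex
% Asymptotic normality for Robbins-Monro with steps a_n = 1/n:
\[
  \sqrt{n}\,(x_n - x^*) \;\Rightarrow\; N(0, \Sigma).
\]
% More generally, the interpolation of the scaled errors converges weakly
% to a stationary diffusion process; the scaling factor sqrt(n) together
% with the asymptotic covariance \Sigma quantifies the convergence rate.
```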
Part 5 - Efficiency Issues: We start by motivating the efficiency issue using an optimization approach. Then, we discuss the related so-called adaptive stochastic approximation problems. To further improve efficiency, we discuss several recent advances using averaging ideas.
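One widely used averaging idea is iterate averaging (often attributed to Polyak and Ruppert); the sketch below assumes slower-decaying step sizes a_n = a/(n+1)^gamma with 1/2 < gamma < 1 and simply reports the running average of the iterates alongside the last iterate:

```python
import numpy as np

def averaged_sa(noisy_f, x0, n_iter=10_000, a=1.0, gamma=0.7, rng=None):
    """Stochastic approximation with iterate averaging (Polyak-Ruppert style).
    Uses step sizes a_n = a / (n + 1)**gamma with 1/2 < gamma < 1 and returns
    both the last iterate and the running average of all iterates."""
    rng = rng if rng is not None else np.random.default_rng(0)
    x = x0
    x_bar = 0.0
    for n in range(n_iter):
        y = noisy_f(x, rng)
        x = x - a / (n + 1) ** gamma * y   # slower-decaying step size
        x_bar += (x - x_bar) / (n + 1)     # running average of the iterates
    return x, x_bar

# The averaged iterate is typically much less noisy than the raw iterate.
x_last, x_avg = averaged_sa(lambda x, rng: (x - 2.0) + rng.normal(scale=0.5), x0=0.0)
print(x_last, x_avg)
```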
Part 6 - Time-varying Parameters: This part is devoted to treating time-varying parameter problems. We present a couple of problems arising from discrete optimization and time-varying parameter tracking. We also discuss the associated networked systems that are of current interest. One problem couples the usual stochastic approximation with an additional randomly varying Markov chain. We will discuss how such problems can be handled and analyzed.
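To illustrate the tracking setting (with a purely hypothetical drift model, not taken from the course): when the target parameter drifts slowly, a constant step size lets the algorithm follow the moving parameter instead of freezing as decreasing steps would.

```python
import numpy as np

def track_time_varying(n_iter=5_000, step=0.05, drift=0.01, noise=0.5, seed=0):
    """Constant-step-size stochastic approximation tracking a slowly
    drifting parameter theta (illustrative random-walk drift model)."""
    rng = np.random.default_rng(seed)
    theta, x = 0.0, 0.0
    errors = []
    for _ in range(n_iter):
        theta += drift * rng.normal()            # slow random-walk drift of the target
        y = (x - theta) + noise * rng.normal()   # noisy measurement of the error x - theta
        x = x - step * y                         # constant step size enables tracking
        errors.append(abs(x - theta))
    return np.mean(errors[-1000:])               # average tracking error near the end

print(track_time_varying())
```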