
【Academic Seminar】 A Push-Pull Gradient Method for Distributed Optimization in Networks


Topic: A Push-Pull Gradient Method for Distributed Optimization in Networks

Speaker: Shi Pu, Boston University

Date and Time: 2:00 pm - 3:00 pm, March 19, 2019

Venue: Boardroom, Dao Yuan Building


Abstract:


We focus on solving a distributed convex optimization problem over a network, where each agent has its own convex cost function and the goal is to minimize the sum of the agents' cost functions while obeying the network connectivity structure. To minimize this sum, we consider a new distributed gradient-based method in which each node maintains two estimates: an estimate of the optimal decision variable and an estimate of the gradient of the average of the agents' objective functions. From the viewpoint of an agent, information about the gradients is pushed to its neighbors, while information about the decision variable is pulled from its neighbors, hence the name "push-pull gradient method". The method unifies algorithms for different distributed architectures, including decentralized (peer-to-peer), centralized (master-slave), and semi-centralized (leader-follower) architectures. We show that the algorithm converges linearly for strongly convex and smooth objective functions over a static directed network. In our numerical tests, the algorithm performs well even over time-varying directed networks.
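The two-estimate update described in the abstract can be sketched in NumPy. In this sketch each agent keeps a decision-variable estimate x_i (mixed via a row-stochastic "pull" matrix R) and a gradient tracker y_i (propagated via a column-stochastic "push" matrix C). The quadratic costs, the specific network, and the step size below are illustrative assumptions for a minimal demo, not the speaker's experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # number of agents

# Each agent i privately holds a quadratic cost f_i(x) = 0.5 * (a_i*x - b_i)^2.
a = rng.uniform(1.0, 2.0, n)
b = rng.uniform(-1.0, 1.0, n)
x_star = (a @ b) / (a @ a)  # unique minimizer of the summed cost

def local_grads(x):
    """Gradient of each agent's own cost, evaluated at that agent's estimate."""
    return a * (a * x - b)

# Directed ring with self-loops plus one extra edge so the graph is genuinely
# directed; W[i, j] = 1 means agent i receives information from agent j.
W = np.eye(n) + np.roll(np.eye(n), 1, axis=1)
W[0, 2] = 1.0
R = W / W.sum(axis=1, keepdims=True)  # row-stochastic: "pull" decision variables
C = W / W.sum(axis=0, keepdims=True)  # column-stochastic: "push" gradient info

alpha = 0.05        # step size (illustrative; theory requires it small enough)
x = np.zeros(n)     # decision-variable estimates
y = local_grads(x)  # gradient trackers, initialized at the local gradients

for _ in range(1000):
    x_next = R @ (x - alpha * y)                      # pull: mix decision variables
    y = C @ y + local_grads(x_next) - local_grads(x)  # push: track summed gradient
    x = x_next
# Every agent's estimate converges to the global minimizer x_star.
```

Because C is column-stochastic, the sum of the trackers y always equals the sum of the current local gradients, which is what forces the consensus point to be a stationary point of the summed cost rather than of any single agent's cost.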


Biography:

Shi Pu is a postdoctoral associate in the Division of Systems Engineering at Boston University. He received a B.S. degree in Engineering Mechanics from Peking University in 2012, and a Ph.D. degree in Systems Engineering from the University of Virginia in 2016. He was a postdoctoral associate at the University of Florida from 2016 to 2017, and a postdoctoral scholar at Arizona State University from 2017 to 2018. His research interests include distributed optimization, large-scale data analytics, network science, and machine learning.