
【Academic Seminar】Learning Generalized Strong Branching for Set Covering, Set Packing, and 0-1 Knapsack Problems - Mr. Yu Yang


Topic: Learning Generalized Strong Branching for Set Covering, Set Packing, and 0-1 Knapsack Problems

Speaker: Mr. Yu Yang, Georgia Institute of Technology

Time and Date: 15:30 - 16:30, Friday, December 13, 2019

Venue: Room 110, Zhi Xin Building


Abstract:

Set branching, a natural generalization of standard branching, branches on a set of variables, which potentially gives tighter local bounds and consequently yields a smaller search tree. However, selecting a good set to branch on is an even more complicated problem than choosing a good single branching variable. Generalized strong branching on sets of size up to k (GSB-k) considers sets of size at most k and chooses the branching set in a strong branching fashion; it is prohibitively time-consuming due to the overhead of solving a large number of linear programming (LP) relaxations. We apply machine learning techniques to learn GSB-2 and demonstrate that the learned branching scheme, LRN-GSB, significantly outperforms the default branching scheme (CPLEX-D) of the state-of-the-art commercial solver CPLEX on set covering, set packing, and 0-1 knapsack problems. Moreover, LRN-GSB is robust in the sense that it consistently beats CPLEX-D on large (hard) instances even when trained only on small (easy) instances.
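To illustrate the idea described in the abstract, the sketch below scores candidate variable pairs in a GSB-2 style for a toy 0-1 knapsack LP relaxation: each pair is branched on via its fractional sum, both child LPs are solved, and the pair with the largest product of bound decreases is selected. The scipy-based `lp_bound` helper, the product scoring rule, and the toy data are illustrative assumptions, not the speaker's implementation.

```python
# Minimal sketch (assumed, not the speaker's code): GSB-2-style scoring of
# variable pairs for the LP relaxation of a toy 0-1 knapsack instance.
from itertools import combinations

import numpy as np
from scipy.optimize import linprog


def lp_bound(c, a, b, extra_A=None, extra_b=None):
    """Solve max c.x s.t. a.x <= b, 0 <= x <= 1, plus optional extra rows."""
    A_ub = [a] if extra_A is None else [a] + extra_A
    b_ub = [b] if extra_b is None else [b] + extra_b
    res = linprog(-np.asarray(c), A_ub=np.asarray(A_ub), b_ub=np.asarray(b_ub),
                  bounds=[(0, 1)] * len(c), method="highs")
    return (-res.fun, res.x) if res.success else (-np.inf, None)


def gsb2_choose_pair(c, a, b, eps=1e-6):
    """For every pair (i, j) with fractional sum s = x_i + x_j, solve the two
    child LPs (x_i + x_j <= floor(s) and x_i + x_j >= ceil(s)) and score the
    pair by the product of the resulting bound decreases (an assumed rule)."""
    parent, x = lp_bound(c, a, b)
    best_pair, best_score = None, -np.inf
    for i, j in combinations(range(len(c)), 2):
        s = x[i] + x[j]
        if abs(s - round(s)) < eps:      # pair sum already integral: skip
            continue
        row = np.zeros(len(c)); row[i] = row[j] = 1.0
        down, _ = lp_bound(c, a, b, [row], [np.floor(s)])        # sum <= floor(s)
        up, _ = lp_bound(c, a, b, [-row], [-np.ceil(s)])         # sum >= ceil(s)
        score = max(parent - down, eps) * max(parent - up, eps)
        if score > best_score:
            best_pair, best_score = (i, j), score
    return best_pair, best_score


if __name__ == "__main__":
    profits = [10, 13, 7, 5, 9]   # toy data (assumed)
    weights = [6, 8, 4, 3, 5]
    print(gsb2_choose_pair(profits, weights, b=13))
```

The sketch makes the source of the overhead concrete: even for pairs on a five-variable toy instance, every candidate requires two extra LP solves, which is exactly the cost the learned scheme LRN-GSB is meant to avoid.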


Biography:

Yu Yang is a Ph.D. candidate in the H. Milton Stewart School of Industrial and Systems Engineering (ISyE) at Georgia Tech, under the supervision of Martin Savelsbergh and Natashia Boland. He received his B.S. in Applied Mathematics from Peking University in China. His interests lie broadly in discrete and continuous optimization, with a current focus on efficiently solving non-convex optimization models, both in theory and in practice. He has been working on three research streams: 1) efficient branching strategies for classical integer programs (IPs); 2) randomized accelerated first-order methods for non-convex optimization in statistical learning; and 3) applications involving large-scale optimization models with the potential to impact the real world.