
A Tutorial on Thompson Sampling

  • 2018.07.20
Dear all, you are cordially invited to a tutorial on Thompson Sampling held by visiting professor Benjamin Van Roy of the Institute for Data and Decision Analytics, CUHK-Shenzhen.

Thompson Sampling tutorial sessions


Speaker: Prof. Benjamin Van Roy

Date: 10 am–12 pm, July 19–20, 2018

Venue: Room 110, Zhi Xin Building

  • Abstract

Thompson sampling is an algorithm for online decision problems where actions are taken sequentially in a manner that must balance between exploiting what is known to maximize immediate performance and investing to accumulate new information that may improve future performance. The algorithm addresses a broad range of problems in a computationally efficient manner and is therefore enjoying wide use. This tutorial covers the algorithm and its application, illustrating concepts through a range of examples, including Bernoulli bandit problems, shortest path problems, product recommendation, assortment, active learning with neural networks, and reinforcement learning in Markov decision processes. Most of these problems involve complex information structures, where information revealed by taking an action informs beliefs about other actions. We will also discuss when and why Thompson sampling is or is not effective and relations to alternative algorithms.

For more information about Thompson sampling, participants are strongly encouraged to read this document prior to the tutorial sessions: https://web.stanford.edu/~bvr/pubs/TS_Tutorial.pdf
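As a concrete illustration of the Bernoulli bandit case mentioned in the abstract, here is a minimal sketch of Thompson sampling with Beta priors. The arm probabilities, horizon, and function name are illustrative choices, not taken from the tutorial itself:

```python
import random

def thompson_sampling(true_probs, horizon, seed=0):
    """Thompson sampling for a Bernoulli bandit with Beta(1, 1) priors.

    true_probs (the unknown success rate of each arm) and horizon
    are illustrative inputs for this sketch.
    """
    rng = random.Random(seed)
    k = len(true_probs)
    successes = [1] * k  # alpha parameters of the Beta posteriors
    failures = [1] * k   # beta parameters of the Beta posteriors
    total_reward = 0
    for _ in range(horizon):
        # Draw one sample of the mean reward of each arm from its
        # posterior, then play the arm whose sample is largest.
        samples = [rng.betavariate(successes[i], failures[i]) for i in range(k)]
        arm = max(range(k), key=lambda i: samples[i])
        # Observe a Bernoulli reward and update that arm's posterior.
        reward = 1 if rng.random() < true_probs[arm] else 0
        total_reward += reward
        if reward:
            successes[arm] += 1
        else:
            failures[arm] += 1
    return total_reward, successes, failures
```

Because each arm is chosen with the posterior probability that it is optimal, the algorithm naturally balances exploration and exploitation: uncertain arms occasionally produce large samples and get tried, while arms that keep losing are sampled less and less often.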

 

  • Biography


Benjamin Van Roy is a Professor of Electrical Engineering, Management Science and Engineering, and, by courtesy, Computer Science, at Stanford University, where he has served on the faculty since 1998. His research focuses on understanding how an agent interacting with a poorly understood environment can learn over time to make effective decisions. He is interested in questions concerning what is possible or impossible as well as how to design efficient reinforcement learning algorithms.

He has served on the editorial boards of Machine Learning, Mathematics of Operations Research (where he co-edits the Learning Theory Area), Operations Research (where he edited the Financial Engineering Area), and the INFORMS Journal on Optimization. He has also led research programs at, or founded, several technology companies, including Unica (acquired by IBM), Enuvis (acquired by SiRF), and Morgan Stanley.


He received the SB in Computer Science and Engineering and the SM and PhD in Electrical Engineering and Computer Science, all from MIT. He has been a recipient of the MIT George C. Newton Undergraduate Laboratory Project Award, the MIT Morris J. Levin Memorial Master's Thesis Award, the MIT George M. Sprowls Doctoral Dissertation Award, the National Science Foundation CAREER Award, the Stanford Tau Beta Pi Award for Excellence in Undergraduate Teaching, and the Management Science and Engineering Department's Graduate Teaching Award. He is an INFORMS Fellow and has been a Frederick E. Terman Fellow and a David Morgenthaler II Faculty Scholar. He has held visiting positions as the Wolfgang and Helga Gaul Visiting Professor at the University of Karlsruhe and as the Chin Sophonpanich Foundation Professor and the InTouch Professor at Chulalongkorn University.