Paper Detail

Paper ID L.12.2
Paper Title An Improved Regret Bound for Thompson Sampling in the Gaussian Linear Bandit Setting
Authors Cem Kalkanli, Ayfer Ozgur, Stanford University, United States
Session L.12: Multi-Arm Bandits
Presentation Lecture
Track Statistics and Learning Theory
Abstract Thompson sampling has attracted significant recent interest due to its wide applicability to online learning problems and its strong empirical and theoretical performance. In this paper, we analyze the performance of Thompson sampling in the canonical Gaussian linear bandit setting. We prove that the Bayesian regret of Thompson sampling in this setting is bounded by $O(\sqrt{T\log(T)})$, improving on an earlier bound of $O(\sqrt{T}\log(T))$ in the literature for the case of infinite, compact action sets. Our proof relies on a Cauchy–Schwarz type inequality which may be of interest in its own right.
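The abstract does not include the algorithm itself, but the setting it analyzes is standard. The following is a minimal illustrative sketch (not the paper's code) of Thompson sampling in a Gaussian linear bandit with a finite action set: the learner keeps an exact Gaussian posterior over the unknown parameter vector, samples from it each round, and plays the arm that is optimal for the sample. All function and variable names here (`thompson_sampling_linear`, `theta_star`, etc.) are illustrative assumptions, and the paper's bound concerns infinite, compact action sets, whereas this sketch uses a finite set for simplicity.

```python
import numpy as np

def thompson_sampling_linear(actions, theta_star, T, noise_std=1.0,
                             prior_var=1.0, seed=0):
    """Illustrative Thompson sampling for a Gaussian linear bandit.

    actions: (K, d) array of arms; playing arm a yields reward
    <theta_star, a> + N(0, noise_std^2). Returns cumulative pseudo-regret.
    """
    rng = np.random.default_rng(seed)
    d = actions.shape[1]
    # Gaussian posterior over theta in precision form: N(mu, Sigma),
    # precision = I/prior_var + (1/noise_std^2) * sum_t a_t a_t^T.
    precision = np.eye(d) / prior_var
    b = np.zeros(d)          # accumulates sum_t r_t * a_t
    mean_rewards = actions @ theta_star
    opt = mean_rewards.max()
    regret = 0.0
    for _ in range(T):
        Sigma = np.linalg.inv(precision)
        mu = Sigma @ b / noise_std**2
        # Sample a parameter from the posterior and act greedily for it.
        theta_t = rng.multivariate_normal(mu, Sigma)
        a = actions[np.argmax(actions @ theta_t)]
        r = a @ theta_star + rng.normal(0.0, noise_std)
        # Rank-1 Bayesian linear-regression posterior update.
        precision += np.outer(a, a) / noise_std**2
        b += r * a
        regret += opt - a @ theta_star
    return regret
```

The Bayesian regret bound in the paper says that, averaged over the prior, the quantity this sketch accumulates grows as $O(\sqrt{T\log(T)})$.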



2021 IEEE International Symposium on Information Theory

11-16 July 2021 | Melbourne, Victoria, Australia
