
Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits

Publication, Conference
Jin, T; Xu, P; Xiao, X; Anandkumar, A
Published in: Advances in Neural Information Processing Systems
January 1, 2022

We study the regret of Thompson sampling (TS) algorithms for exponential family bandits, where the reward distribution belongs to a one-dimensional exponential family, covering many common reward distributions such as Bernoulli, Gaussian, Gamma, and Exponential. We propose a Thompson sampling algorithm, termed ExpTS, which uses a novel sampling distribution to avoid under-estimation of the optimal arm. We provide a tight regret analysis for ExpTS that simultaneously yields both a finite-time regret bound and an asymptotic regret bound. In particular, for a K-armed bandit with exponential family rewards, ExpTS over a horizon T is sub-UCB (a strong problem-dependent criterion for finite-time regret), minimax optimal up to a factor √(log K), and asymptotically optimal. Moreover, we propose ExpTS+, which adds a greedy exploitation step on top of the sampling distribution used in ExpTS to avoid over-estimation of sub-optimal arms. ExpTS+ is an anytime bandit algorithm and achieves minimax optimality and asymptotic optimality simultaneously for exponential family reward distributions. Our proof techniques are general and conceptually simple, and they can be easily applied to analyze standard Thompson sampling with specific reward distributions.
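For readers unfamiliar with the posterior-sampling template the paper builds on, the following is a minimal sketch of standard Thompson sampling for a K-armed Bernoulli bandit with Beta posteriors. It illustrates only the generic sample-then-pull loop; it is not the ExpTS or ExpTS+ sampling distribution proposed in the paper, and the function name and parameters are illustrative.

```python
import numpy as np

def thompson_sampling_bernoulli(true_means, horizon, seed=0):
    """Standard Thompson sampling for a K-armed Bernoulli bandit.

    Each arm keeps a Beta(successes + 1, failures + 1) posterior. At every
    round we draw one sample per arm from its posterior and pull the arm
    with the largest sample. (Illustrative baseline only, not ExpTS.)
    """
    rng = np.random.default_rng(seed)
    k = len(true_means)
    successes = np.zeros(k)
    failures = np.zeros(k)
    best_mean = max(true_means)
    regret = 0.0

    for _ in range(horizon):
        # Posterior sample for each arm, then play the argmax.
        theta = rng.beta(successes + 1, failures + 1)
        arm = int(np.argmax(theta))
        reward = rng.binomial(1, true_means[arm])
        successes[arm] += reward
        failures[arm] += 1 - reward
        # Cumulative (pseudo-)regret against the best arm.
        regret += best_mean - true_means[arm]

    return regret

# Example: 3-armed Bernoulli bandit over a horizon of 10,000 rounds.
print(thompson_sampling_bernoulli([0.3, 0.5, 0.7], horizon=10_000))
```

ExpTS and ExpTS+ follow this same loop but replace the per-arm posterior sample with the paper's carefully designed sampling distribution (and, for ExpTS+, an additional greedy exploitation step) to obtain the stated finite-time and asymptotic guarantees.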


Published In

Advances in Neural Information Processing Systems

ISSN

1049-5258

Publication Date

January 1, 2022

Volume

35

Related Subject Headings

  • 4611 Machine learning
  • 1702 Cognitive Sciences
  • 1701 Psychology
 

Citation

APA: Jin, T., Xu, P., Xiao, X., & Anandkumar, A. (2022). Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits. In Advances in Neural Information Processing Systems (Vol. 35).
Chicago: Jin, T., P. Xu, X. Xiao, and A. Anandkumar. “Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits.” In Advances in Neural Information Processing Systems, Vol. 35, 2022.
ICMJE: Jin T, Xu P, Xiao X, Anandkumar A. Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits. In: Advances in Neural Information Processing Systems. 2022.
MLA: Jin, T., et al. “Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits.” Advances in Neural Information Processing Systems, vol. 35, 2022.
NLM: Jin T, Xu P, Xiao X, Anandkumar A. Finite-Time Regret of Thompson Sampling Algorithms for Exponential Family Multi-Armed Bandits. Advances in Neural Information Processing Systems. 2022.
