
Langevin Monte Carlo for Contextual Bandits

Publication, Conference
Xu, P; Zheng, H; Mazumdar, E; Azizzadenesheli, K; Anandkumar, A
Published in: Proceedings of Machine Learning Research
January 1, 2022

We study the efficiency of Thompson sampling for contextual bandits. Existing Thompson sampling-based algorithms need to construct a Laplace approximation (i.e., a Gaussian distribution) of the posterior distribution, which is inefficient to sample from in high-dimensional applications with general covariance matrices. Moreover, the Gaussian approximation may not be a good surrogate for the posterior distribution for general reward generating functions. We propose an efficient posterior sampling algorithm, viz., Langevin Monte Carlo Thompson Sampling (LMC-TS), that uses Markov Chain Monte Carlo (MCMC) methods to directly sample from the posterior distribution in contextual bandits. Our method is computationally efficient since it only needs to perform noisy gradient descent updates, without constructing the Laplace approximation of the posterior distribution. We prove that the proposed algorithm achieves the same sublinear regret bound as the best Thompson sampling algorithms for a special case of contextual bandits, viz., linear contextual bandits. We conduct experiments on both synthetic data and real-world datasets on different contextual bandit models, which demonstrate that directly sampling from the posterior is both computationally efficient and competitive in performance.
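The core idea, sketched below in Python for the linear contextual bandit case, is to replace the Laplace-approximation step of standard Thompson sampling with a few noisy gradient descent (unadjusted Langevin) updates on the negative log-posterior, and then act greedily with respect to the sampled parameter. This is a minimal illustrative sketch under simplifying assumptions, not the paper's implementation: the function names (contexts_fn, reward_fn) and the values of the step size eta, the number of inner Langevin steps, the temperature beta_inv, and the ridge parameter lam are hypothetical choices made for readability.

# Minimal sketch of Langevin Monte Carlo Thompson Sampling (LMC-TS) for a
# linear contextual bandit with a Gaussian reward model and a ridge
# (Gaussian prior) regularizer. Hyperparameters are illustrative, not the
# paper's tuned values.
import numpy as np

def lmc_ts(contexts_fn, reward_fn, d, T, K,
           eta=0.01, n_steps=50, beta_inv=0.01, lam=1.0, seed=0):
    """Run LMC-TS for T rounds with K arms and d-dimensional contexts."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(d)          # current Langevin iterate (posterior sample)
    X, y = [], []                # observed context-reward pairs

    for t in range(T):
        arms = contexts_fn(t)    # shape (K, d): one feature vector per arm

        # Noisy gradient descent (unadjusted Langevin) on the negative
        # log-posterior of the ridge regression model built from history.
        if X:
            A = np.asarray(X)
            b = np.asarray(y)
            for _ in range(n_steps):
                grad = A.T @ (A @ theta - b) + lam * theta
                noise = rng.standard_normal(d)
                theta = theta - eta * grad + np.sqrt(2 * eta * beta_inv) * noise

        # Thompson-sampling step: act greedily w.r.t. the sampled parameter.
        a = int(np.argmax(arms @ theta))
        r = reward_fn(t, a)
        X.append(arms[a])
        y.append(r)

    return theta

Because theta is warm-started across rounds, only a modest number of Langevin steps is needed per round, which is what makes the per-round cost gradient-based rather than requiring the construction and factorization of a covariance matrix.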


Published In

Proceedings of Machine Learning Research

EISSN

2640-3498

Publication Date

January 1, 2022

Volume

162

Start / End Page

24830 / 24850
 

Citation

APA: Xu, P., Zheng, H., Mazumdar, E., Azizzadenesheli, K., & Anandkumar, A. (2022). Langevin Monte Carlo for Contextual Bandits. In Proceedings of Machine Learning Research (Vol. 162, pp. 24830–24850).
Chicago: Xu, P., H. Zheng, E. Mazumdar, K. Azizzadenesheli, and A. Anandkumar. “Langevin Monte Carlo for Contextual Bandits.” In Proceedings of Machine Learning Research, 162:24830–50, 2022.
ICMJE: Xu P, Zheng H, Mazumdar E, Azizzadenesheli K, Anandkumar A. Langevin Monte Carlo for Contextual Bandits. In: Proceedings of Machine Learning Research. 2022. p. 24830–50.
MLA: Xu, P., et al. “Langevin Monte Carlo for Contextual Bandits.” Proceedings of Machine Learning Research, vol. 162, 2022, pp. 24830–50.
NLM: Xu P, Zheng H, Mazumdar E, Azizzadenesheli K, Anandkumar A. Langevin Monte Carlo for Contextual Bandits. Proceedings of Machine Learning Research. 2022. p. 24830–24850.
