
Double Explore-then-Commit: Asymptotic Optimality and Beyond

Publication, Conference
Jin, T; Xu, P; Xiao, X; Gu, Q
Published in: Proceedings of Thirty Fourth Conference on Learning Theory

We study the multi-armed bandit problem with sub-Gaussian rewards. The explore-then-commit (ETC) strategy, which consists of an exploration phase followed by an exploitation phase, is one of the most widely used algorithms in a variety of online decision applications. Nevertheless, it was shown by Garivier et al. (2016) that ETC is suboptimal in the asymptotic sense as the horizon grows, and is thus worse than fully sequential strategies such as Upper Confidence Bound (UCB). In this paper, we show that a variant of the ETC algorithm can in fact achieve asymptotic optimality for multi-armed bandit problems, just as UCB-type algorithms do, and we extend it to the batched bandit setting. Specifically, we propose a double explore-then-commit (DETC) algorithm with two exploration and two exploitation phases, and prove that DETC achieves the asymptotically optimal regret bound. To our knowledge, DETC is the first non-fully-sequential algorithm to achieve such asymptotic optimality. In addition, we extend DETC to batched bandit problems, where (i) the exploration process is split into a small number of batches and (ii) the round complexity is of central interest. We prove that a batched version of DETC achieves asymptotic optimality with only a constant round complexity. This is the first batched bandit algorithm that attains the optimal asymptotic regret bound and optimal round complexity simultaneously.
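For context, the classic ETC strategy that the abstract builds on can be sketched in a few lines: pull each arm a fixed number of times, then commit to the empirically best arm for the remaining horizon. This is a minimal illustrative sketch of plain ETC (not the paper's DETC algorithm); the function name, the `arms`-as-callables interface, and the parameter `m` (pulls per arm during exploration) are assumptions for illustration.

```python
def explore_then_commit(arms, horizon, m):
    """Classic ETC for a multi-armed bandit.

    arms: list of zero-argument callables, each returning a stochastic reward.
    horizon: total number of rounds to play.
    m: number of exploration pulls per arm before committing.
    Returns (index of the arm committed to, cumulative reward).
    """
    k = len(arms)
    counts = [0] * k
    means = [0.0] * k
    total = 0.0
    t = 0
    # Exploration phase: round-robin until every arm has m pulls.
    while t < horizon and min(counts) < m:
        i = t % k
        r = arms[i]()
        counts[i] += 1
        means[i] += (r - means[i]) / counts[i]  # incremental mean update
        total += r
        t += 1
    # Exploitation phase: commit to the empirically best arm.
    best = max(range(k), key=lambda i: means[i])
    while t < horizon:
        total += arms[best]()
        t += 1
    return best, total
```

DETC, as described in the abstract, repeats the explore/exploit pattern twice rather than once, which is what lets it match the asymptotic regret of fully sequential strategies; the sketch above only shows the single-phase baseline that DETC improves upon.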


Published In

Proceedings of Thirty Fourth Conference on Learning Theory

Publisher

PMLR

Conference Name

Conference on Learning Theory

Citation

APA
Chicago
ICMJE
MLA
NLM
Jin, T., Xu, P., Xiao, X., & Gu, Q. (n.d.). Double Explore-then-Commit: Asymptotic Optimality and Beyond. In Proceedings of Thirty Fourth Conference on Learning Theory. PMLR.
Jin, Tianyuan, Pan Xu, Xiaokui Xiao, and Quanquan Gu. “Double Explore-then-Commit: Asymptotic Optimality and Beyond.” In Proceedings of Thirty Fourth Conference on Learning Theory. PMLR, n.d.
Jin T, Xu P, Xiao X, Gu Q. Double Explore-then-Commit: Asymptotic Optimality and Beyond. In: Proceedings of Thirty Fourth Conference on Learning Theory. PMLR;
Jin, Tianyuan, et al. “Double Explore-then-Commit: Asymptotic Optimality and Beyond.” Proceedings of Thirty Fourth Conference on Learning Theory, PMLR.
Jin T, Xu P, Xiao X, Gu Q. Double Explore-then-Commit: Asymptotic Optimality and Beyond. Proceedings of Thirty Fourth Conference on Learning Theory. PMLR;
