Approximation algorithms for restless bandit problems

Publication: Journal Article
Guha, S; Munagala, K; Shi, P
Published in: Journal of the ACM
December 1, 2010

The restless bandit problem is one of the most well-studied generalizations of the celebrated stochastic multi-armed bandit (MAB) problem in decision theory. In its ultimate generality, the restless bandit problem is known to be PSPACE-hard to approximate to any nontrivial factor, and little progress has been made on this problem despite its significance in modeling activity allocation under uncertainty. In this article, we consider the FEEDBACK MAB problem, where the reward obtained by playing each of n independent arms varies according to an underlying on/off Markov process whose exact state is only revealed when the arm is played. The goal is to design a policy for playing the arms in order to maximize the infinite-horizon time-average expected reward. This problem is also an instance of a Partially Observable Markov Decision Process (POMDP), and is widely studied in wireless scheduling and unmanned aerial vehicle (UAV) routing. Unlike the stochastic MAB problem, the FEEDBACK MAB problem does not admit greedy index-based optimal policies. We develop a novel duality-based algorithmic technique that yields a surprisingly simple and intuitive (2 + ε)-approximate greedy policy for this problem. We show that both in terms of approximation factor and computational efficiency, our policy is closely related to the Whittle index, which is widely used for its simplicity and efficiency of computation. Subsequently, we define a multi-state generalization that we term MONOTONE bandits, which remains a subclass of the restless bandit problem. We show that our policy remains a 2-approximation in this setting, and further, our technique is robust enough to incorporate various side-constraints such as blocking plays, switching costs, and even models where determining the state of an arm is a separate operation from playing it. Our technique is also of independent interest for other restless bandit problems, and we provide an example in nonpreemptive machine replenishment. Interestingly, in this case, our policy provides a constant factor guarantee, whereas the Whittle index is provably polynomially worse. © 2010 ACM.
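
To make the model concrete, the following is a minimal Python sketch of the FEEDBACK MAB setting described in the abstract, under assumed parameters: each arm's on/off chain uses hypothetical transition probabilities p01 (off to on) and p10 (on to off), and playing an "on" arm yields reward 1. It runs a myopic belief-greedy baseline, which is illustrative only; as the abstract notes, such greedy index policies are not optimal here, which is what motivates the paper's duality-based (2 + ε)-approximate policy (not reproduced in this sketch).

import random

class Arm:
    """One arm of the FEEDBACK MAB: a hidden on/off Markov chain."""
    def __init__(self, p01, p10, rng):
        self.p01 = p01        # hypothetical: P(off -> on) per step
        self.p10 = p10        # hypothetical: P(on -> off) per step
        self.rng = rng
        self.state = rng.random() < 0.5   # hidden true state (True = on)
        self.belief = 0.5                 # P(on) given past observations

    def advance(self):
        # The hidden state evolves every step, whether or not the arm is played.
        if self.state:
            self.state = self.rng.random() >= self.p10
        else:
            self.state = self.rng.random() < self.p01

def simulate(arms, horizon):
    """Myopic baseline: always play the arm with the highest belief of being on.
    Illustrative only; the paper shows greedy index policies are not optimal
    for FEEDBACK MAB and instead derives a (2 + eps)-approximation via duality."""
    total = 0.0
    for _ in range(horizon):
        chosen = max(range(len(arms)), key=lambda i: arms[i].belief)
        observed_on = arms[chosen].state      # playing reveals the exact state
        total += 1.0 if observed_on else 0.0  # reward 1 iff the played arm is on
        for i, arm in enumerate(arms):
            if i == chosen:
                # Belief collapses to the observation, then advances one step.
                arm.belief = (1 - arm.p10) if observed_on else arm.p01
            else:
                # Unobserved arm: forward Bayesian update of P(on).
                arm.belief = arm.belief * (1 - arm.p10) + (1 - arm.belief) * arm.p01
            arm.advance()
    return total / horizon                    # time-average reward

rng = random.Random(0)
arms = [Arm(0.2, 0.3, rng), Arm(0.05, 0.05, rng), Arm(0.5, 0.5, rng)]
print(f"time-average reward of myopic policy: {simulate(arms, 100_000):.3f}")

The belief maintained for each unplayed arm is the sufficient statistic of the underlying POMDP; index policies such as the Whittle index are likewise computed from this belief state.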

Published In

Journal of the ACM

DOI

10.1145/1870103.1870106

EISSN

1557-735X

ISSN

0004-5411

Publication Date

December 1, 2010

Volume

58

Issue

1

Related Subject Headings

  • Computation Theory & Mathematics
  • 46 Information and computing sciences
  • 08 Information and Computing Sciences
 

Citation

APA: Guha, S., Munagala, K., & Shi, P. (2010). Approximation algorithms for restless bandit problems. Journal of the ACM, 58(1). https://doi.org/10.1145/1870103.1870106
Chicago: Guha, S., K. Munagala, and P. Shi. “Approximation algorithms for restless bandit problems.” Journal of the ACM 58, no. 1 (December 1, 2010). https://doi.org/10.1145/1870103.1870106.
ICMJE: Guha S, Munagala K, Shi P. Approximation algorithms for restless bandit problems. Journal of the ACM. 2010 Dec 1;58(1).
MLA: Guha, S., et al. “Approximation algorithms for restless bandit problems.” Journal of the ACM, vol. 58, no. 1, Dec. 2010. Scopus, doi:10.1145/1870103.1870106.
NLM: Guha S, Munagala K, Shi P. Approximation algorithms for restless bandit problems. Journal of the ACM. 2010 Dec 1;58(1).
