
Approximation algorithms for restless bandit problems

Publication, Conference
Guha, S; Munagala, K; Shi, P
Published in: Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms
January 1, 2009

In this paper, we consider the restless bandit problem, which is one of the most well-studied generalizations of the celebrated stochastic multi-armed bandit problem in decision theory. In its ultimate generality, the restless bandit problem is known to be PSPACE-hard to approximate to any non-trivial factor, and little progress has been made on this problem despite its significance in modeling activity allocation under uncertainty. We make progress on this problem by showing that for an interesting and general subclass that we term MONOTONE bandits, a surprisingly simple and intuitive greedy policy yields a 2-approximation. Such greedy policies are termed index policies, and are popular due to their simplicity and their optimality for the stochastic multi-armed bandit problem. The MONOTONE bandit problem strictly generalizes the stochastic multi-armed bandit problem, and naturally models multi-project scheduling where the state of a project becomes increasingly uncertain when the project is not scheduled. We develop several novel techniques in the design and analysis of the index policy. Our algorithm proceeds by introducing a novel "balance" constraint to the dual of a well-known LP relaxation of the restless bandit problem. This is followed by a structural characterization of the optimal solution using both the exact primal and dual complementary slackness conditions. This yields an interpretation of the dual variables as potential functions, from which we derive the index policy and the associated analysis. Copyright © by SIAM.
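The "index policy" idea from the abstract can be illustrated with a toy sketch: each arm carries a scalar index computed from its own state alone, and the policy greedily plays the arm with the highest index. The arm dynamics and the index formula below are hypothetical placeholders, not the paper's actual MONOTONE-bandit index (which is derived from LP dual variables); they only mimic the monotone property that an idle arm's state grows more uncertain over time.

```python
# Toy index policy for a restless bandit (illustrative only; the
# dynamics and index formula are hypothetical, not the paper's).

class Arm:
    def __init__(self, reward, drift):
        self.reward = reward    # payoff estimate when the arm is fresh
        self.drift = drift      # how fast uncertainty grows while idle
        self.steps_idle = 0     # rounds since this arm was last played

    def index(self):
        # Hypothetical index: staler arms get a growing bonus, so the
        # policy eventually revisits every arm.
        return self.reward * (1.0 + self.drift * self.steps_idle)

def play_round(arms):
    # Greedy step: play the arm whose index is currently largest.
    best = max(range(len(arms)), key=lambda i: arms[i].index())
    for i, arm in enumerate(arms):
        arm.steps_idle = 0 if i == best else arm.steps_idle + 1
    return best

arms = [Arm(1.0, 0.25), Arm(0.6, 0.75), Arm(0.3, 1.5)]
choices = [play_round(arms) for _ in range(5)]
print(choices)  # -> [0, 1, 0, 2, 1]
```

Note how the played arm's index resets while idle arms' indices rise, so the policy alternates attention across projects rather than fixating on the single best arm, in the spirit of the multi-project scheduling application mentioned above.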


Published In

Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms

DOI

10.1137/1.9781611973068.4

ISBN

9780898716801

Publication Date

January 1, 2009

Start / End Page

28 / 37

Citation

APA: Guha, S., Munagala, K., & Shi, P. (2009). Approximation algorithms for restless bandit problems. In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms (pp. 28–37). https://doi.org/10.1137/1.9781611973068.4

Chicago: Guha, S., K. Munagala, and P. Shi. “Approximation algorithms for restless bandit problems.” In Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms, 28–37, 2009. https://doi.org/10.1137/1.9781611973068.4.

ICMJE: Guha S, Munagala K, Shi P. Approximation algorithms for restless bandit problems. In: Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms. 2009. p. 28–37.

MLA: Guha, S., et al. “Approximation algorithms for restless bandit problems.” Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms, 2009, pp. 28–37. Scopus, doi:10.1137/1.9781611973068.4.

NLM: Guha S, Munagala K, Shi P. Approximation algorithms for restless bandit problems. Proceedings of the Annual ACM-SIAM Symposium on Discrete Algorithms. 2009. p. 28–37.
