Approximation algorithms for partial-information based stochastic control with Markovian rewards

Conference Paper

We consider a variant of the classic multi-armed bandit problem (MAB), which we call FEEDBACK MAB, where the reward obtained by playing each of n independent arms varies according to an underlying on/off Markov process with known parameters. The evolution of the Markov chain happens irrespective of whether the arm is played, and furthermore, the exact state of the Markov chain is only revealed to the player when the arm is played and the reward observed. At most one arm (or in general, M arms) can be played at any time step. The goal is to design a policy for playing the arms in order to maximize the infinite-horizon time-average expected reward. This problem is an instance of a Partially Observable Markov Decision Process (POMDP), and a special case of the notoriously intractable "restless bandit" problem. Unlike the stochastic MAB problem, the FEEDBACK MAB problem does not admit greedy index-based optimal policies. The state of the system at any time step encodes the beliefs about the states of different arms, and the policy decisions change these beliefs; this aspect complicates the design and analysis of simple algorithms. We design a constant factor approximation to the FEEDBACK MAB problem by solving and rounding a natural LP relaxation to this problem. As far as we are aware, this is the first approximation algorithm for a POMDP problem. © 2007 IEEE.
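The belief dynamics that make this problem hard can be illustrated with a minimal simulation. The Python sketch below is a rough illustration, not the LP-rounding policy from the paper: it assumes each arm is a two-state chain with hypothetical transition probabilities alpha = P(off→on) and beta = P(on→off), shows how playing an arm collapses its belief to the observed state while unobserved beliefs drift toward the stationary distribution, and uses a naive myopic rule (play the arm believed most likely to be on) purely as a baseline. All names (Arm, myopic_policy, alpha, beta) are illustrative.

```python
import random


class Arm:
    """Two-state (on/off) Markov chain with known transition probabilities.

    alpha = P(off -> on), beta = P(on -> off); playing the arm yields
    reward 1 if its hidden state is 'on' and reveals that state.
    (Names are illustrative, not the paper's notation.)
    """

    def __init__(self, alpha, beta, rng):
        self.alpha, self.beta, self.rng = alpha, beta, rng
        stationary_on = alpha / (alpha + beta)
        self.state = 1 if rng.random() < stationary_on else 0
        self.belief = stationary_on  # P(state == 'on') given past observations

    def step_hidden(self):
        # The chain evolves every step, whether or not the arm is played.
        p_on = self.alpha if self.state == 0 else 1.0 - self.beta
        self.state = 1 if self.rng.random() < p_on else 0

    def propagate_belief(self):
        # One-step belief update in the absence of a new observation.
        self.belief = self.belief * (1.0 - self.beta) + (1.0 - self.belief) * self.alpha

    def play(self):
        # Playing reveals the exact state; the belief collapses to it.
        reward = self.state
        self.belief = float(self.state)
        return reward


def myopic_policy(arms, horizon):
    """Naive baseline: always play the arm currently believed most likely 'on'."""
    total = 0
    for _ in range(horizon):
        chosen = max(range(len(arms)), key=lambda i: arms[i].belief)
        total += arms[chosen].play()
        for arm in arms:
            arm.step_hidden()
            arm.propagate_belief()
    return total / horizon


rng = random.Random(0)
arms = [Arm(rng.uniform(0.05, 0.3), rng.uniform(0.05, 0.3), rng) for _ in range(5)]
print("time-average reward (myopic):", myopic_policy(arms, horizon=100_000))
```

The myopic rule can be far from optimal here precisely because beliefs, not true states, drive the decisions; the paper's constant-factor guarantee comes instead from rounding an LP relaxation, which is not reproduced in this sketch.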

Cited Authors

  • Guha, S; Munagala, K

Published Date

  • December 1, 2007

Published In

  • Proceedings of the 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2007)

Start / End Page

  • 483 - 493

International Standard Serial Number (ISSN)

  • 0272-5428

International Standard Book Number 10 (ISBN-10)

  • 0769530109

International Standard Book Number 13 (ISBN-13)

  • 9780769530109

Digital Object Identifier (DOI)

  • 10.1109/FOCS.2007.4389518

Citation Source

  • Scopus