Stick-breaking policy learning in Dec-POMDPs

Published

Conference Paper

Abstract

Expectation maximization (EM) has recently been shown to be an efficient algorithm for learning finite-state controllers (FSCs) in large decentralized POMDPs (Dec-POMDPs). However, current methods use fixed-size FSCs and often converge to local maxima that are far from the optimal value. This paper represents the local policy of each agent with a variable-sized FSC constructed through a stick-breaking prior, leading to a new framework called the decentralized stick-breaking policy representation (Dec-SBPR). This approach learns the controller parameters with a variational Bayesian algorithm without having to assume that the Dec-POMDP model is available. The performance of Dec-SBPR is demonstrated on several benchmark problems, showing that the algorithm scales to large problems while outperforming other state-of-the-art methods.
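
The stick-breaking prior named in the abstract is, in the Bayesian nonparametrics literature, the standard truncated stick-breaking (GEM) construction. As a rough illustration only (this sketch is not from the paper, and the function stick_breaking_weights and its parameters are hypothetical names), the Python below draws truncated stick-breaking weights over candidate controller nodes; a small concentration parameter places most of the mass on the first few nodes, which is the mechanism that lets a variable-sized controller stay compact.

    import numpy as np

    def stick_breaking_weights(alpha, max_nodes, seed=None):
        """Truncated stick-breaking (GEM) weights over candidate FSC nodes.

        alpha is the concentration parameter and max_nodes the truncation
        level; smaller alpha concentrates mass on fewer nodes, favoring
        compact controllers.
        """
        rng = np.random.default_rng(seed)
        # Stick proportions v_k ~ Beta(1, alpha), one per candidate node.
        v = rng.beta(1.0, alpha, size=max_nodes)
        # Stick length remaining before the k-th break: prod_{j<k} (1 - v_j).
        remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
        return v * remaining

    # Example: with alpha = 1, most of the probability mass typically
    # falls on a handful of the 20 candidate nodes.
    w = stick_breaking_weights(alpha=1.0, max_nodes=20, seed=0)
    print(w.round(3), w.sum())  # sums to < 1; residual mass on unused nodes

In Dec-SBPR itself the controller parameters are inferred with a variational Bayesian algorithm rather than sampled; the sampler above only illustrates the shape of the prior.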

Cited Authors

  • Liu, M; Amato, C; Liao, X; Carin, L; How, JP

Published Date

  • January 1, 2015

Published In

  • IJCAI International Joint Conference on Artificial Intelligence

Volume / Issue

  • 2015-January

Start / End Page

  • 2011 - 2018

International Standard Serial Number (ISSN)

  • 1045-0823

International Standard Book Number 13 (ISBN-13)

  • 9781577357384

Citation Source

  • Scopus