Solving DEC-POMDPs by expectation maximization of value functions

Published

Conference Paper

Copyright © 2016 Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.

We present a new algorithm, called PIEM, to approximately solve for the policy of an infinite-horizon decentralized partially observable Markov decision process (DEC-POMDP). The algorithm uses expectation maximization (EM) only in the policy-improvement step, with policy evaluation achieved by solving the Bellman equation in terms of finite state controllers (FSCs). This marks a key distinction of PIEM from the earlier EM algorithm of (Kumar and Zilberstein, 2010): PIEM operates directly on the DEC-POMDP without transforming it into a mixture of dynamic Bayes nets, and thus maximizes the value function exactly, avoiding complicated forward/backward message passing and the associated computational and memory cost. To overcome local optima, we follow (Pajarinen and Peltonen, 2011) in solving the DEC-POMDP over a finite horizon and using the resulting policy graph to initialize the FSCs. We solve the finite-horizon problem with a modified point-based policy generation (PBPG) algorithm, providing a closed-form solution to the subproblem that was solved by linear programming in the original PBPG. Experimental results on benchmark problems show that the proposed algorithms compare favorably to state-of-the-art methods.
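As a rough illustration of the policy-evaluation step the abstract describes, the sketch below evaluates a fixed pair of stochastic FSCs for a two-agent DEC-POMDP by solving the Bellman equation as a linear system over joint (controller-node, state) indices. This is not the paper's code; all model tensors (T, Z, R), controller parameters (pi, eta), and problem sizes are random placeholders chosen for illustration.

```python
import numpy as np

# Minimal sketch: exact evaluation of fixed FSCs for a 2-agent DEC-POMDP
# by solving the Bellman equation as a linear system (assumed toy model).

rng = np.random.default_rng(0)

S, A, O, Q = 3, 2, 2, 2          # states, actions/observations per agent, FSC nodes per agent
gamma = 0.9                      # discount factor

def rand_dist(*shape):
    """Random conditional distributions, normalized over the last axis."""
    x = rng.random(shape)
    return x / x.sum(axis=-1, keepdims=True)

# Placeholder DEC-POMDP model: transition, joint observation, reward
T = rand_dist(S, A, A, S)        # T[s, a1, a2, s']
Z = rand_dist(S, A, A, O, O)     # Z[s', a1, a2, o1, o2]
R = rng.random((S, A, A))        # R[s, a1, a2]

# Placeholder stochastic finite state controllers for the two agents
pi1, pi2 = rand_dist(Q, A), rand_dist(Q, A)          # action selection  pi[q, a]
eta1, eta2 = rand_dist(Q, O, Q), rand_dist(Q, O, Q)  # node transition   eta[q, o, q']

# Build the linear system (I - gamma * M) v = b over joint indices (q1, q2, s).
n = Q * Q * S
M = np.zeros((n, n))
b = np.zeros(n)

def idx(q1, q2, s):
    return (q1 * Q + q2) * S + s

for q1 in range(Q):
    for q2 in range(Q):
        for s in range(S):
            i = idx(q1, q2, s)
            for a1 in range(A):
                for a2 in range(A):
                    pa = pi1[q1, a1] * pi2[q2, a2]
                    b[i] += pa * R[s, a1, a2]
                    for s2 in range(S):
                        for o1 in range(O):
                            for o2 in range(O):
                                w = pa * T[s, a1, a2, s2] * Z[s2, a1, a2, o1, o2]
                                for p1 in range(Q):
                                    for p2 in range(Q):
                                        M[i, idx(p1, p2, s2)] += (
                                            w * eta1[q1, o1, p1] * eta2[q2, o2, p2]
                                        )

# Exact value of the joint FSC policy for every (q1, q2, s)
v = np.linalg.solve(np.eye(n) - gamma * M, b)
print(v.reshape(Q, Q, S))
```

In a policy-iteration scheme of the kind described, values like these would feed the EM-based policy-improvement step; the nested loops here are for clarity only and would be vectorized in practice.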

Cited Authors

  • Song, Z; Liao, X; Carin, L

Published Date

  • January 1, 2016

Published In

  • AAAI Spring Symposium Technical Report

Volume / Issue

  • SS-16-01 - 07 /

Start / End Page

  • 68 - 76

International Standard Book Number 13 (ISBN-13)

  • 9781577357544

Citation Source

  • Scopus