Online expectation maximization for reinforcement learning in POMDPs

Journal Article

We present online nested expectation maximization for model-free reinforcement learning in a POMDP. The algorithm evaluates the policy only on the current learning episode, discarding the episode after the evaluation and retaining only a sufficient statistic, from which the policy is computed in closed form. As a result, the online algorithm has O(n) time complexity and O(1) memory complexity, compared to O(n²) and O(n) for the corresponding batch-mode algorithm, where n is the number of learning episodes. The online algorithm, which has provable convergence, is demonstrated on five benchmark POMDP problems.
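The O(1)-memory idea in the abstract — discard each episode and keep only a running sufficient statistic, updated by stochastic approximation, from which parameters follow in closed form — can be sketched generically. The toy below is a plain online-EM fit of a two-component 1D Gaussian mixture with known unit variances, not the paper's POMDP policy-learning algorithm; every name, constant, and step-size schedule here is illustrative:

```python
import math
import random

def online_em(stream, steps=5000):
    """Online EM via stochastic approximation: O(1) memory, one pass.

    Illustrative sketch only (not the paper's POMDP algorithm).
    Two-component 1D Gaussian mixture, unit variances assumed known.
    """
    w = [0.5, 0.5]       # mixture weights
    mu = [-1.0, 1.0]     # component means (initial guess)
    s0 = [0.5, 0.5]      # running sufficient statistic: E[z_k]
    s1 = [-0.5, 0.5]     # running sufficient statistic: E[z_k * x]
    for t in range(1, steps + 1):
        x = next(stream)
        # E-step on the single new sample: posterior responsibilities
        p = [w[k] * math.exp(-0.5 * (x - mu[k]) ** 2) for k in range(2)]
        z = sum(p)
        r = [pk / z for pk in p]
        # Stochastic-approximation update of the sufficient statistics;
        # the sample itself is then discarded (O(1) memory).
        g = 1.0 / (t + 1)  # decaying step size
        for k in range(2):
            s0[k] += g * (r[k] - s0[k])
            s1[k] += g * (r[k] * x - s1[k])
        # M-step: parameters in closed form from the statistics
        tot = sum(s0)
        w = [s0[k] / tot for k in range(2)]
        mu = [s1[k] / max(s0[k], 1e-12) for k in range(2)]
    return w, mu

def data():
    # Synthetic stream: 30% N(-2, 1), 70% N(2, 1)
    rng = random.Random(0)
    while True:
        yield rng.gauss(-2.0, 1.0) if rng.random() < 0.3 else rng.gauss(2.0, 1.0)

w, mu = online_em(data())
```

Each iteration touches one sample and a fixed-size statistic, so the whole run is O(n) time and O(1) memory, mirroring the complexity argument in the abstract; a batch EM would instead re-scan all stored samples on every iteration.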

Cited Authors

  • Liu, M; Liao, X; Carin, L

Published Date

  • December 1, 2013

Start / End Page

  • 1501 - 1507

International Standard Serial Number (ISSN)

  • 1045-0823

Citation Source

  • Scopus