
Modeling and planning with macro-actions in decentralized POMDPs

Publication: Journal Article
Amato, C; Konidaris, G; Kaelbling, LP; How, JP
Published in: Journal of Artificial Intelligence Research
March 1, 2019

Decentralized partially observable Markov decision processes (Dec-POMDPs) are general models for decentralized multi-agent decision making under uncertainty. However, they typically model a problem at a low level of granularity, where each agent’s actions are primitive operations lasting exactly one time step. We address the case where each agent has macro-actions: temporally extended actions that may require different amounts of time to execute. We model macro-actions as options in a Dec-POMDP, focusing on actions that depend only on information directly available to the agent during execution. Therefore, we model systems where coordination decisions only occur at the level of deciding which macro-actions to execute. The core technical difficulty in this setting is that the options chosen by each agent no longer terminate at the same time. We extend three leading Dec-POMDP algorithms for policy generation to the macro-action case, and demonstrate their effectiveness in both standard benchmarks and a multi-robot coordination problem. The results show that our new algorithms retain agent coordination while allowing high-quality solutions to be generated for significantly longer horizons and larger state-spaces than previous Dec-POMDP methods. Furthermore, in the multi-robot domain, we show that, in contrast to most existing methods that are specialized to a particular problem class, our approach can synthesize control policies that exploit opportunities for coordination while balancing uncertainty, sensor information, and information about other agents.
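To make the abstract's modeling idea concrete, the following is a minimal, hypothetical sketch (not code from the paper) of a macro-action represented as an option in the usual sense: an initiation condition, a low-level policy over primitive actions, and a termination condition, all defined over the agent's locally available information. The class and function names (Option, Agent, choose_option, and so on) are illustrative assumptions, not the authors' API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable, List, Optional

# An agent's own actions/observations gathered since its current option began.
LocalHistory = List[Any]


@dataclass
class Option:
    """A macro-action in the options style: initiation, local policy, termination."""
    name: str
    can_start: Callable[[LocalHistory], bool]         # initiation condition
    policy: Callable[[LocalHistory], Any]             # maps local history -> primitive action
    should_terminate: Callable[[LocalHistory], bool]  # termination condition


@dataclass
class Agent:
    """Runs its current macro-action using only locally available information."""
    options: List[Option]
    current: Optional[Option] = None
    history: LocalHistory = field(default_factory=list)

    def choose_option(self, high_level_policy: Callable[[LocalHistory], Option]) -> None:
        # Coordination decisions happen only here: which macro-action to execute next.
        self.current = high_level_policy(self.history)
        self.history = []

    def step(self, observation: Any) -> Any:
        # Execute the option's low-level policy on local information only.
        self.history.append(observation)
        return self.current.policy(self.history)

    def option_done(self) -> bool:
        # Different agents' options generally terminate at different times; handling
        # this asynchrony is the core difficulty the extended planners address.
        return self.current.should_terminate(self.history)
```

Because each agent's termination condition fires based on its own local history, agents generally finish their macro-actions at different times, which is why the extended Dec-POMDP planners described in the abstract must evaluate policies over asynchronously terminating options.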

Published In

Journal of Artificial Intelligence Research

DOI

10.1613/jair.1.11418

ISSN

1076-9757

Publication Date

March 1, 2019

Volume

64

Start / End Page

817 / 859

Related Subject Headings

  • Artificial Intelligence & Image Processing
  • 4611 Machine learning
  • 4603 Computer vision and multimedia computation
  • 4602 Artificial intelligence
  • 1702 Cognitive Sciences
  • 0801 Artificial Intelligence and Image Processing
  • 0102 Applied Mathematics
 

Citation

  • APA: Amato, C., Konidaris, G., Kaelbling, L. P., & How, J. P. (2019). Modeling and planning with macro-actions in decentralized POMDPs. Journal of Artificial Intelligence Research, 64, 817–859. https://doi.org/10.1613/jair.1.11418
  • Chicago: Amato, C., G. Konidaris, L. P. Kaelbling, and J. P. How. “Modeling and planning with macro-actions in decentralized POMDPs.” Journal of Artificial Intelligence Research 64 (March 1, 2019): 817–59. https://doi.org/10.1613/jair.1.11418.
  • ICMJE: Amato C, Konidaris G, Kaelbling LP, How JP. Modeling and planning with macro-actions in decentralized POMDPs. Journal of Artificial Intelligence Research. 2019 Mar 1;64:817–59.
  • MLA: Amato, C., et al. “Modeling and planning with macro-actions in decentralized POMDPs.” Journal of Artificial Intelligence Research, vol. 64, Mar. 2019, pp. 817–59. Scopus, doi:10.1613/jair.1.11418.
  • NLM: Amato C, Konidaris G, Kaelbling LP, How JP. Modeling and planning with macro-actions in decentralized POMDPs. Journal of Artificial Intelligence Research. 2019 Mar 1;64:817–859.
