Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy

Publication, Conference
Allen, C; Kirtland, A; Tao, RY; Lobel, S; Scott, D; Petrocelli, N; Gottesman, O; Parr, R; Littman, ML; Konidaris, G
Published in: Advances in Neural Information Processing Systems
January 1, 2024

Reinforcement learning algorithms typically rely on the assumption that the environment dynamics and value function can be expressed in terms of a Markovian state representation. However, when state information is only partially observable, how can an agent learn such a state representation, and how can it detect when it has found one? We introduce a metric that can accomplish both objectives, without requiring access to, or knowledge of, an underlying, unobservable state space. Our metric, the λ-discrepancy, is the difference between two distinct temporal difference (TD) value estimates, each computed using TD(λ) with a different value of λ. Since TD(λ=0) makes an implicit Markov assumption and TD(λ=1) does not, a discrepancy between these estimates is a potential indicator of a non-Markovian state representation. Indeed, we prove that the λ-discrepancy is exactly zero for all Markov decision processes and almost always non-zero for a broad class of partially observable environments. We also demonstrate empirically that, once detected, minimizing the λ-discrepancy can help with learning a memory function to mitigate the corresponding partial observability. We then train a reinforcement learning agent that simultaneously constructs two recurrent value networks with different λ parameters and minimizes the difference between them as an auxiliary loss. The approach scales to challenging partially observable domains, where the resulting agent frequently performs significantly better (and never performs worse) than a baseline recurrent agent with only a single value network.
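The core quantity in the abstract can be illustrated on a single trajectory: compute λ-return targets with the standard backward recursion G_t = r_t + γ[(1−λ)V(s_{t+1}) + λG_{t+1}], once with λ=0 (one-step TD, which implicitly assumes a Markov state) and once with λ=1 (Monte Carlo), and compare. This is a minimal sketch, not the paper's implementation; the trajectory numbers are hypothetical, and the paper's λ-discrepancy is defined over the value estimates these targets induce rather than per-step targets themselves.

```python
import numpy as np

def lambda_returns(rewards, values, gamma, lam):
    """TD(lambda) targets via the backward recursion
    G_t = r_t + gamma * ((1 - lam) * V(s_{t+1}) + lam * G_{t+1})."""
    T = len(rewards)
    G = np.zeros(T)
    next_return = values[-1]  # bootstrap from the final value estimate
    for t in reversed(range(T)):
        next_return = rewards[t] + gamma * (
            (1 - lam) * values[t + 1] + lam * next_return
        )
        G[t] = next_return
    return G

# Hypothetical 3-step trajectory: values[i] approximates V(s_i); s_3 is terminal.
rewards = np.array([0.0, 0.0, 1.0])
values = np.array([0.5, 0.6, 0.9, 0.0])
gamma = 0.9

td0 = lambda_returns(rewards, values, gamma, lam=0.0)  # one-step TD targets
td1 = lambda_returns(rewards, values, gamma, lam=1.0)  # Monte Carlo returns
gap = np.abs(td0 - td1)  # nonzero gap suggests the representation is non-Markov
```

If the value estimates were consistent with a Markov state, the two sets of targets would agree in expectation; a persistent gap is the signal the paper proposes to detect and minimize.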

Published In

Advances in Neural Information Processing Systems

ISSN

1049-5258

Publication Date

January 1, 2024

Volume

37

Related Subject Headings

  • 4611 Machine learning
  • 1702 Cognitive Sciences
  • 1701 Psychology
 

Citation

APA: Allen, C., Kirtland, A., Tao, R. Y., Lobel, S., Scott, D., Petrocelli, N., … Konidaris, G. (2024). Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy. In Advances in Neural Information Processing Systems (Vol. 37).

Chicago: Allen, C., A. Kirtland, R. Y. Tao, S. Lobel, D. Scott, N. Petrocelli, O. Gottesman, R. Parr, M. L. Littman, and G. Konidaris. “Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy.” In Advances in Neural Information Processing Systems, Vol. 37, 2024.

ICMJE: Allen C, Kirtland A, Tao RY, Lobel S, Scott D, Petrocelli N, et al. Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy. In: Advances in Neural Information Processing Systems. 2024.

MLA: Allen, C., et al. “Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy.” Advances in Neural Information Processing Systems, vol. 37, 2024.

NLM: Allen C, Kirtland A, Tao RY, Lobel S, Scott D, Petrocelli N, Gottesman O, Parr R, Littman ML, Konidaris G. Mitigating Partial Observability in Sequential Decision Processes via the Lambda Discrepancy. Advances in Neural Information Processing Systems. 2024.