Policy Caches with Successor Features

Publication, Conference
Nemecek, M; Parr, R
Published in: Proceedings of Machine Learning Research
January 1, 2021

Transfer in reinforcement learning is based on the idea that it is possible to use what is learned in one task to improve the learning process in another task. For transfer between tasks which share transition dynamics but differ in reward function, successor features have been shown to be a useful representation which allows for efficient computation of action-value functions for previously-learned policies in new tasks. These functions induce policies in the new tasks, so an agent may not need to learn a new policy for each new task it encounters, especially if it is allowed some amount of suboptimality in those tasks. We present new bounds for the performance of optimal policies in a new task, as well as an approach to use these bounds to decide, when presented with a new task, whether to use cached policies or learn a new policy.
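The abstract relies on the standard successor-feature identity Q^pi(s, a) = psi^pi(s, a) · w for rewards of the form r = phi · w, which is what makes evaluating previously-learned policies on a new task cheap. Below is a minimal NumPy sketch of that evaluation step plus greedy action selection over a cache of policies (generalized policy improvement); all array shapes, names, and the random data are illustrative assumptions, not code from the paper.

```python
import numpy as np

# Hypothetical toy setup: 3 cached policies, 4 states, 2 actions, 5 reward features.
# psi[i, s, a] holds the successor features psi^{pi_i}(s, a) of cached policy pi_i.
rng = np.random.default_rng(0)
n_policies, n_states, n_actions, n_features = 3, 4, 2, 5
psi = rng.normal(size=(n_policies, n_states, n_actions, n_features))

# A new task is specified only by its reward weights w_new (r = phi . w_new).
w_new = rng.normal(size=n_features)

# Evaluate every cached policy on the new task in one linear operation:
# Q^{pi_i}(s, a) = psi^{pi_i}(s, a) . w_new
q = psi @ w_new  # shape: (n_policies, n_states, n_actions)

# Generalized policy improvement: in each state, act greedily with respect to
# the best cached policy's action value. The resulting policy performs at least
# as well as any single cached policy on the new task.
gpi_action = q.max(axis=0).argmax(axis=-1)  # shape: (n_states,)
best_cached_value = q.max(axis=(0, 2))      # per-state value backing that choice

print("GPI action per state:", gpi_action)
print("Best cached value per state:", best_cached_value)
```

The paper's contribution, per the abstract, is a set of bounds that compare such cached-policy values against what an optimal policy for the new task could achieve, so an agent can decide whether the cache is good enough or a new policy must be learned.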


Published In

Proceedings of Machine Learning Research

EISSN

2640-3498

Publication Date

January 1, 2021

Volume

139

Start / End Page

8025 / 8033

Citation

APA: Nemecek, M., & Parr, R. (2021). Policy Caches with Successor Features. In Proceedings of Machine Learning Research (Vol. 139, pp. 8025–8033).
Chicago: Nemecek, M., and R. Parr. “Policy Caches with Successor Features.” In Proceedings of Machine Learning Research, 139:8025–33, 2021.
ICMJE: Nemecek M, Parr R. Policy Caches with Successor Features. In: Proceedings of Machine Learning Research. 2021. p. 8025–33.
MLA: Nemecek, M., and R. Parr. “Policy Caches with Successor Features.” Proceedings of Machine Learning Research, vol. 139, 2021, pp. 8025–33.
NLM: Nemecek M, Parr R. Policy Caches with Successor Features. Proceedings of Machine Learning Research. 2021. p. 8025–8033.