Optimistic Initialization for Exploration in Continuous Control

Conference Paper

Optimistic initialization underpins many theoretically sound exploration schemes in tabular domains; however, in the deep function approximation setting, optimism can quickly disappear if initialized naïvely. We propose a framework for more effectively incorporating optimistic initialization into reinforcement learning for continuous control. Our approach uses metric information about the state-action space to estimate which transitions are still unexplored, and explicitly maintains the initial Q-value optimism for the corresponding state-action pairs. We also develop methods for efficiently approximating these training objectives, and for incorporating domain knowledge into the optimistic envelope to improve sample efficiency. We empirically evaluate these approaches on a variety of hard exploration problems in continuous control, where our method outperforms existing exploration techniques.
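The abstract's core idea, keeping Q-values optimistic for state-action pairs that are far (in some metric) from visited data, can be illustrated with a minimal sketch. This is not the paper's exact algorithm; the class name, the exponential novelty weight, and the blending rule below are illustrative assumptions.

```python
import numpy as np

class OptimisticQ:
    """Illustrative sketch (not the paper's exact method): blend a learned
    Q-estimate with an optimistic prior, weighted by metric novelty."""

    def __init__(self, q_max, bandwidth=1.0):
        self.q_max = q_max          # optimistic initial value (upper bound on return)
        self.bandwidth = bandwidth  # length scale of the metric novelty weight
        self.visited = []           # stored (state, action) vectors

    def observe(self, s, a):
        # Record a visited state-action pair.
        self.visited.append(np.concatenate([s, a]))

    def weight(self, s, a):
        # Novelty weight in [0, 1]: 1 when far from all visited pairs,
        # 0 when exactly on a visited pair.
        if not self.visited:
            return 1.0
        x = np.concatenate([s, a])
        d = min(np.linalg.norm(x - v) for v in self.visited)
        return 1.0 - np.exp(-d / self.bandwidth)

    def q_value(self, s, a, q_learned):
        # Unexplored regions stay near q_max; well-visited ones use q_learned.
        w = self.weight(s, a)
        return w * self.q_max + (1.0 - w) * q_learned
```

An unvisited pair returns the fully optimistic value `q_max`, while a pair identical to stored data returns the learned estimate, so optimism decays only where experience accumulates.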

Cited Authors

  • Lobel, S; Gottesman, O; Allen, C; Bagaria, A; Konidaris, G

Published Date

  • June 30, 2022

Published In

  • Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022

Volume / Issue

  • 36

Start / End Page

  • 7612 - 7619

International Standard Book Number 10 (ISBN-10)

  • 1577358767

International Standard Book Number 13 (ISBN-13)

  • 9781577358763

Citation Source

  • Scopus