Transfer in reinforcement learning via shared features
We present a framework for transfer in reinforcement learning based on the idea that related tasks share some common features, and that transfer can be achieved via those shared features. The framework attempts to capture the notion of tasks that are related but distinct, and provides some insight into when transfer can be usefully applied to a problem sequence and when it cannot. We apply the framework to the knowledge transfer problem, and show that an agent can learn a portable shaping function from experience in a sequence of tasks to significantly improve performance in a later related task, even given a very brief training period. We also apply the framework to skill transfer, to show that agents can learn portable skills across a sequence of tasks that significantly improve performance on later related tasks, approaching the performance of agents given perfectly learned problem-specific skills. © 2012 George Konidaris, Ilya Scheidwasser and Andrew Barto.
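The abstract describes learning a portable shaping function over features shared across tasks and reusing it to speed learning on a later, related task. The paper itself supplies no code here, so the following is a minimal, hypothetical sketch of that general idea: tabular Q-learning with a potential-based shaping term gamma*Phi(s') - Phi(s), where Phi is estimated over task-independent ("shared") features. The `env` interface (`reset()`, `step(a)`, `actions`), the `shared_features` mapping, and the choice of averaging learned values into a potential are illustrative assumptions, not the authors' actual method.

```python
import random
from collections import defaultdict

# Hypothetical sketch (not the authors' code): potential-based reward shaping
# where the potential Phi is estimated over features shared across tasks,
# then reused to shape rewards in a new, related task.

GAMMA = 0.99
ALPHA = 0.1
EPSILON = 0.1


def q_learning_episode(env, q, phi=None, shared_features=None):
    """Run one tabular Q-learning episode; if phi is given, add the
    potential-based shaping term gamma*Phi(s') - Phi(s) computed on
    shared features."""
    s = env.reset()
    done = False
    while not done:
        # epsilon-greedy action selection
        if random.random() < EPSILON:
            a = random.choice(env.actions)
        else:
            a = max(env.actions, key=lambda act: q[(s, act)])
        s_next, r, done = env.step(a)

        # shaping bonus computed only from task-independent features
        if phi is not None:
            f, f_next = shared_features(s), shared_features(s_next)
            r = r + GAMMA * phi[f_next] - phi[f]

        target = r if done else r + GAMMA * max(q[(s_next, act)] for act in env.actions)
        q[(s, a)] += ALPHA * (target - q[(s, a)])
        s = s_next
    return q


def learn_potential(source_tasks, shared_features, episodes=100):
    """Estimate Phi over shared features by averaging learned Q-values
    observed across a sequence of source tasks (one plausible choice of
    'portable shaping function')."""
    totals, counts = defaultdict(float), defaultdict(int)
    for env in source_tasks:
        q = defaultdict(float)
        for _ in range(episodes):
            q_learning_episode(env, q)
        for (s, a), value in q.items():
            f = shared_features(s)
            totals[f] += value
            counts[f] += 1
    return defaultdict(float, {f: totals[f] / counts[f] for f in totals})
```

Under these assumptions, an agent facing a new related task would reuse the learned potential, e.g. `phi = learn_potential(source_tasks, shared_features)` followed by `q_learning_episode(new_task, defaultdict(float), phi, shared_features)`; because the shaping is potential-based, it biases exploration without changing the task's optimal policy.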
Related Subject Headings
- Artificial Intelligence & Image Processing
- 4905 Statistics
- 4611 Machine learning
- 17 Psychology and Cognitive Sciences
- 08 Information and Computing Sciences