An architecture for behavior-based reinforcement learning
This paper introduces an integration of reinforcement learning and behavior-based control designed to produce real-time learning in situated agents. The model layers a distributed, asynchronous reinforcement learning algorithm over a learned topological map and a standard behavioral substrate to create a reinforcement learning complex. The topological map yields a small, task-relevant state space that aims to make learning feasible, while the distributed and asynchronous aspects of the architecture make it compatible with behavior-based design principles. We present the design, implementation, and results of an experiment that requires a mobile robot to perform puck foraging in three artificial arenas using the new model, random decision making, and layered standard reinforcement learning. The results show that our model learns rapidly on a real robot in a real environment, learning and adapting to change more quickly than either alternative. We show that the robot is able to make the best choices it can given its drives and experiences using only local decisions, and therefore displays planning behavior without the use of classical planning techniques. Copyright © 2005 International Society for Adaptive Behavior.
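To make the core idea concrete, the sketch below shows ordinary tabular Q-learning running over a small discrete state space derived from a topological map, where each state is a map node and each action is a traversal to a neighboring node. This is a minimal illustration of why a topological map keeps the state space small, not the paper's distributed, asynchronous algorithm; the node names, reward placement, and parameters are all invented for this example.

```python
import random

# Hypothetical topological map: each node lists the nodes reachable from it.
# Names and layout are invented for illustration only.
TOPO_MAP = {
    "home": ["corridor"],
    "corridor": ["home", "arena_a", "arena_b"],
    "arena_a": ["corridor"],
    "arena_b": ["corridor"],
}
PUCK_NODE = "arena_a"  # assumed node where foraging is rewarded

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    """Tabular Q-learning over the map's (node, neighbor) pairs."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s, acts in TOPO_MAP.items() for a in acts}
    for _ in range(episodes):
        state = "home"
        for _ in range(10):  # bounded episode length
            actions = TOPO_MAP[state]
            # Epsilon-greedy action selection.
            if rng.random() < epsilon:
                action = rng.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            reward = 1.0 if action == PUCK_NODE else 0.0
            # Standard one-step Q-learning update.
            next_best = max(q[(action, a2)] for a2 in TOPO_MAP[action])
            q[(state, action)] += alpha * (
                reward + gamma * next_best - q[(state, action)]
            )
            state = action
            if reward > 0:
                break  # puck found; end episode
    return q

q = q_learning()
# Greedy choice at the corridor after learning.
best = max(TOPO_MAP["corridor"], key=lambda a: q[("corridor", a)])
```

Because the state space is just the handful of map nodes rather than raw sensor readings, the value table stays tiny and the greedy policy at each node amounts to a local decision that nonetheless routes the robot toward the rewarding arena, echoing the paper's point about planning-like behavior from local choices.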
Related Subject Headings
- Artificial Intelligence & Image Processing
- 4611 Machine learning
- 4608 Human-centred computing
- 4602 Artificial intelligence
- 1702 Cognitive Sciences
- 1701 Psychology
- 0801 Artificial Intelligence and Image Processing