State Primitive Learning to Overcome Catastrophic Forgetting in Robotics
People can continuously learn a wide range of tasks without catastrophic forgetting. To mimic this capability, current continual learning methods mainly study one-step supervised learning problems such as image classification, aiming to retain performance on previously seen images when neural networks are trained sequentially on new ones. In this paper, we focus on solving multi-step robotic tasks sequentially with a proposed architecture called state primitive learning. By projecting the original state space into a low-dimensional representation, meaningful state primitives can be generated to describe tasks. Under two different constraints on the generation of state primitives, the control signals for different robotic tasks can be learned separately using only an efficient linear regression. Experiments on several robotic manipulation tasks demonstrate the efficacy of the new method in learning control signals under the continual learning scenario, delivering substantially improved performance over comparison methods.
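To make the architecture described above more concrete, the following is a minimal conceptual sketch, not the authors' implementation: a shared low-dimensional projection of the raw robot state yields "state primitives", and each task's control signal is then fit with an ordinary linear regression on those primitives. Keeping one regression per task means training on a new task never overwrites previously learned weights, which is the continual-learning property the abstract highlights. All names (`make_projection`, `fit_task_controller`, the random-projection choice, the ridge regularizer) are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_projection(state_dim: int, primitive_dim: int) -> np.ndarray:
    """Fixed low-dimensional projection of the raw state (a stand-in for the
    representation learned in the paper)."""
    return rng.standard_normal((primitive_dim, state_dim)) / np.sqrt(state_dim)

def state_primitives(P: np.ndarray, states: np.ndarray) -> np.ndarray:
    """Map raw states (N x state_dim) to state primitives (N x primitive_dim)."""
    return np.tanh(states @ P.T)

def fit_task_controller(primitives: np.ndarray, controls: np.ndarray) -> np.ndarray:
    """Per-task linear regression from state primitives to control signals,
    solved in closed form (ridge-regularized least squares)."""
    lam = 1e-3
    A = primitives.T @ primitives + lam * np.eye(primitives.shape[1])
    return np.linalg.solve(A, primitives.T @ controls)

# Sequential (continual) training: a shared projection, one weight matrix per task.
state_dim, primitive_dim, control_dim = 12, 4, 3
P = make_projection(state_dim, primitive_dim)
task_weights = {}
for task_id in range(3):
    states = rng.standard_normal((200, state_dim))      # placeholder rollout states
    controls = rng.standard_normal((200, control_dim))  # placeholder control targets
    task_weights[task_id] = fit_task_controller(state_primitives(P, states), controls)

# Controllers for earlier tasks remain untouched when a new task is added.
u = state_primitives(P, rng.standard_normal((1, state_dim))) @ task_weights[0]
```

Separating the shared representation from cheap per-task linear heads is what keeps the sequential updates from interfering with one another in this sketch; the paper's two constraints on primitive generation would replace the fixed random projection used here.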
Related Subject Headings
- 1702 Cognitive Sciences
- 1109 Neurosciences
- 0801 Artificial Intelligence and Image Processing