
Guided policy search for sequential multitask learning

Publication: Journal Article
Xiong, F; Sun, B; Yang, X; Qiao, H; Huang, K; Hussain, A; Liu, Z
Published in: IEEE Transactions on Systems, Man, and Cybernetics: Systems
January 1, 2019

Policy search in reinforcement learning (RL) is a practical approach that interacts directly with environments in parameter space, but it often faces the dilemmas of local optima and real-time sample collection. A promising algorithm known as guided policy search (GPS) handles the training-sample challenge using trajectory-centric methods and can also provide asymptotic local convergence guarantees. However, in its current form, the GPS algorithm cannot operate in sequential multitask learning scenarios. This is due to its batch-style training requirement, where all training samples are provided together at the start of the learning process, which hinders its adaptation to real-time applications in which training samples or tasks can arrive randomly. In this paper, the GPS approach is reformulated by adapting a recently proposed lifelong-learning method, elastic weight consolidation. Specifically, Fisher information is incorporated to impart knowledge from previously learned tasks. The proposed algorithm, termed sequential multitask learning-GPS, is able to operate in sequential multitask learning settings and ensures continuous policy learning without catastrophic forgetting. Pendulum and robotic manipulation experiments demonstrate the new algorithm's efficacy in learning control policies from sequentially arriving training samples, delivering performance comparable to the traditional batch-based GPS algorithm. In conclusion, the proposed algorithm is posited as a new benchmark for the real-time RL and robotics research community.
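For context, the elastic weight consolidation idea the abstract refers to amounts to adding a Fisher-weighted quadratic penalty to the loss optimized for each newly arriving task, so that parameters that were important for earlier tasks stay close to their previously learned values. The sketch below illustrates only that penalty under simple assumptions (a flattened parameter vector and a diagonal Fisher estimate); the names ewc_penalty, sequential_task_loss, theta_star, and lam are illustrative and do not come from the paper, and the paper's actual GPS objective and Fisher estimation procedure are not reproduced here.

import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam):
    # Fisher-weighted quadratic penalty: parameters with large Fisher values
    # were important for earlier tasks, so moving them away from theta_star
    # is penalized heavily.
    return 0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2)

def sequential_task_loss(current_task_loss, theta, theta_star, fisher, lam=10.0):
    # Loss optimized for a newly arrived task: the ordinary policy-learning
    # loss plus the consolidation term, which guards against catastrophic
    # forgetting of previously learned behaviour.
    return current_task_loss + ewc_penalty(theta, theta_star, fisher, lam)

# Toy usage with three parameters and a diagonal Fisher estimate.
theta      = np.array([0.9, -0.2, 1.5])   # parameters being adapted to the new task
theta_star = np.array([1.0,  0.0, 1.5])   # parameters learned on the previous task
fisher     = np.array([5.0,  0.1, 2.0])   # diagonal Fisher information (importance weights)
print(sequential_task_loss(0.42, theta, theta_star, fisher))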

Published In

IEEE Transactions on Systems, Man, and Cybernetics: Systems

DOI

10.1109/TSMC.2018.2800040

EISSN

2168-2232

ISSN

2168-2216

Publication Date

January 1, 2019

Volume

49

Issue

1

Start / End Page

216 / 226

Citation

APA: Xiong, F., Sun, B., Yang, X., Qiao, H., Huang, K., Hussain, A., & Liu, Z. (2019). Guided policy search for sequential multitask learning. IEEE Transactions on Systems, Man, and Cybernetics: Systems, 49(1), 216–226. https://doi.org/10.1109/TSMC.2018.2800040

Chicago: Xiong, F., B. Sun, X. Yang, H. Qiao, K. Huang, A. Hussain, and Z. Liu. “Guided policy search for sequential multitask learning.” IEEE Transactions on Systems, Man, and Cybernetics: Systems 49, no. 1 (January 1, 2019): 216–26. https://doi.org/10.1109/TSMC.2018.2800040.

ICMJE: Xiong F, Sun B, Yang X, Qiao H, Huang K, Hussain A, et al. Guided policy search for sequential multitask learning. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2019 Jan 1;49(1):216–26.

MLA: Xiong, F., et al. “Guided policy search for sequential multitask learning.” IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 49, no. 1, Jan. 2019, pp. 216–26. Scopus, doi:10.1109/TSMC.2018.2800040.

NLM: Xiong F, Sun B, Yang X, Qiao H, Huang K, Hussain A, Liu Z. Guided policy search for sequential multitask learning. IEEE Transactions on Systems, Man, and Cybernetics: Systems. 2019 Jan 1;49(1):216–226.
