
In-Context Reinforcement Learning From Suboptimal Historical Data

Publication, Conference
Dong, J; Guo, M; Fang, EX; Yang, Z; Tarokh, V
Published in: Proceedings of Machine Learning Research
January 1, 2025

Transformer models have achieved remarkable empirical successes, largely due to their in-context learning capabilities. Inspired by this, we explore training an autoregressive transformer for in-context reinforcement learning (ICRL). In this setting, we first train a transformer on an offline dataset of trajectories collected from various RL tasks, and then fix the transformer and use it to construct an action policy for new RL tasks. Notably, we consider the setting where the offline dataset contains trajectories sampled from suboptimal behavior policies. In this case, standard autoregressive training amounts to imitation learning and yields suboptimal performance. To address this, we propose the Decision Importance Transformer (DIT) framework, which emulates the actor-critic algorithm in an in-context manner. In particular, we first train a transformer-based value function that estimates the advantage functions of the behavior policies that collected the suboptimal trajectories. We then train a transformer-based policy via a weighted maximum likelihood estimation loss, where the weights are constructed from the trained value function to steer the suboptimal policies toward the optimal ones. We conduct extensive experiments to test the performance of DIT on both bandit and Markov Decision Process problems. Our results show that DIT achieves superior performance, particularly when the offline dataset contains suboptimal historical data.
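To make the weighted maximum likelihood step concrete, the sketch below shows one common way such an advantage-weighted loss can be written. This is a minimal illustration under stated assumptions, not the authors' implementation: the exponential weighting, the temperature beta, the clamp value, and the tensor names log_probs and advantages are all assumptions made here for exposition.

import torch

def advantage_weighted_loss(log_probs: torch.Tensor,
                            advantages: torch.Tensor,
                            beta: float = 1.0) -> torch.Tensor:
    # log_probs: log pi(a_t | context) for the actions observed in the
    #            offline trajectories, produced by the transformer policy.
    # advantages: advantage estimates from a separately trained
    #             transformer-based value function (treated as fixed here).
    # Actions judged better than the behavior policy's average are
    # up-weighted; the weights are detached so gradients flow only
    # through the policy's log-probabilities.
    weights = torch.exp(advantages / beta).clamp(max=20.0)  # bounded for stability
    return -(weights.detach() * log_probs).mean()

In this sketch, ordinary autoregressive training is recovered when all weights equal one, which corresponds to pure imitation of the behavior policies.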


Published In

Proceedings of Machine Learning Research

EISSN

2640-3498

Publication Date

January 1, 2025

Volume

267

Start / End Page

14021 / 14039
 

Citation

APA: Dong, J., Guo, M., Fang, E. X., Yang, Z., & Tarokh, V. (2025). In-Context Reinforcement Learning From Suboptimal Historical Data. In Proceedings of Machine Learning Research (Vol. 267, pp. 14021–14039).
Chicago: Dong, J., M. Guo, E. X. Fang, Z. Yang, and V. Tarokh. “In-Context Reinforcement Learning From Suboptimal Historical Data.” In Proceedings of Machine Learning Research, 267:14021–39, 2025.
ICMJE: Dong J, Guo M, Fang EX, Yang Z, Tarokh V. In-Context Reinforcement Learning From Suboptimal Historical Data. In: Proceedings of Machine Learning Research. 2025. p. 14021–39.
MLA: Dong, J., et al. “In-Context Reinforcement Learning From Suboptimal Historical Data.” Proceedings of Machine Learning Research, vol. 267, 2025, pp. 14021–39.
NLM: Dong J, Guo M, Fang EX, Yang Z, Tarokh V. In-Context Reinforcement Learning From Suboptimal Historical Data. Proceedings of Machine Learning Research. 2025. p. 14021–14039.
