
Sample Complexity of Distributionally Robust Off-Dynamics Reinforcement Learning with Online Interaction

Publication · Conference
He, Y; Liu, Z; Wang, W; Xu, P
Published in: Proceedings of Machine Learning Research
January 1, 2025

Off-dynamics reinforcement learning (RL), in which the training and deployment transition dynamics differ, can be formulated as learning in a robust Markov decision process (RMDP) whose transition dynamics lie in an uncertainty set. Existing literature mostly assumes access to generative models that allow arbitrary state-action queries, or to pre-collected datasets with good state coverage of the deployment environment, thereby bypassing the challenge of exploration. In this work, we study a more realistic and challenging setting in which the agent is limited to online interaction with the training environment. To capture the intrinsic difficulty of exploration in online RMDPs, we introduce the supremal visitation ratio, a novel quantity that measures the mismatch between the training and deployment dynamics. We show that if this ratio is unbounded, online learning becomes exponentially hard. We then propose the first computationally efficient algorithm that achieves sublinear regret in online RMDPs with f-divergence-based transition uncertainties. We also establish matching regret lower bounds, demonstrating that our algorithm attains the optimal dependence on both the supremal visitation ratio and the number of interaction episodes. Finally, we validate our theoretical results through comprehensive numerical experiments.
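The abstract's central quantity is a visitation ratio between deployment and training dynamics. The paper's formal definition of the supremal visitation ratio is not reproduced here; as a rough, hypothetical illustration of the idea, the sketch below rolls the state-visitation distribution forward under a fixed policy in two small finite MDPs and reports the largest per-state ratio of deployment to training visitation probability. All names and shapes (`P_train`, `P_deploy`, `pi`, `mu0`) are this sketch's own assumptions, not the paper's notation.

```python
# Illustrative sketch only: a finite-horizon "visitation ratio" between two
# known tabular transition kernels under a fixed policy. The paper's supremal
# visitation ratio is defined differently (over policies and without assuming
# known kernels); this just conveys the intuition that an unbounded ratio
# means the deployment dynamics reach states the training dynamics do not.
import numpy as np

def visitation_ratio(P_train, P_deploy, pi, mu0, horizon):
    """Max over steps/states of deployment visitation / training visitation.

    P_train, P_deploy: (S, A, S) transition kernels (assumed known here).
    pi: (S, A) stationary policy. mu0: (S,) initial state distribution.
    """
    # Policy-induced state-to-state kernels: M[s, s'] = sum_a pi[s, a] P[s, a, s'].
    M_train = np.einsum("sa,sat->st", pi, P_train)
    M_deploy = np.einsum("sa,sat->st", pi, P_deploy)
    d_train, d_deploy, ratio = mu0.copy(), mu0.copy(), 1.0
    for _ in range(horizon):
        d_train = d_train @ M_train
        d_deploy = d_deploy @ M_deploy
        mask = d_deploy > 0
        # If deployment visits a state that training never reaches, the
        # ratio is unbounded -- the exponentially hard regime in the abstract.
        if np.any(mask & (d_train == 0)):
            return np.inf
        ratio = max(ratio, float(np.max(d_deploy[mask] / d_train[mask])))
    return ratio
```

For example, with a single action and training dynamics `[0.5, 0.5]` versus deployment dynamics `[0.2, 0.8]` from every state, the ratio is 0.8/0.5 = 1.6; when the two kernels coincide it is exactly 1, matching the intuition that exploration in the training environment is only informative about deployment when this mismatch stays bounded.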


Published In

Proceedings of Machine Learning Research

EISSN

2640-3498

Publication Date

January 1, 2025

Volume

267

Start / End Page

22595 / 22646
 

Citation

He, Y., Liu, Z., Wang, W., & Xu, P. (2025). Sample Complexity of Distributionally Robust Off-Dynamics Reinforcement Learning with Online Interaction. In Proceedings of Machine Learning Research (Vol. 267, pp. 22595–22646).
