
Robust Offline Reinforcement Learning with Linearly Structured f-Divergence Regularization

Publication, Conference
Tang, C; Liu, Z; Xu, P
Published in: Proceedings of Machine Learning Research
January 1, 2025

The Robust Regularized Markov Decision Process (RRMDP) was proposed to learn policies robust to dynamics shifts by adding a regularization term on the transition dynamics to the value function. Existing methods mostly use unstructured regularization, which can yield overly conservative policies that hedge against unrealistic transitions. To address this limitation, we propose a novel framework, the d-rectangular linear RRMDP (d-RRMDP), which introduces latent structure into both the transition kernels and the regularization. We focus on offline reinforcement learning, where an agent learns policies from a precollected dataset gathered in the nominal environment. We develop the Robust Regularized Pessimistic Value Iteration (R2PVI) algorithm, which employs linear function approximation for robust policy learning in d-RRMDPs with f-divergence-based regularization terms on transition kernels. We provide instance-dependent upper bounds on the suboptimality gap of R2PVI policies, demonstrating that these bounds depend on how well the dataset covers the state-action space visited by the optimal robust policy under robustly admissible transitions. We establish information-theoretic lower bounds showing that our algorithm is near-optimal. Finally, numerical experiments validate that R2PVI learns robust policies and exhibits superior computational efficiency compared to baseline methods.
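The abstract gives no pseudocode, so the following is a minimal illustrative sketch of the general recipe it describes: pessimistic value iteration with linear function approximation, where the robust regularized Bellman target uses the KL divergence (one common instance of f-divergence regularization) via its dual form. The toy MDP, feature map, constants, and all variable names are assumptions for illustration, not the paper's actual R2PVI specification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tabular MDP encoded with one-hot features phi(s, a) in R^d, d = S * A.
# (Illustrative setup, not the paper's d-rectangular construction.)
S, A, H = 3, 2, 3                  # states, actions, horizon
d = S * A
lam, beta, ridge = 1.0, 0.1, 1e-3  # KL coefficient, pessimism weight, ridge term

def phi(s, a):
    v = np.zeros(d)
    v[s * A + a] = 1.0
    return v

# Offline dataset collected under the nominal dynamics with a uniform policy.
P = rng.dirichlet(np.ones(S), size=(S, A))  # nominal kernel P[s, a]
R = rng.uniform(0.0, 1.0, size=(S, A))      # reward table
N = 500
data = []
for _ in range(N):
    s, a = rng.integers(S), rng.integers(A)
    s_next = rng.choice(S, p=P[s, a])
    data.append((s, a, R[s, a], s_next))

Phi = np.array([phi(s, a) for s, a, _, _ in data])  # (N, d) feature matrix
Lam = Phi.T @ Phi + ridge * np.eye(d)               # regularized Gram matrix
Lam_inv = np.linalg.inv(Lam)

V = np.zeros(S)  # terminal value V_{H+1} = 0
for h in range(H, 0, -1):
    # KL-regularized robust target via the dual form:
    #   inf_{P'} { E_{P'}[V] + lam * KL(P' || P) } = -lam * log E_P[exp(-V / lam)]
    # Regress exp(-V(s') / lam) onto features, then invert the dual.
    y = np.array([np.exp(-V[sn] / lam) for _, _, _, sn in data])
    w = Lam_inv @ (Phi.T @ y)
    Q = np.zeros((S, A))
    for s in range(S):
        for a in range(A):
            f = phi(s, a)
            zhat = np.clip(f @ w, np.exp(-H / lam), 1.0)  # feasible dual range
            bonus = beta * np.sqrt(f @ Lam_inv @ f)       # pessimism penalty
            Q[s, a] = np.clip(R[s, a] - lam * np.log(zhat) - bonus, 0.0, H - h + 1)
    V = Q.max(axis=1)
    pi_h = Q.argmax(axis=1)  # greedy robust policy at step h

print("estimated robust values V_1:", V)
```

The pessimism penalty `beta * sqrt(phi^T Lam^{-1} phi)` shrinks value estimates on poorly covered state-action pairs, which is what ties the suboptimality bound to dataset coverage in the abstract's statement.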


Published In

Proceedings of Machine Learning Research

EISSN

2640-3498

Publication Date

January 1, 2025

Volume

267

Start / End Page

58842 / 58882
 

Citation

APA: Tang, C., Liu, Z., & Xu, P. (2025). Robust Offline Reinforcement Learning with Linearly Structured f-Divergence Regularization. In Proceedings of Machine Learning Research (Vol. 267, pp. 58842–58882).
Chicago: Tang, C., Z. Liu, and P. Xu. “Robust Offline Reinforcement Learning with Linearly Structured f-Divergence Regularization.” In Proceedings of Machine Learning Research, 267:58842–82, 2025.
ICMJE: Tang C, Liu Z, Xu P. Robust Offline Reinforcement Learning with Linearly Structured f-Divergence Regularization. In: Proceedings of Machine Learning Research. 2025. p. 58842–82.
MLA: Tang, C., et al. “Robust Offline Reinforcement Learning with Linearly Structured f-Divergence Regularization.” Proceedings of Machine Learning Research, vol. 267, 2025, pp. 58842–82.
NLM: Tang C, Liu Z, Xu P. Robust Offline Reinforcement Learning with Linearly Structured f-Divergence Regularization. Proceedings of Machine Learning Research. 2025. p. 58842–58882.
