Actor-Critic Method for High Dimensional Static Hamilton-Jacobi-Bellman Partial Differential Equations Based on Neural Networks
We propose a novel numerical method for high dimensional Hamilton-Jacobi-Bellman (HJB) type elliptic partial differential equations (PDEs). The HJB PDEs, reformulated as optimal control problems, are solved within an actor-critic framework inspired by reinforcement learning, with the value and control functions parametrized by neural networks. Within this framework, we employ a policy gradient approach to improve the control, while for the value function we derive a variance-reduced least-squares temporal difference method using stochastic calculus. To numerically discretize the stochastic control problem, we employ an adaptive step size scheme that improves accuracy near the domain boundary. Numerical examples in up to 20 spatial dimensions, including linear quadratic regulators, stochastic Van der Pol oscillators, diffusive Eikonal equations, and fully nonlinear elliptic PDEs derived from a regulator problem, are presented to validate the effectiveness of the proposed method.
Related Subject Headings
- Numerical & Computational Mathematics
- 4903 Numerical and computational mathematics
- 4901 Applied mathematics
- 0802 Computation Theory and Mathematics
- 0103 Numerical and Computational Mathematics
- 0102 Applied Mathematics