Stochastic kernel temporal difference for reinforcement learning

Published

Conference Paper

This paper introduces a kernel adaptive filter using the stochastic gradient on temporal differences, kernel TD(λ), to estimate the state-action value function Q in reinforcement learning. Kernel methods are powerful for solving nonlinear problems, but their growing computational complexity and memory requirements limit their applicability in practical scenarios. To overcome this, the quantization approach introduced in [1] is applied. To help understand the behavior of the algorithm and illustrate the role of its parameters, we apply it to a 2-dimensional spatial navigation task. Eligibility traces are commonly used in TD learning to improve data efficiency, so we examine the relationship among the eligibility trace parameter λ, the step size, and the filter size. Moreover, kernel TD(0) is applied to neural decoding of an 8-target center-out reaching task performed by a monkey. Results show the method can effectively learn the brain state-to-action mapping for this task. © 2011 IEEE.
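For readers who want a concrete picture of the method described above, the following is a minimal sketch of a quantized kernel TD(λ) value estimator: a Gaussian kernel expansion over stored (state, action) centers, updated by a stochastic gradient on the TD error, with eligibility traces on the coefficients and a quantization threshold that caps dictionary growth. The class name, parameters, and the merge rule are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

class QuantizedKernelTD:
    """Sketch of a quantized kernel TD(lambda) Q-function estimator.
    Hypothetical interface; not the paper's exact formulation."""

    def __init__(self, kernel_width=1.0, step_size=0.5, lam=0.5,
                 gamma=0.9, quant_eps=0.1):
        self.kernel_width = kernel_width
        self.step_size = step_size      # stochastic-gradient step size
        self.lam = lam                  # eligibility-trace decay
        self.gamma = gamma              # discount factor
        self.quant_eps = quant_eps      # quantization threshold on the dictionary
        self.centers = []               # stored (state, action) feature vectors
        self.weights = []               # kernel expansion coefficients
        self.traces = []                # eligibility trace per center

    def _kernel(self, x, c):
        d = np.asarray(x, dtype=float) - np.asarray(c, dtype=float)
        return np.exp(-np.dot(d, d) / (2.0 * self.kernel_width ** 2))

    def q_value(self, x):
        # Q(x) is a kernel expansion over the quantized dictionary.
        return sum(w * self._kernel(x, c)
                   for w, c in zip(self.weights, self.centers))

    def update(self, x, reward, x_next):
        """One TD(lambda) step for the (state, action) feature vector x."""
        td_error = reward + self.gamma * self.q_value(x_next) - self.q_value(x)

        # Decay all existing eligibility traces.
        self.traces = [self.gamma * self.lam * e for e in self.traces]

        if self.centers:
            dists = [np.linalg.norm(np.asarray(x, dtype=float) - c)
                     for c in self.centers]
            j = int(np.argmin(dists))
        else:
            dists, j = [], 0

        if self.centers and dists[j] <= self.quant_eps:
            # Quantization: merge into the nearest center instead of growing.
            self.traces[j] += 1.0
        else:
            self.centers.append(np.asarray(x, dtype=float))
            self.weights.append(0.0)
            self.traces.append(1.0)

        # Stochastic-gradient update of the coefficients along their traces.
        self.weights = [w + self.step_size * td_error * e
                        for w, e in zip(self.weights, self.traces)]
        return td_error
```

Setting quant_eps to 0 recovers an unquantized kernel TD(λ) filter whose dictionary grows with every sample, which is the memory issue the quantization step is meant to address.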

Cited Authors

  • Bae, J; Giraldo, LS; Chhatbar, P; Francis, J; Sanchez, J; Principe, J

Published Date

  • December 5, 2011

Published In

  • IEEE International Workshop on Machine Learning for Signal Processing

International Standard Book Number 13 (ISBN-13)

  • 9781457716232

Digital Object Identifier (DOI)

  • 10.1109/MLSP.2011.6064634

Citation Source

  • Scopus