A Constrained Backpropagation Approach to Function Approximation and Approximate Dynamic Programming
The ability to preserve prior knowledge in an artificial neural network (ANN) while incrementally learning new information is important to many fields, including approximate dynamic programming (ADP), feedback control, and function approximation. Although ANNs exhibit excellent performance and generalization abilities when trained in batch mode, when trained incrementally on new data they tend to forget previous information through a phenomenon known as interference. McCloskey and Cohen were the first to suggest that a fundamental limitation of ANNs is that the process of learning a new set of patterns may suddenly and completely erase a network's knowledge of what it had already learned. This phenomenon, known as catastrophic interference or catastrophic forgetting, seriously limits the applicability of ANNs to adaptive feedback control and incremental function approximation. Natural cognitive systems learn most tasks incrementally and need not relearn prior patterns to retain them in long-term memory (LTM) during their lifetime. Catastrophic interference in ANNs is caused by the very mechanism that enables them to generalize: a single set of shared connection weights and a set of interconnected nonlinear basis functions. Therefore, the modular and sparse architectures that have been proposed so far for suppressing interference also limit a neural network's ability to approximate and generalize highly nonlinear functions. This chapter describes how constrained backpropagation (CPROP) can be used to preserve prior knowledge while training ANNs incrementally through ADP, to solve differential equations, or to approximate smooth nonlinear functions online. © 2013 The Institute of Electrical and Electronics Engineers, Inc.
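A minimal sketch of the constrained-training idea described above, under stated assumptions: a single-hidden-layer network whose output is linear in its output weights, so that prior knowledge (LTM) can be encoded as equality constraints at a set of memory points and enforced by solving for the output weights while the hidden-layer (STM) weights are adapted by gradient descent on new data. All names, sizes, and the least-squares enforcement step are illustrative choices, not the chapter's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical one-hidden-layer network: y = tanh(x @ W_h + b_h) @ w_out
def hidden(x, W_h, b_h):
    return np.tanh(x @ W_h + b_h)

# Prior knowledge (LTM): points the network must keep matching
x_ltm = np.linspace(-1.0, 1.0, 5).reshape(-1, 1)
y_ltm = np.sin(np.pi * x_ltm)

# New incremental data (STM), from an adjacent region of the input space
x_new = np.linspace(1.0, 2.0, 20).reshape(-1, 1)
y_new = np.sin(np.pi * x_new)

n_hidden = 10
W_h = rng.normal(size=(1, n_hidden))
b_h = rng.normal(size=n_hidden)

def solve_out(W_h, b_h):
    # The output is linear in w_out, so the LTM equality constraints
    # form a linear system; least squares enforces them exactly here
    # because there are fewer constraints than hidden units
    H = hidden(x_ltm, W_h, b_h)
    return np.linalg.lstsq(H, y_ltm, rcond=None)[0]

lr = 0.05
for _ in range(200):
    w_out = solve_out(W_h, b_h)       # re-enforce constraints (LTM)
    H = hidden(x_new, W_h, b_h)
    err = H @ w_out - y_new           # error on the new data only
    # Gradient of 0.5*||err||^2 w.r.t. hidden weights, w_out held fixed
    dH = (err @ w_out.T) * (1.0 - H**2)
    W_h -= lr * x_new.T @ dH / len(x_new)
    b_h -= lr * dH.mean(axis=0)

w_out = solve_out(W_h, b_h)
ltm_error = np.max(np.abs(hidden(x_ltm, W_h, b_h) @ w_out - y_ltm))
```

Because the constraints are re-solved after every adaptation step, the network fits the new data without losing the memorized points; `ltm_error` remains at numerical precision regardless of how far the hidden weights drift.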
Ferrari, S; Rudd, K; Di Muro, G