
Parabolic Continual Learning

Publication, Conference
Yang, H; Hasan, A; Tarokh, V
Published in: Proceedings of Machine Learning Research
January 1, 2025

Regularizing continual learning techniques is important for anticipating algorithmic behavior under new realizations of data. We introduce a new approach to continual learning that imposes the properties of a parabolic partial differential equation (PDE) to regularize the expected behavior of the loss over time. This class of parabolic PDEs has a number of favorable properties that allow us to analyze both the error incurred through forgetting and the error induced by generalization. Specifically, we impose boundary conditions in which the boundary is given by a memory buffer. By using the memory buffer as a boundary, we can enforce long-term dependencies by bounding the expected error by the boundary loss. Finally, we illustrate the empirical performance of the method on a series of continual learning tasks.
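As a rough illustration of how such a scheme might look in practice, the sketch below combines a task loss with (i) a boundary term computed on samples replayed from a memory buffer and (ii) a finite-difference diffusion penalty standing in for the spatial (Laplacian) term of a parabolic heat equation. This is a hypothetical PyTorch sketch based only on the abstract, not the authors' implementation: the function names, the weights `boundary_weight` and `pde_weight`, and the specific finite-difference estimate are all illustrative assumptions.

```python
# Hypothetical sketch only: an illustrative stand-in for parabolic-PDE-style
# regularization with a memory buffer as the boundary, NOT the paper's method.
import torch
import torch.nn.functional as F


def diffusion_penalty(model, x, eps=1e-2):
    # Finite-difference estimate of a directional second derivative of the
    # model output in input space:
    #   (f(x + eps*v) + f(x - eps*v) - 2 f(x)) / eps^2
    # along a random unit direction v. Penalizing its magnitude is a crude
    # stand-in for the diffusion term of a parabolic (heat) equation.
    v = torch.randn_like(x)
    v = v / (v.flatten(1).norm(dim=1).view(-1, *([1] * (x.dim() - 1))) + 1e-12)
    second_diff = (model(x + eps * v) + model(x - eps * v) - 2 * model(x)) / eps ** 2
    return second_diff.pow(2).mean()


def parabolic_cl_step(model, optimizer, new_batch, buffer_batch,
                      boundary_weight=1.0, pde_weight=0.1):
    # One training step: task loss on the new batch, a "boundary" loss on
    # samples replayed from the memory buffer, and the diffusion penalty.
    x, y = new_batch
    xb, yb = buffer_batch
    task_loss = F.cross_entropy(model(x), y)
    boundary_loss = F.cross_entropy(model(xb), yb)  # buffer acts as the boundary
    penalty = diffusion_penalty(model, x)
    loss = task_loss + boundary_weight * boundary_loss + pde_weight * penalty
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(loss.detach())
```

In this reading, the buffered samples play the role of Dirichlet boundary data: keeping their loss small bounds the error on the interior (new-task) region, mirroring the abstract's claim that the expected error is bounded by the boundary loss.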


Published In: Proceedings of Machine Learning Research
EISSN: 2640-3498
Publication Date: January 1, 2025
Volume: 258
Pages: 2620–2628

Citation

Yang, H., Hasan, A., & Tarokh, V. (2025). Parabolic Continual Learning. In Proceedings of Machine Learning Research (Vol. 258, pp. 2620–2628).
