
Efficient feature transformations for discriminative and generative continual learning

Publication · Conference
Verma, VK; Liang, KJ; Mehta, N; Rai, P; Carin, L
Published in: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
January 1, 2021

As neural networks are increasingly deployed in real-world applications, mechanisms to address distributional shift and sequential task learning without forgetting are critical. Methods incorporating network expansion have shown promise by naturally adding model capacity for learning new tasks while simultaneously avoiding catastrophic forgetting. However, the growth in the number of additional parameters of many of these types of methods can be computationally expensive at larger scales, at times prohibitively so. Instead, we propose a simple task-specific feature map transformation strategy for continual learning, which we call Efficient Feature Transformations (EFTs). These EFTs provide powerful flexibility for learning new tasks, achieved with minimal parameters added to the base architecture. We further propose a feature distance maximization strategy, which significantly improves task prediction in class incremental settings, without needing expensive generative models. We demonstrate the efficacy and efficiency of our method with an extensive set of experiments in discriminative (CIFAR-100 and ImageNet-1K) and generative (LSUN, CUB-200, Cats) sequences of tasks. Even with low single-digit parameter growth rates, EFTs can outperform many other continual learning methods in a wide range of settings.
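The abstract's central idea — adding a small, task-specific transformation on top of a shared base network instead of growing the network itself — can be illustrated with a minimal sketch. This is not the paper's exact EFT formulation; it assumes a hypothetical per-task channel-wise scale-and-shift transform purely to show why the per-task parameter growth stays small relative to the shared weights.

```python
# Illustrative sketch only (not the paper's exact EFT design): a frozen
# shared layer plus a tiny per-task channel-wise scale/shift transform.

BASE_IN, BASE_OUT, K = 64, 64, 3
base_params = BASE_IN * BASE_OUT * K * K  # 36,864 shared conv-style weights

def make_task_transform(n_channels):
    """Hypothetical per-task transform: one scale and one shift per channel."""
    return {"scale": [1.0] * n_channels, "shift": [0.0] * n_channels}

def apply_transform(feature_map, t):
    """feature_map: per-channel 2-D grids (channels x H x W).
    Each channel is modulated independently by its task-specific scale/shift."""
    return [
        [[v * s + b for v in row] for row in channel]
        for channel, s, b in zip(feature_map, t["scale"], t["shift"])
    ]

task = make_task_transform(BASE_OUT)
task_params = len(task["scale"]) + len(task["shift"])  # 128 new params per task

feature_map = [[[1.0, 2.0], [3.0, 4.0]] for _ in range(BASE_OUT)]
out = apply_transform(feature_map, task)

# Per-task growth is a fraction of a percent of the shared layer's size.
print(f"per-task growth: {task_params / base_params:.2%}")
```

Under these toy numbers, each new task adds 128 parameters against 36,864 shared ones (about 0.35% growth per layer), which is the kind of low single-digit growth rate the abstract refers to.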


Published In

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

DOI

10.1109/CVPR46437.2021.01365

ISSN

1063-6919

ISBN

9781665445092

Publication Date

January 1, 2021

Start / End Page

13860 / 13870

Citation

APA: Verma, V. K., Liang, K. J., Mehta, N., Rai, P., & Carin, L. (2021). Efficient feature transformations for discriminative and generative continual learning. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 13860–13870). https://doi.org/10.1109/CVPR46437.2021.01365

Chicago: Verma, V. K., K. J. Liang, N. Mehta, P. Rai, and L. Carin. “Efficient feature transformations for discriminative and generative continual learning.” In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 13860–70, 2021. https://doi.org/10.1109/CVPR46437.2021.01365.

ICMJE: Verma VK, Liang KJ, Mehta N, Rai P, Carin L. Efficient feature transformations for discriminative and generative continual learning. In: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2021. p. 13860–70.

MLA: Verma, V. K., et al. “Efficient feature transformations for discriminative and generative continual learning.” Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2021, pp. 13860–70. Scopus, doi:10.1109/CVPR46437.2021.01365.

NLM: Verma VK, Liang KJ, Mehta N, Rai P, Carin L. Efficient feature transformations for discriminative and generative continual learning. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition. 2021. p. 13860–13870.
