Toward Sustainable Continual Learning: Detection and Knowledge Repurposing for Reoccurring Tasks
Most existing works on Continual Learning (CL) tend to assume exclusivity or dissimilarity among learning tasks; consequently, these methods typically accumulate task-specific knowledge in memory for every new task. When learning from a long sequence of tasks, this leads to a knowledge repository that eventually grows prohibitively large. In this work, we introduce a paradigm in which the continual learner encounters reoccurring tasks. We propose a framework that uses a task identity detection function, which requires no additional learning, to determine whether the current task is a reoccurrence of a specific task seen in the past. When it is, we reuse the previous knowledge to slow down parameter expansion, ensuring that the system expands the knowledge repository sublinearly with the number of learned tasks. Our experiments show that the proposed framework performs competitively on widely used benchmarks such as CIFAR10, CIFAR100, EMNIST, and TinyImageNet, from which we create sequences of 10 to 100 tasks.
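To make the detect-then-reuse idea concrete, below is a minimal sketch, assuming a detector that compares fixed per-task feature statistics from a frozen feature extractor (hence no additional learning). The names `KnowledgeRepository`, `detect_reoccurrence`, and `add_or_reuse`, as well as the cosine-similarity threshold, are illustrative assumptions and not the paper's actual method.

```python
# Conceptual sketch (not the paper's implementation): detect whether a task
# reoccurred by comparing feature statistics, and only expand the repository
# when the task is genuinely new. All names and thresholds are hypothetical.
import numpy as np


class KnowledgeRepository:
    """Stores one knowledge entry (here: a task signature plus parameters) per distinct task."""

    def __init__(self, similarity_threshold: float = 0.95):
        self.entries = []                      # list of (task_signature, task_knowledge)
        self.threshold = similarity_threshold

    @staticmethod
    def task_signature(features: np.ndarray) -> np.ndarray:
        # Signature: mean feature vector of the task's data, computed from a
        # frozen feature extractor, so no additional learning is required.
        return features.mean(axis=0)

    def detect_reoccurrence(self, features: np.ndarray):
        """Return the index of a matching past task, or None if the task looks new."""
        sig = self.task_signature(features)
        best_idx, best_sim = None, -1.0
        for idx, (stored_sig, _) in enumerate(self.entries):
            sim = float(np.dot(sig, stored_sig) /
                        (np.linalg.norm(sig) * np.linalg.norm(stored_sig) + 1e-12))
            if sim > best_sim:
                best_idx, best_sim = idx, sim
        return best_idx if best_sim >= self.threshold else None

    def add_or_reuse(self, features: np.ndarray, new_knowledge):
        """Reuse stored knowledge for reoccurring tasks; expand only for new ones."""
        match = self.detect_reoccurrence(features)
        if match is not None:
            return self.entries[match][1]      # reuse: no parameter expansion
        self.entries.append((self.task_signature(features), new_knowledge))
        return new_knowledge


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    repo = KnowledgeRepository(similarity_threshold=0.95)
    task_a = rng.normal(loc=1.0, size=(64, 16))    # stand-in for extracted features
    task_b = rng.normal(loc=-1.0, size=(64, 16))
    repo.add_or_reuse(task_a, "params_A")
    repo.add_or_reuse(task_b, "params_B")
    repo.add_or_reuse(task_a + 0.01 * rng.normal(size=(64, 16)), "params_A_again")
    print(len(repo.entries))                       # 2: the third task was detected as a reoccurrence
```

Because the repository only grows when no past task matches, its size is bounded by the number of distinct tasks rather than the length of the task sequence, which is what yields the sublinear expansion claimed above.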