Controlling Forgetting with Test-Time Data in Continual Learning

Bibliographic Details
Title: Controlling Forgetting with Test-Time Data in Continual Learning
Authors: Singh, Vaibhav, Aljundi, Rahaf, Belilovsky, Eugene
Publication Year: 2024
Collection: Computer Science
Subject Terms: Computer Science - Machine Learning
More Details: Foundational vision-language models have shown impressive performance on various downstream tasks. Yet, there is still a pressing need to update these models later as new tasks or domains become available. Ongoing Continual Learning (CL) research provides techniques to overcome catastrophic forgetting of previous information when new knowledge is acquired. To date, CL techniques focus only on the supervised training sessions. This results in significant forgetting, yielding performance inferior even to the prior model's zero-shot performance. In this work, we argue that test-time data hold valuable information that can be leveraged in a self-supervised manner to refresh the model's memory of previously learned tasks, and hence greatly reduce forgetting at no extra labelling cost. We study how unsupervised data can be employed online to improve a model's performance on prior tasks upon encountering representative samples. We propose a simple yet effective student-teacher model with gradient-based sparse parameter updates, and show significant performance improvements and reduction in forgetting, which could alleviate the role of an offline episodic memory/experience replay buffer.
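The abstract's core mechanism, a student refreshed from a teacher's pseudo-targets on unlabeled test-time data, with updates restricted to the largest-gradient parameters, can be illustrated with a minimal sketch. This is not the paper's implementation: the linear model, the squared-error distillation loss, the top-k gradient mask, and the EMA teacher update are all illustrative assumptions.

```python
def distill_step(w_student, w_teacher, x, lr=0.1, k=1, ema=0.99):
    """One self-supervised student-teacher step on an unlabeled sample x.

    Hypothetical linear model: prediction = dot(w, x). The teacher's output
    serves as the pseudo-target (no label needed); only the k coordinates
    with the largest gradient magnitude are updated (gradient-based sparse
    update), and the teacher tracks the student via an exponential moving
    average. All of these design choices are assumptions for illustration.
    """
    pred_s = sum(w * xi for w, xi in zip(w_student, x))
    pred_t = sum(w * xi for w, xi in zip(w_teacher, x))
    err = pred_s - pred_t                        # distillation residual
    grads = [2.0 * err * xi for xi in x]         # d/dw of (pred_s - pred_t)^2
    # Sparse update: keep only the top-k gradient coordinates.
    topk = sorted(range(len(grads)), key=lambda i: abs(grads[i]), reverse=True)[:k]
    new_student = list(w_student)
    for i in topk:
        new_student[i] -= lr * grads[i]
    # EMA teacher update.
    new_teacher = [ema * wt + (1.0 - ema) * ws
                   for wt, ws in zip(w_teacher, new_student)]
    return new_student, new_teacher
```

Iterating this step over a stream of unlabeled test samples pulls the student back toward the teacher's behavior on previously learned inputs, which is the sense in which test-time data can "refresh" the model without labels.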
Comment: 9 pages, 2 figures
Document Type: Working Paper
Access URL: http://arxiv.org/abs/2406.13653
Accession Number: edsarx.2406.13653
Database: arXiv