Distilled Meta-learning for Multi-Class Incremental Learning


Bibliographic Details
Published in: ACM Transactions on Multimedia Computing, Communications, and Applications
Main Authors: Liu, Hao, Yan, Zhaoyu, Liu, Bing, Zhao, Jiaqi, Zhou, Yong, El Saddik, Abdulmotaleb
Other Authors: National Natural Science Foundation of China, Natural Science Foundation of Jiangsu Province
Format: Article in Journal/Newspaper
Language: English
Published: Association for Computing Machinery (ACM) 2023
Subjects:
DML
Online Access: http://dx.doi.org/10.1145/3576045
https://dl.acm.org/doi/pdf/10.1145/3576045
Description
Summary: Meta-learning approaches have recently achieved promising performance in multi-class incremental learning. However, meta-learners still suffer from catastrophic forgetting, i.e., they tend to forget the knowledge learned from old tasks when they focus on rapidly adapting to the new classes of the current task. To solve this problem, we propose a novel distilled meta-learning (DML) framework for multi-class incremental learning that seamlessly integrates meta-learning with knowledge distillation in each incremental stage. Specifically, during inner-loop training, knowledge distillation is incorporated into the DML to overcome catastrophic forgetting. During outer-loop training, a meta-update rule is designed for the meta-learner to learn across tasks and quickly adapt to new tasks. By virtue of the bilevel optimization, our model is encouraged to reach a balance between the retention of old knowledge and the learning of new knowledge. Experimental results on four benchmark datasets demonstrate the effectiveness of our proposal and show that our method significantly outperforms other state-of-the-art incremental learning methods.
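The abstract's bilevel structure (an inner loop that adapts to the new task under a distillation penalty, and an outer loop that meta-updates the initialization across tasks) can be sketched on a toy 1-D model. This is only an illustrative assumption of how such a loop fits together, not the paper's actual method: the losses, the numerical gradients, the Reptile-style meta-update, and all hyperparameters (`lam`, `lr`, `meta_lr`) are stand-ins chosen to keep the sketch dependency-free.

```python
# Illustrative sketch of a distilled meta-learning loop on a toy
# 1-D linear model y = theta * x.  Assumptions, not the paper's method.

def task_loss(theta, data):
    # Mean squared error on the current task's data.
    return sum((theta * x - y) ** 2 for x, y in data) / len(data)

def distill_loss(theta, theta_old, probe_xs):
    # Knowledge distillation term: keep the new model's outputs close
    # to the frozen old model's outputs on some probe inputs.
    return sum((theta * x - theta_old * x) ** 2 for x in probe_xs) / len(probe_xs)

def grad(f, theta, eps=1e-5):
    # Central-difference numerical gradient, to avoid any autograd dependency.
    return (f(theta + eps) - f(theta - eps)) / (2 * eps)

def inner_loop(meta_theta, theta_old, new_data, probe_xs,
               lam=0.5, lr=0.1, steps=20):
    # Inner loop: adapt to the new task while distilling old knowledge.
    theta = meta_theta
    for _ in range(steps):
        f = lambda t: task_loss(t, new_data) + lam * distill_loss(t, theta_old, probe_xs)
        theta -= lr * grad(f, theta)
    return theta

def outer_loop(tasks, meta_lr=0.3, rounds=10):
    # Outer loop: meta-update the initialization across tasks.  A
    # Reptile-style rule (move the init toward each adapted solution)
    # is used here as a stand-in for the paper's meta-update rule.
    meta_theta, theta_old = 0.0, 0.0
    probe_xs = [1.0, 2.0]
    for _ in range(rounds):
        for data in tasks:
            adapted = inner_loop(meta_theta, theta_old, data, probe_xs)
            meta_theta += meta_lr * (adapted - meta_theta)
            theta_old = adapted  # the adapted model becomes the next "old" teacher
    return meta_theta
```

With `lam = 0`, the inner loop reduces to plain task adaptation; raising `lam` trades plasticity on the new task for retention of the old model's behavior, which is the balance the abstract attributes to the bilevel optimization.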