Distilled Meta-learning for Multi-Class Incremental Learning

Meta-learning approaches have recently achieved promising performance in multi-class incremental learning. However, meta-learners still suffer from catastrophic forgetting, i.e., they tend to forget knowledge learned from old tasks while rapidly adapting to the new classes of the current task. To solve this problem, we propose a novel distilled meta-learning (DML) framework for multi-class incremental learning that seamlessly integrates meta-learning with knowledge distillation at each incremental stage. Specifically, during inner-loop training, knowledge distillation is incorporated into DML to overcome catastrophic forgetting. During outer-loop training, a meta-update rule is designed for the meta-learner to learn across tasks and quickly adapt to new tasks. By virtue of this bilevel optimization, our model is encouraged to reach a balance between retaining old knowledge and learning new knowledge. Experimental results on four benchmark datasets demonstrate the effectiveness of our proposal and show that our method significantly outperforms other state-of-the-art incremental learning methods.
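To make the inner-/outer-loop structure described in the abstract concrete, the sketch below outlines one incremental stage: a fast learner is adapted on the new classes with a cross-entropy loss plus a distillation loss against a frozen copy of the previous-stage model, and the meta-parameters are then updated on a query set. This is a minimal, first-order illustration assuming a PyTorch setup; the function names, the single-step inner update, the loss weighting, and the distillation_loss helper are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch of meta-learning with inner-loop knowledge distillation.
# All names and hyperparameters here are illustrative assumptions.
import copy

import torch
import torch.nn.functional as F


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    # Soft-target KL divergence: keeps the learner's responses close to the
    # frozen previous-stage model so old-class knowledge is retained.
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    log_probs = F.log_softmax(student_logits / temperature, dim=1)
    return F.kl_div(log_probs, soft_targets, reduction="batchmean") * temperature ** 2


def incremental_stage(meta_model, old_model, task_loader, meta_optimizer,
                      inner_lr=0.01, distill_weight=1.0):
    # One incremental stage: inner-loop adaptation with distillation,
    # followed by a first-order outer-loop meta-update across tasks.
    old_model.eval()  # frozen snapshot from the previous stage acts as the teacher

    for (x_support, y_support), (x_query, y_query) in task_loader:
        # ----- inner loop: adapt a copy of the meta-model to the new classes -----
        learner = copy.deepcopy(meta_model)
        student_logits = learner(x_support)
        new_loss = F.cross_entropy(student_logits, y_support)
        with torch.no_grad():
            teacher_logits = old_model(x_support)
        old_loss = distillation_loss(student_logits, teacher_logits)
        inner_loss = new_loss + distill_weight * old_loss

        grads = torch.autograd.grad(inner_loss, list(learner.parameters()))
        with torch.no_grad():
            for p, g in zip(learner.parameters(), grads):
                p -= inner_lr * g  # one gradient step of fast adaptation

        # ----- outer loop: evaluate the adapted learner on the query set and
        # push its gradients back onto the meta-parameters (first-order update) -----
        meta_loss = F.cross_entropy(learner(x_query), y_query)
        meta_optimizer.zero_grad()
        meta_loss.backward()
        for meta_p, fast_p in zip(meta_model.parameters(), learner.parameters()):
            meta_p.grad = fast_p.grad.clone()
        meta_optimizer.step()
```

The distillation term in the inner loss is what balances retention of old knowledge against adaptation to new classes; the outer-loop step is where learning across tasks happens, as described in the abstract.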


Bibliographic Details
Published in: ACM Transactions on Multimedia Computing, Communications, and Applications, volume 19, issue 4, pages 1-16
ISSN: 1551-6857, 1551-6865
Main Authors: Liu, Hao, Yan, Zhaoyu, Liu, Bing, Zhao, Jiaqi, Zhou, Yong, El Saddik, Abdulmotaleb
Other Authors: National Natural Science Foundation of China, Natural Science Foundation of Jiangsu Province
Format: Article in Journal/Newspaper
Language: English
Published: Association for Computing Machinery (ACM) 2023
Subjects: DML
Online Access: http://dx.doi.org/10.1145/3576045
https://dl.acm.org/doi/pdf/10.1145/3576045