Semi-Online Knowledge Distillation

Knowledge distillation is an effective and stable method for model compression via knowledge transfer. Conventional knowledge distillation (KD) transfers knowledge from a large, well pre-trained teacher network to a small student network in a one-way process. Recently, deep mutual learning (DML) has been proposed to help student networks learn collaboratively and simultaneously. However, to the best of our knowledge, KD and DML have never been jointly explored in a unified framework to solve the knowledge distillation problem. In this paper, we observe that the teacher model provides more trustworthy supervision signals in KD, while the student captures the teacher's behavior more closely in DML. Based on these observations, we first propose to combine KD with DML in a unified framework. Furthermore, we propose a Semi-Online Knowledge Distillation (SOKD) method that effectively improves the performance of both the student and the teacher. In this method, we introduce the peer-teaching training scheme from DML to alleviate the student's imitation difficulty, while also leveraging the supervision signals provided by the well-trained teacher in KD. We also show that our framework can be easily extended to feature-based distillation methods. Extensive experiments on the CIFAR-100 and ImageNet datasets demonstrate that the proposed method achieves state-of-the-art performance.
Accepted to BMVC 2021.
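For readers new to the two ingredients combined above, the sketch below illustrates the standard losses they rely on: a one-way KD term that matches the student to a frozen, pre-trained teacher, and a DML-style mutual term in which trainable peers imitate each other's predictions. This is a minimal PyTorch sketch of the common formulations of KD and DML only; the temperature T, the weights alpha and beta, and the toy training snippet are illustrative assumptions and do not reproduce the SOKD method or its settings.

import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, T=4.0):
    # Conventional KD: KL divergence between the student's and the frozen
    # teacher's temperature-softened class distributions (one-way transfer).
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (T * T)

def mutual_loss(logits_a, logits_b, T=1.0):
    # DML-style mutual mimicry: network A matches network B's predictions.
    # logits_b is detached so only A is updated by this term; call again with
    # the arguments swapped to obtain B's mimicry loss.
    log_p_a = F.log_softmax(logits_a / T, dim=1)
    p_b = F.softmax(logits_b.detach() / T, dim=1)
    return F.kl_div(log_p_a, p_b, reduction="batchmean") * (T * T)

# Toy example: a student trained with ground-truth labels, a frozen teacher's
# KD signal, and a mutual-learning signal from a trainable peer branch.
if __name__ == "__main__":
    batch_size, num_classes = 8, 100
    targets = torch.randint(0, num_classes, (batch_size,))
    student_logits = torch.randn(batch_size, num_classes, requires_grad=True)
    peer_logits = torch.randn(batch_size, num_classes, requires_grad=True)
    with torch.no_grad():  # the teacher is pre-trained and kept frozen
        teacher_logits = torch.randn(batch_size, num_classes)

    alpha, beta = 0.5, 0.5  # illustrative weights, not values from the paper
    student_total = (F.cross_entropy(student_logits, targets)
                     + alpha * kd_loss(student_logits, teacher_logits)
                     + beta * mutual_loss(student_logits, peer_logits))
    student_total.backward()
    print(float(student_total))

The snippet only shows how the two kinds of supervision can enter a single objective; the actual SOKD architecture, peer branch, and training schedule are specified in the paper.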


Bibliographic Details
Main Authors: Liu, Zhiqiang; Liu, Yanxia; Huang, Chengkai
Format: Article in Journal/Newspaper
Language: unknown
Published: arXiv 2021
Subjects:
DML
Online Access: https://dx.doi.org/10.48550/arxiv.2111.11747
https://arxiv.org/abs/2111.11747
institution Open Polar
collection DataCite Metadata Store (German National Library of Science and Technology)
language unknown
topic Computer Vision and Pattern Recognition cs.CV
FOS Computer and information sciences
format Article in Journal/Newspaper
author Liu, Zhiqiang
Liu, Yanxia
Huang, Chengkai
title Semi-Online Knowledge Distillation
publisher arXiv
publishDate 2021
url https://dx.doi.org/10.48550/arxiv.2111.11747
https://arxiv.org/abs/2111.11747
genre DML
op_rights arXiv.org perpetual, non-exclusive license
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
op_doi https://doi.org/10.48550/arxiv.2111.11747