DML-PL: deep metric learning based pseudo-labeling framework for class imbalanced semi-supervised learning

Bibliographic Details
Published in: Information Sciences
Main Authors: Yan, Mi, Hui, Siu Cheung, Li, Ning
Other Authors: School of Computer Science and Engineering
Format: Article in Journal/Newspaper
Language: English
Published: 2023
Subjects: DML
Online Access: https://hdl.handle.net/10356/170840
https://doi.org/10.1016/j.ins.2023.01.074
Description
Summary: Traditional class-imbalanced learning algorithms require the training data to be labeled, whereas semi-supervised learning algorithms assume that the class distribution is balanced. In practical real-world applications, however, class imbalance and insufficient labeled data often coexist. Most existing class-imbalanced semi-supervised learning methods tackle these two problems separately, so the trained model is biased towards the majority classes, which have more data samples. In this study, we propose a deep metric learning based pseudo-labeling (DML-PL) framework that tackles both problems simultaneously for class-imbalanced semi-supervised learning. The proposed DML-PL framework comprises three modules: Deep Metric Learning, Pseudo-Labeling and Network Fine-tuning. An iterative self-training strategy trains the model over multiple rounds. In each round, Deep Metric Learning trains a deep metric network to learn compact feature representations of the labeled and unlabeled data. Pseudo-Labeling then generates reliable pseudo-labels for unlabeled data by clustering the labeled data and selecting nearest neighbors. Finally, Network Fine-tuning fine-tunes the deep metric network so that it generates better pseudo-labels in subsequent rounds. Training ends when all the unlabeled data have been pseudo-labeled. The proposed framework achieved state-of-the-art performance on the long-tailed CIFAR-10, CIFAR-100, and ImageNet127 benchmark datasets compared with baseline models. This study is supported by the National Natural Science Foundation of China (62273230) and the China Scholarship Council (No. 202006230225).
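
To illustrate the Pseudo-Labeling step described in the abstract, the following is a minimal Python sketch, assuming embeddings from an already-trained metric network: an unlabeled sample receives a pseudo-label only when enough of its nearest labeled neighbors in the metric space agree on a class, and uncertain samples are deferred to the next self-training round. The function name nn_pseudo_label, the parameters k and agreement, and the use of Euclidean distance are illustrative assumptions, not the paper's exact procedure.

    import numpy as np

    def nn_pseudo_label(labeled_emb, labels, unlabeled_emb, k=5, agreement=0.8):
        # labeled_emb: (n, d) embeddings of labeled data; labels: (n,) int array.
        # Assign a pseudo-label to an unlabeled embedding when at least an
        # `agreement` fraction of its k nearest labeled neighbors (Euclidean
        # distance in the learned metric space) share one class; return -1
        # for samples deferred to the next round. Hypothetical sketch only;
        # the paper combines labeled-data clustering with neighbor selection.
        pseudo = np.full(len(unlabeled_emb), -1, dtype=int)
        for i, z in enumerate(unlabeled_emb):
            dists = np.linalg.norm(labeled_emb - z, axis=1)
            neighbor_labels = labels[np.argsort(dists)[:k]]
            values, counts = np.unique(neighbor_labels, return_counts=True)
            best = counts.argmax()
            if counts[best] / k >= agreement:
                pseudo[i] = values[best]
        return pseudo

    # Schematic outer self-training loop (per the abstract): retrain the
    # metric network, pseudo-label the confident samples, fine-tune, and
    # repeat until every unlabeled sample carries a pseudo-label.
    # while (pseudo == -1).any():
    #     train_metric_network(...)               # hypothetical helper
    #     pseudo = nn_pseudo_label(L, y, U)
    #     fine_tune_network(...)                  # hypothetical helper

In such a scheme, each round would merge the newly pseudo-labeled samples into the labeled set before fine-tuning, so that later rounds label the harder samples with a progressively sharper metric.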