Ensemble of Loss Functions to Improve Generalizability of Deep Metric Learning methods

Deep Metric Learning (DML) learns a non-linear semantic embedding from input data that brings similar pairs together while keeping dissimilar data apart. To this end, many different methods have been proposed over the last decade, with promising results in various applications. The success of a DML algorithm greatly depends on its loss function. However, no loss function is perfect, and each one addresses only some aspects of an optimal similarity embedding. Moreover, the generalizability of DML to categories unseen during the test stage is an important issue that existing loss functions do not consider. To address these challenges, we propose novel approaches that combine different losses built on top of a shared deep feature extractor. The proposed ensemble of losses enforces the deep model to extract features that are consistent with all losses. Since the selected losses are diverse and each emphasizes different aspects of an optimal semantic embedding, our combining methods yield a considerable improvement over any individual loss and generalize well to unseen categories. There is no limitation on the choice of loss functions, and our methods can work with any set of existing ones. Moreover, they optimize each loss function, as well as its weight, in an end-to-end paradigm with no need to adjust any hyper-parameters. We evaluate our methods on popular datasets from the machine vision domain in conventional Zero-Shot Learning (ZSL) settings. The results are very encouraging and show that our methods outperform all baseline losses by a large margin on all datasets.
Comment: 27 pages, 12 figures
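
The core mechanism described above (several diverse losses attached to one shared feature extractor, with a weight per loss learned end-to-end together with the network) can be sketched in a few lines of PyTorch. The snippet below is a hypothetical minimal sketch, not the author's implementation: EmbeddingNet, contrastive_loss, triplet_loss, and the softmax-based LossEnsemble weighting are illustrative assumptions, and the paper's actual loss set and combination schemes may differ.

    # Minimal PyTorch sketch of the general idea: several DML losses share one
    # embedding network, and a learnable weight per loss is optimized jointly
    # with the network parameters. Illustration only, not the paper's code.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class EmbeddingNet(nn.Module):
        """Toy shared feature extractor; a CNN backbone would be used in practice."""
        def __init__(self, in_dim=128, emb_dim=64):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                     nn.Linear(256, emb_dim))

        def forward(self, x):
            return F.normalize(self.net(x), dim=1)  # L2-normalized embeddings

    def contrastive_loss(emb, labels, margin=0.5):
        """Pairwise contrastive loss over all pairs in the mini-batch."""
        d = torch.cdist(emb, emb)                                   # (N, N) distances
        same = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()  # 1 if same class
        pos = same * d.pow(2)
        neg = (1.0 - same) * F.relu(margin - d).pow(2)
        return (pos + neg).mean()

    def triplet_loss(emb, labels, margin=0.2):
        """Batch-all triplet loss over valid (anchor, positive, negative) triplets."""
        d = torch.cdist(emb, emb)
        same = labels.unsqueeze(0).eq(labels.unsqueeze(1))
        eye = torch.eye(len(labels), dtype=torch.bool, device=labels.device)
        hinge = F.relu(d.unsqueeze(2) - d.unsqueeze(1) + margin)    # d(a,p)-d(a,n)+m
        valid = (same & ~eye).unsqueeze(2) & (~same).unsqueeze(1)
        return hinge[valid].mean() if valid.any() else hinge.sum() * 0.0

    class LossEnsemble(nn.Module):
        """Weighted sum of losses; weights are free parameters passed through a
        softmax so they stay positive and sum to one (a naive choice for brevity)."""
        def __init__(self, loss_fns):
            super().__init__()
            self.loss_fns = loss_fns
            self.logits = nn.Parameter(torch.zeros(len(loss_fns)))  # learnable weights

        def forward(self, emb, labels):
            w = torch.softmax(self.logits, dim=0)
            return sum(w[i] * fn(emb, labels) for i, fn in enumerate(self.loss_fns))

    # End-to-end optimization of the shared extractor and the loss weights together.
    model = EmbeddingNet()
    ensemble = LossEnsemble([contrastive_loss, triplet_loss])
    opt = torch.optim.Adam(list(model.parameters()) + list(ensemble.parameters()),
                           lr=1e-3)

    x = torch.randn(32, 128)           # dummy input batch
    y = torch.randint(0, 4, (32,))     # dummy class labels
    opt.zero_grad()
    loss = ensemble(model(x), y)
    loss.backward()
    opt.step()

In this sketch the weights form a convex combination; other weighting or combination rules are possible, and the exact formulation used in the paper is not reproduced here.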


Bibliographic Details
Main Author: Zabihzadeh, Davood
Format: Article in Journal/Newspaper
Language: unknown
Published: arXiv, 2021
Subjects: DML; Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); Machine Learning (cs.LG); FOS: Computer and information sciences
Classification: I.2.6; 68T07 (primary), 68T05, 68T45 (secondary)
Rights: arXiv.org perpetual, non-exclusive license (http://arxiv.org/licenses/nonexclusive-distrib/1.0/)
Online Access:https://dx.doi.org/10.48550/arxiv.2107.01130
https://arxiv.org/abs/2107.01130