Unbiased Evaluation of Deep Metric Learning Algorithms
Deep metric learning (DML) is a popular approach for image retrieval, verification (same or not) problems, and open-set classification. Arguably, the most common DML approach is the triplet loss, despite significant advances in the area. Triplet loss suffers from several issues, such as collapse of the embeddings, high sensitivity to sampling schemes, and, more importantly, weaker performance compared to more modern methods.
Main Authors: | Fehervari, Istvan; Ravichandran, Avinash; Appalaraju, Srikar |
---|---|
Format: | Article in Journal/Newspaper |
Language: | unknown |
Published: |
arXiv
2019
|
Subjects: | Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML) |
Online Access: | https://dx.doi.org/10.48550/arxiv.1911.12528 https://arxiv.org/abs/1911.12528 |
id |
ftdatacite:10.48550/arxiv.1911.12528 |
---|---|
institution |
Open Polar |
collection |
DataCite Metadata Store (German National Library of Science and Technology) |
op_collection_id |
ftdatacite |
language |
unknown |
topic |
Machine Learning (cs.LG); Computer Vision and Pattern Recognition (cs.CV); Machine Learning (stat.ML); FOS: Computer and information sciences |
description |
Deep metric learning (DML) is a popular approach for image retrieval, verification (same or not) problems, and open-set classification. Arguably, the most common DML approach is the triplet loss, despite significant advances in the area. Triplet loss suffers from several issues, such as collapse of the embeddings, high sensitivity to sampling schemes, and, more importantly, weaker performance compared to more modern methods. We attribute its continued adoption to a lack of fair comparisons between the various methods and the difficulty of adapting them to novel problem statements. In this paper, we perform an unbiased comparison of the most popular DML baseline methods under the same conditions and, more importantly, without obfuscating any hyperparameter tuning or adjustment needed to favor a particular method. We find that, under equal conditions, several older methods perform significantly better than previously believed. In fact, our unified implementation of 12 recently introduced DML algorithms achieves state-of-the-art performance on the CUB200, CARS196, and Stanford Online Products datasets, establishing a new set of baselines for future DML research. The codebase and all tuned hyperparameters will be open-sourced for reproducibility and to serve as a benchmark. |
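The triplet loss that the abstract critiques can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's implementation; the function name, `margin` value, and the collapse example are illustrative assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    """Hinge-style triplet loss: pull the anchor toward the positive and
    push it from the negative by at least `margin` (squared L2 distances)."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(d_pos - d_neg + margin, 0.0)

# Degenerate case the abstract calls "collapse of the embeddings":
# if all embeddings are identical, the loss sits at the margin value and
# its gradient vanishes, so training can stall at this trivial solution.
z = np.zeros(8)
print(triplet_loss(z, z, z))  # -> 0.2 (= margin)
```

The `d_pos - d_neg + margin` hinge also hints at the sampling sensitivity the abstract mentions: triplets whose negative is already farther than the positive by the margin contribute zero loss, so performance depends heavily on how informative triplets are mined.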
format |
Article in Journal/Newspaper |
author |
Fehervari, Istvan; Ravichandran, Avinash; Appalaraju, Srikar |
author_sort |
Fehervari, Istvan |
title |
Unbiased Evaluation of Deep Metric Learning Algorithms |
publisher |
arXiv |
publishDate |
2019 |
url |
https://dx.doi.org/10.48550/arxiv.1911.12528 https://arxiv.org/abs/1911.12528 |
genre |
DML |
op_rights |
arXiv.org perpetual, non-exclusive license http://arxiv.org/licenses/nonexclusive-distrib/1.0/ |
op_doi |
https://doi.org/10.48550/arxiv.1911.12528 |