Revisiting Training Strategies and Generalization Performance in Deep Metric Learning

Deep Metric Learning (DML) is arguably one of the most influential lines of research for learning visual similarities, with many approaches proposed every year. Although the field benefits from this rapid progress, the divergence in training protocols, architectures, and parameter choices makes an unbiased comparison difficult. To provide a consistent reference point, we revisit the most widely used DML objective functions and conduct a study of the crucial parameter choices as well as the commonly neglected mini-batch sampling process. Under consistent comparison, DML objectives show much higher saturation than indicated in the literature. Furthermore, based on our analysis, we uncover that the density and compression of the embedding space correlate with the generalization performance of DML models. Exploiting these insights, we propose a simple, yet effective, training regularization that reliably boosts the performance of ranking-based DML models on various standard benchmark datasets. Code and a publicly accessible WandB repository are available at https://github.com/Confusezius/Revisiting_Deep_Metric_Learning_PyTorch.
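
For readers unfamiliar with the ranking-based DML objectives mentioned above, the following is a minimal sketch of a triplet-margin training step in PyTorch. It is an illustration only, not the authors' implementation (see the linked repository for that); the toy encoder, the margin value, and the random triplet construction are assumptions chosen for brevity.

```python
# Minimal sketch of a ranking-based DML training step (triplet margin loss).
# Illustrative only -- not the authors' code; see the linked repository.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EmbeddingNet(nn.Module):
    """Toy encoder mapping inputs to L2-normalized embeddings."""
    def __init__(self, in_dim=128, embed_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, embed_dim)
        )

    def forward(self, x):
        # Unit-norm embeddings, as is standard in DML pipelines.
        return F.normalize(self.net(x), dim=-1)

model = EmbeddingNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.TripletMarginLoss(margin=0.2)  # margin is an assumed hyperparameter

# Dummy (anchor, positive, negative) triplets; in practice these come from the
# mini-batch sampling process whose impact the paper studies.
anchor, positive, negative = (torch.randn(32, 128) for _ in range(3))

loss = criterion(model(anchor), model(positive), model(negative))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```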

Bibliographic Details
Main Authors: Roth, Karsten; Milbich, Timo; Sinha, Samarth; Gupta, Prateek; Ommer, Björn; Cohen, Joseph Paul
Format: Article in Journal/Newspaper
Language: English
Published: arXiv, 2020
Subjects: Computer Vision and Pattern Recognition (cs.CV); FOS: Computer and information sciences; DML
Online Access: https://dx.doi.org/10.48550/arxiv.2002.08473
https://arxiv.org/abs/2002.08473
DOI: https://doi.org/10.48550/arxiv.2002.08473
Rights: arXiv.org perpetual, non-exclusive license (http://arxiv.org/licenses/nonexclusive-distrib/1.0/)
Note: ICML 2020. Main paper 8.25 pages, 26 pages total.