Rethinking Deep Contrastive Learning with Embedding Memory

Pair-wise loss functions have been extensively studied and shown to continuously improve the performance of deep metric learning (DML). However, they are primarily designed with intuition based on simple toy examples, and experimentally identifying the truly effective design is difficult in complicated, real-world cases. In this paper, we provide a new methodology for systematically studying weighting strategies of various pair-wise loss functions, and rethink pair weighting with an embedding memory. We delve into the weighting mechanisms by decomposing the pair-wise functions, and study positive and negative weights separately using direct weight assignment. This allows us to study various weighting functions deeply and systematically via weight curves, and to identify a number of meaningful and insightful facts, which lead to our key observation on memory-based DML: it is critical to mine hard negatives and discard easy negatives, which are less informative and redundant, while weighting positive pairs is not helpful. This results in an efficient but surprisingly simple rule for designing the weighting scheme, setting it apart from existing mini-batch-based methods, which design sophisticated loss functions to weight pairs carefully. Finally, we conduct extensive experiments on three large-scale visual retrieval benchmarks and demonstrate the superiority of memory-based DML over recent mini-batch-based approaches, using a simple contrastive loss with a momentum-updated memory. (Under review.)
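The abstract describes the key recipe only in prose: a plain contrastive loss computed against a momentum-updated embedding memory, keeping hard negatives and discarding easy ones, with no weighting on positive pairs. Below is a minimal PyTorch-style sketch of that idea, not the authors' implementation; the class name, the margin-based hardness threshold, the momentum value, and the memory-update rule are all illustrative assumptions.

```python
# Minimal sketch (illustrative only, not the authors' code): contrastive loss
# against a momentum-updated embedding memory that keeps only hard negatives.
import torch
import torch.nn.functional as F


class MemoryContrastiveLoss(torch.nn.Module):
    """Hypothetical memory-based contrastive loss.

    Assumptions made for this sketch:
    - embeddings are L2-normalized and compared by dot-product similarity;
    - positives are pulled together by a plain, unweighted contrastive term;
    - negatives contribute only when their similarity exceeds a margin
      (hard negatives); easy negatives are discarded;
    - the memory holds one embedding per training sample and is refreshed
      with a momentum (moving-average) update.
    """

    def __init__(self, num_samples, dim, margin=0.5, momentum=0.9):
        super().__init__()
        self.margin = margin
        self.momentum = momentum
        self.register_buffer("memory", F.normalize(torch.randn(num_samples, dim), dim=1))
        # Label -1 marks memory slots that have not been written yet.
        self.register_buffer("memory_labels", torch.full((num_samples,), -1, dtype=torch.long))

    @torch.no_grad()
    def update_memory(self, embeddings, labels, indices):
        # Momentum update of the slots belonging to the current batch, then renormalize.
        updated = self.momentum * self.memory[indices] + (1.0 - self.momentum) * embeddings
        self.memory[indices] = F.normalize(updated, dim=1)
        self.memory_labels[indices] = labels

    def forward(self, embeddings, labels, indices):
        embeddings = F.normalize(embeddings, dim=1)
        sim = embeddings @ self.memory.t()  # (batch, memory) similarities
        same_class = labels.unsqueeze(1) == self.memory_labels.unsqueeze(0)

        # Positive term: unweighted pull toward same-class memory embeddings.
        pos_loss = ((1.0 - sim) * same_class).sum() / same_class.sum().clamp(min=1)

        # Negative term: only hard negatives (similarity above the margin) are kept.
        hard_neg = (~same_class) & (sim > self.margin)
        neg_loss = ((sim - self.margin) * hard_neg).sum() / hard_neg.sum().clamp(min=1)

        self.update_memory(embeddings.detach(), labels, indices)
        return pos_loss + neg_loss
```

In this sketch, `indices` would be the dataset indices of the batch samples so that each sample owns a memory slot; the paper's actual negative-selection rule and memory update may differ in detail.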


Bibliographic Details
Main Authors: Zhang, Haozhi, Wang, Xun, Huang, Weilin, Scott, Matthew R.
Format: Preprint (arXiv)
Language: unknown
Published: arXiv 2021
Subjects:
Computer Vision and Pattern Recognition (cs.CV)
FOS: Computer and information sciences
DML
Online Access:https://dx.doi.org/10.48550/arxiv.2103.14003
https://arxiv.org/abs/2103.14003
Source: DataCite Metadata Store (German National Library of Science and Technology), via Open Polar
Rights: arXiv.org perpetual, non-exclusive license (http://arxiv.org/licenses/nonexclusive-distrib/1.0/)
DOI: https://doi.org/10.48550/arxiv.2103.14003