Ranked List Loss for Deep Metric Learning

The objective of deep metric learning (DML) is to learn embeddings that can capture semantic similarity information among data points. Existing pairwise or tripletwise loss functions used in DML are known to suffer from slow convergence due to a large proportion of trivial pairs or triplets as the model improves.


Bibliographic Details
Main Authors: Wang, Xinshao, Hua, Yang, Kodirov, Elyor, Hu, Guosheng, Garnier, Romain, Robertson, Neil M.
Format: Other Non-Article Part of Journal/Newspaper
Language: English
Published: 2020
Subjects:
DML
Online Access: https://pure.qub.ac.uk/en/publications/ranked-list-loss-for-deep-metric-learning(1efa7f64-9d44-4619-940b-6e3ac4d4abed).html
https://pureadmin.qub.ac.uk/ws/files/168412256/RankedNoise.pdf
id ftqueensubelpubl:oai:pure.qub.ac.uk/portal:publications/1efa7f64-9d44-4619-940b-6e3ac4d4abed
record_format openpolar
institution Open Polar
collection Queen's University Belfast Research Portal
op_collection_id ftqueensubelpubl
language English
description The objective of deep metric learning (DML) is to learn embeddings that can capture semantic similarity information among data points. Existing pairwise or tripletwise loss functions used in DML are known to suffer from slow convergence due to a large proportion of trivial pairs or triplets as the model improves. To mitigate this, ranking-motivated structured losses have recently been proposed to incorporate multiple examples and exploit the structured information among them. They converge faster and achieve state-of-the-art performance. In this work, we present two limitations of existing ranking-motivated structured losses and propose a novel ranked list loss to solve both of them. First, given a query, only a fraction of data points is incorporated to build the similarity structure. Consequently, some useful examples are ignored and the structure is less informative. To address this, we propose to build a set-based similarity structure by exploiting all instances in the gallery. The samples are split into a positive and a negative set. Our objective is to make the query closer to the positive set than to the negative set by a margin. Second, previous methods aim to pull positive pairs as close as possible in the embedding space. As a result, the intra-class data distribution might be dropped. In contrast, we propose to learn a hypersphere for each class in order to preserve the similarity structure inside it. Our extensive experiments show that the proposed method achieves state-of-the-art performance on three widely used benchmarks.
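The set-based objective described above (pull the query within a boundary of its positive set, push the negative set beyond a margin-separated boundary) can be illustrated with a minimal, dependency-free Python sketch for a single query. The function name, the Euclidean distance, the `alpha`/`margin` values, and the exponential weighting of violating negatives are illustrative assumptions, not the paper's exact formulation or hyperparameters.

```python
import math

def euclidean(a, b):
    # Plain Euclidean distance between two embedding vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def ranked_list_loss(query, positives, negatives,
                     alpha=1.2, margin=0.4, temp=10.0):
    """Hypothetical sketch of a ranked-list-style loss for one query.

    Positives farther than (alpha - margin) are pulled in; negatives
    closer than alpha are pushed out, weighted by how much they violate
    the boundary. All names and constants here are illustrative.
    """
    boundary = alpha - margin
    # Non-trivial positives: those outside the positive boundary.
    pos_terms = [d - boundary
                 for d in (euclidean(query, p) for p in positives)
                 if d > boundary]
    loss_p = sum(pos_terms) / max(len(pos_terms), 1)
    # Non-trivial negatives: those inside alpha, with exponential
    # weights emphasizing the worst violators.
    viol = [(alpha - d, math.exp(temp * (alpha - d)))
            for d in (euclidean(query, n) for n in negatives)
            if d < alpha]
    total_w = sum(w for _, w in viol)
    loss_n = sum(v * w for v, w in viol) / total_w if total_w > 0 else 0.0
    return loss_p + loss_n
```

With the defaults above, a query whose positive lies inside the boundary and whose negative lies beyond `alpha` contributes zero loss; only boundary-violating samples produce gradients, which is what lets all gallery instances participate without drowning the loss in trivial pairs.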
format Other Non-Article Part of Journal/Newspaper
author Wang, Xinshao
Hua, Yang
Kodirov, Elyor
Hu, Guosheng
Garnier, Romain
Robertson, Neil M.
spellingShingle Wang, Xinshao
Hua, Yang
Kodirov, Elyor
Hu, Guosheng
Garnier, Romain
Robertson, Neil M.
Ranked List Loss for Deep Metric Learning
author_facet Wang, Xinshao
Hua, Yang
Kodirov, Elyor
Hu, Guosheng
Garnier, Romain
Robertson, Neil M.
author_sort Wang, Xinshao
title Ranked List Loss for Deep Metric Learning
title_short Ranked List Loss for Deep Metric Learning
title_full Ranked List Loss for Deep Metric Learning
title_fullStr Ranked List Loss for Deep Metric Learning
title_full_unstemmed Ranked List Loss for Deep Metric Learning
title_sort ranked list loss for deep metric learning
publishDate 2020
url https://pure.qub.ac.uk/en/publications/ranked-list-loss-for-deep-metric-learning(1efa7f64-9d44-4619-940b-6e3ac4d4abed).html
https://pureadmin.qub.ac.uk/ws/files/168412256/RankedNoise.pdf
genre DML
genre_facet DML
op_source Wang, X., Hua, Y., Kodirov, E., Hu, G., Garnier, R. & Robertson, N. M. 2020, Ranked List Loss for Deep Metric Learning, in IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2019): Proceedings. IEEE/CVF Conference on Computer Vision and Pattern Recognition: Proceedings.
op_rights info:eu-repo/semantics/openAccess
_version_ 1766397479948910592