Ranked List Loss for Deep Metric Learning

The objective of deep metric learning (DML) is to learn embeddings that capture semantic similarity and dissimilarity information among data points. Existing pairwise and tripletwise loss functions used in DML are known to suffer from slow convergence due to the large proportion of trivial pairs or triplets as the model improves. To address this, ranking-motivated structured losses have recently been proposed to incorporate multiple examples and exploit the structured information among them; they converge faster and achieve state-of-the-art performance. In this work, we unveil two limitations of existing ranking-motivated structured losses and propose a novel ranked list loss to solve both. First, given a query, only a fraction of the data points is incorporated to build the similarity structure, so some useful examples are ignored and the structure is less informative. To address this, we propose to build a set-based similarity structure by exploiting all instances in the gallery. The learning setting can be interpreted as few-shot retrieval: given a mini-batch, every example is used in turn as a query, and the remaining examples compose the gallery to search, i.e., the support set in the few-shot setting. These remaining examples are split into a positive set and a negative set. For every mini-batch, the learning objective of the ranked list loss is to make the query closer to the positive set than to the negative set by a margin. Second, previous methods aim to pull positive pairs as close as possible in the embedding space, so the intraclass data distribution tends to be extremely compressed. In contrast, we propose to learn a hypersphere for each class in order to preserve the useful similarity structure inside it, which functions as regularisation. Extensive experiments demonstrate the superiority of our proposal over state-of-the-art methods on the fine-grained image retrieval task. Our source code is available online: https://github.com/XinshaoAmosWang/Ranked-List-Loss-for-DML
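The margin objective described in the abstract can be sketched in a few lines: pull the positive set inside a hypersphere around the query, and push the negative set beyond a larger boundary. The parameter names `alpha` (negative boundary) and `margin`, and the simple averaging over each set, are illustrative assumptions, not the paper's exact formulation or weighting scheme.

```python
from math import dist  # Euclidean distance between points (Python 3.8+)

def ranked_list_loss(query, positives, negatives, alpha=1.2, margin=0.4):
    """Toy sketch of a set-based margin objective: positives should lie
    within radius (alpha - margin) of the query, negatives beyond alpha.
    alpha/margin values and the per-set averaging are illustrative."""
    # Hinge penalty for each positive that falls outside the inner boundary.
    pos_losses = [max(0.0, dist(query, p) - (alpha - margin)) for p in positives]
    # Hinge penalty for each negative that falls inside the outer boundary.
    neg_losses = [max(0.0, alpha - dist(query, n)) for n in negatives]
    pos_term = sum(pos_losses) / len(pos_losses) if pos_losses else 0.0
    neg_term = sum(neg_losses) / len(neg_losses) if neg_losses else 0.0
    return pos_term + neg_term
```

In a mini-batch, each example would serve as the query in turn, with the rest of the batch split into its positive and negative sets, and the per-query losses summed.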


Bibliographic Details
Published in: IEEE Transactions on Pattern Analysis and Machine Intelligence
Main Authors: Wang, Xinshao; Hua, Yang; Kodirov, Elyor; Robertson, Neil
Format: Article in Journal/Newspaper
Language: English
Published: 2021
Subjects: DML
Online Access:
https://pure.qub.ac.uk/en/publications/a18acd7a-310e-4f9c-8c34-4d1d5e3bc1b5
https://doi.org/10.1109/TPAMI.2021.3068449
https://pureadmin.qub.ac.uk/ws/files/233473445/RLL_minorR_v10_yhua.pdf
https://ieeexplore.ieee.org/document/9385896