IDEAL: Independent Domain Embedding Augmentation Learning

Many efforts have been devoted to designing sampling, mining, and weighting strategies in high-level deep metric learning (DML) loss objectives. However, little attention has been paid to low-level but essential data transformation. In this paper, we develop a novel mechanism, the independent domain embedding augmentation learning (IDEAL) method. It can simultaneously learn multiple independent embedding spaces for multiple domains generated by predefined data transformations. Our IDEAL is orthogonal to existing DML techniques and can be seamlessly combined with prior DML approaches for enhanced performance. Empirical results on visual retrieval tasks demonstrate the superiority of the proposed method. For example, IDEAL improves the Recall@1 of MS loss by a large margin: 84.5% → 87.1% on Cars-196 and 65.8% → 69.5% on CUB-200. Our IDEAL with MS loss also achieves new state-of-the-art performance on three image retrieval benchmarks, i.e., Cars-196, CUB-200, and SOP, significantly outperforming the most recent DML approaches such as Circle loss and XBM. The source code and pre-trained models of our method will be available at https://github.com/emdata-ailab/IDEAL.
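To make the mechanism described in the abstract concrete, here is a minimal PyTorch sketch of the core idea: a shared backbone feeding one independent embedding head per transformation-defined domain, each trained with its own metric-learning loss. Everything below (the toy backbone, the `DOMAIN_TRANSFORMS` list, and the simplified multi-similarity loss without pair mining) is an illustrative assumption, not the authors' released implementation.

```python
# Illustrative sketch only: the names below (DOMAIN_TRANSFORMS,
# IndependentEmbeddingModel, multi_similarity_loss) are assumptions made
# for this example, not identifiers from the authors' repository.
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision.transforms as T

# Two hypothetical predefined transformations; each one defines a "domain"
# with its own independent embedding space.
DOMAIN_TRANSFORMS = [
    T.RandomHorizontalFlip(p=1.0),
    T.Grayscale(num_output_channels=3),
]

class IndependentEmbeddingModel(nn.Module):
    """Shared backbone with one independent embedding head per domain."""
    def __init__(self, backbone, feat_dim, embed_dim, num_domains):
        super().__init__()
        self.backbone = backbone
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, embed_dim) for _ in range(num_domains)
        )

    def forward(self, x, domain):
        feats = self.backbone(x)
        # Unit-normalize so the DML loss operates on cosine similarities.
        return F.normalize(self.heads[domain](feats), dim=-1)

def multi_similarity_loss(emb, labels, alpha=2.0, beta=50.0, lam=0.5):
    """Simplified MS loss on unit embeddings (pair mining omitted)."""
    sim = emb @ emb.t()
    n = emb.size(0)
    loss = emb.new_zeros(())
    for i in range(n):
        pos = labels.eq(labels[i])
        pos[i] = False  # exclude the anchor itself from its positives
        neg = labels.ne(labels[i])
        loss = loss + torch.log1p(torch.exp(-alpha * (sim[i][pos] - lam)).sum()) / alpha
        loss = loss + torch.log1p(torch.exp(beta * (sim[i][neg] - lam)).sum()) / beta
    return loss / n

# One toy training step: each domain contributes a loss computed in its
# own embedding space, and the per-domain losses are summed.
backbone = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
model = IndependentEmbeddingModel(backbone, feat_dim=128, embed_dim=64,
                                  num_domains=len(DOMAIN_TRANSFORMS))
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.rand(8, 3, 32, 32)    # dummy batch
labels = torch.randint(0, 4, (8,))   # dummy class labels

total_loss = torch.zeros(())
for d, transform in enumerate(DOMAIN_TRANSFORMS):
    emb = model(transform(images), domain=d)
    total_loss = total_loss + multi_similarity_loss(emb, labels)

opt.zero_grad()
total_loss.backward()
opt.step()
```

How the per-domain embeddings are combined at retrieval time is not specified in the abstract; consult the linked repository for the authors' actual training and inference recipe.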


Bibliographic Details

Main Authors: Chen, Zhiyuan; Yao, Guang; Ma, Wennan; Xu, Lin
Format: Article in Journal/Newspaper (preprint)
Language: unknown
Published: arXiv, 2021
Subjects: DML; Computer Vision and Pattern Recognition (cs.CV); Artificial Intelligence (cs.AI); FOS: Computer and information sciences
License: Creative Commons Attribution 4.0 International (CC BY 4.0), https://creativecommons.org/licenses/by/4.0/legalcode
Notes: 11 pages, 2 figures, 4 tables
Online Access: https://dx.doi.org/10.48550/arxiv.2105.10112
https://arxiv.org/abs/2105.10112