Deep Metric Learning Based on Meta-Mining Strategy With Semiglobal Information

Bibliographic Details
Published in: IEEE Transactions on Neural Networks and Learning Systems
Main Authors: Jiang, Xiruo, Liu, Sheng, Dai, Xili, Hu, Guosheng, Huang, Xingguo, Yao, Yazhou, Xie, Guo-Sen, Shao, Ling
Format: Article in Journal/Newspaper
Language: English
Published: Institute of Electrical and Electronics Engineers Inc. 2022
Subjects: DML
Online Access: https://repository.hkust.edu.hk/ir/Record/1783.1-121522
https://doi.org/10.1109/TNNLS.2022.3202571
Description
Summary: Recently, deep metric learning (DML) has achieved great success. Some existing DML methods propose adaptive sample-mining strategies that learn to weight samples, achieving encouraging results. However, these methods rely on a small memory (e.g., one training batch), which limits their efficacy. In this work, we introduce a data-driven method, the meta-mining strategy with semiglobal information (MMSI), which applies meta-learning to learn sample weights throughout training, yielding an adaptive mining strategy. To provide richer information than a single training batch can offer, we carefully exploit the validation set of meta-learning, implicitly injecting additional validation-sample information into training. Furthermore, motivated by recent self-supervised learning, we introduce a dictionary (memory) that maintains a very large and diverse pool of sample information. Together with the validation set, this dictionary presents much richer information to training, leading to promising performance. In addition, we propose a new theoretical framework that formulates pairwise and tripletwise metric learning loss functions in a unified way. This framework brings new insights to the community and allows us to generalize MMSI to many existing DML methods. We conduct extensive experiments on three public datasets: CUB200-2011, Cars-196, and Stanford Online Products (SOP). Results show that our method achieves state-of-the-art or highly competitive performance. Our source code is available at https://github.com/NUST-Machine-Intelligence-Laboratory/MMSI.
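For readers who want a concrete picture, below is a minimal, hypothetical sketch (in PyTorch) of the two ingredients the summary describes: a large dictionary (memory) of embeddings that supplies semiglobal pairs beyond the current batch, and a learned per-pair weighting of a pairwise loss. All names (MemoryQueue, PairWeightNet, weighted_pair_loss) are illustrative, not the paper's API, and the tiny weighting MLP is only a stand-in for MMSI's meta-learned weighting.

    # Hypothetical sketch, not the authors' code: a FIFO embedding dictionary
    # plus a learned per-pair weight on a contrastive-style pairwise loss.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MemoryQueue:
        """FIFO dictionary of past embeddings and labels (MoCo-style memory)."""
        def __init__(self, dim, size, num_classes=100):
            self.feats = F.normalize(torch.randn(size, dim), dim=1)
            self.labels = torch.randint(0, num_classes, (size,))
            self.ptr = 0

        @torch.no_grad()
        def enqueue(self, feats, labels):
            idx = torch.arange(self.ptr, self.ptr + feats.size(0)) % self.feats.size(0)
            self.feats[idx] = F.normalize(feats, dim=1)
            self.labels[idx] = labels
            self.ptr = int(idx[-1] + 1) % self.feats.size(0)

    class PairWeightNet(nn.Module):
        """Tiny MLP mapping (similarity, same-class flag) to a pair weight.
        A stand-in for MMSI's weighting, which the paper learns with
        meta-gradients against validation batches."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))

        def forward(self, sim, pos_mask):
            x = torch.stack([sim, pos_mask.float()], dim=-1)
            return torch.sigmoid(self.net(x)).squeeze(-1)

    def weighted_pair_loss(feats, labels, memory, weight_net, margin=0.5):
        """Pairwise margin loss over batch-vs-memory pairs, reweighted per pair."""
        f = F.normalize(feats, dim=1)
        sim = f @ memory.feats.t()                       # (B, K) cosine similarities
        pos = labels[:, None] == memory.labels[None, :]  # (B, K) same-class mask
        w = weight_net(sim, pos)                         # learned per-pair weights
        pull = (1.0 - sim).clamp(min=0)                  # draw positives together
        push = (sim - margin).clamp(min=0)               # push negatives below margin
        return (w * torch.where(pos, pull, push)).mean()

    # Usage: one training step with a stand-in embedding network.
    embed = nn.Linear(32, 8)
    memory = MemoryQueue(dim=8, size=256)
    weight_net = PairWeightNet()
    x, y = torch.randn(16, 32), torch.randint(0, 100, (16,))
    feats = embed(x)
    loss = weighted_pair_loss(feats, y, memory, weight_net)
    loss.backward()                                      # grads reach embed and weight_net
    memory.enqueue(feats.detach(), y)                    # refresh the dictionary

In the paper itself, the per-pair weights are obtained via meta-learning (a bilevel update against validation batches) rather than by jointly training an MLP as above; the sketch only shows where the dictionary and the pair weights enter the loss.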