Distance metric learning with eigenvalue optimization

The main theme of this paper is to develop a novel eigenvalue optimization framework for learning a Mahalanobis metric. Within this context, we introduce a metric learning approach called DML-eig, which is shown to be equivalent to a well-known eigenvalue optimization problem: minimizing the maximal eigenvalue of a symmetric matrix (Overton, 1988; Lewis and Overton, 1996). Moreover, we formulate LMNN (Weinberger et al., 2005), one of the state-of-the-art metric learning methods, as a similar eigenvalue optimization problem. This framework not only provides new insights into metric learning but also opens new avenues for the design of efficient metric learning algorithms. Indeed, first-order algorithms are developed for DML-eig and LMNN that require only the computation of the largest eigenvector of a matrix per iteration. Their convergence characteristics are rigorously established. Experiments on benchmark data sets show the competitive performance of the new approaches. In addition, we report an encouraging result on the challenging Labeled Faces in the Wild (LFW) face verification data set.
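
For orientation, the two objects the abstract pairs up can be written compactly. The sketch below uses standard definitions rather than the paper's exact formulation: the Mahalanobis distance induced by a positive semi-definite matrix M, and the generic "minimize the maximal eigenvalue" problem in the sense of Overton (1988), shown here over the unit simplex; the symmetric matrices A_tau are placeholders for whatever pairwise constraint matrices a concrete method such as DML-eig would construct.

% Mahalanobis distance induced by a PSD matrix M (standard definition):
d_M(x, y) = \sqrt{(x - y)^\top M \, (x - y)}, \qquad M \succeq 0.

% Generic maximal-eigenvalue minimization (Overton, 1988), here over the
% unit simplex; the A_\tau are placeholder symmetric matrices:
\min_{u \in \Delta} \; \lambda_{\max}\!\Big( \sum_{\tau} u_\tau A_\tau \Big),
\qquad \Delta = \Big\{ u : u_\tau \ge 0, \; \sum_{\tau} u_\tau = 1 \Big\}.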

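The following is a minimal NumPy sketch of the kind of first-order update the abstract describes, not the authors' exact DML-eig or LMNN solver: a generic Frank-Wolfe loop over the spectahedron {M : M PSD, trace(M) = 1}, in which the linear subproblem of each iteration is solved by a single largest-eigenvector computation — the per-iteration cost the abstract cites. The helper names (top_eigenvector, frank_wolfe_spectahedron) are hypothetical.

import numpy as np


def top_eigenvector(A, n_iter=500, seed=0):
    """Eigenvector of the largest (algebraic) eigenvalue of symmetric A.

    Power iteration is run on the shifted matrix A + c*I, where c is a
    Gershgorin bound on the spectral radius, so that the target
    eigenvalue becomes the dominant one.
    """
    n = A.shape[0]
    c = np.abs(A).sum(axis=1).max()        # bound on |lambda| for symmetric A
    B = A + c * np.eye(n)                  # PSD shift; same eigenvectors as A
    v = np.random.default_rng(seed).standard_normal(n)
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        w = B @ v
        norm = np.linalg.norm(w)
        if norm == 0.0:                    # degenerate case: A = -c*I
            return v
        v = w / norm
    return v


def frank_wolfe_spectahedron(grad, dim, n_steps=200):
    """Maximize a concave, differentiable f(M) over the spectahedron.

    grad(M) must return the symmetric gradient of f at M. The linear
    maximizer of <G, S> over {S PSD, trace(S) = 1} is v v^T for the
    largest eigenvector v of G -- hence one eigenvector per iteration.
    """
    M = np.eye(dim) / dim                  # feasible starting point
    for t in range(n_steps):
        G = grad(M)
        v = top_eigenvector(G)
        S = np.outer(v, v)                 # extreme point of the feasible set
        step = 2.0 / (t + 2.0)             # standard Frank-Wolfe schedule
        M = (1.0 - step) * M + step * S
    return M


# Toy check: for the linear objective f(M) = <C, M>, the optimum over the
# spectahedron is v v^T for C's top eigenvector, so <C, M> should approach
# lambda_max(C).
rng = np.random.default_rng(1)
C = rng.standard_normal((6, 6))
C = (C + C.T) / 2.0
M = frank_wolfe_spectahedron(lambda M: C, dim=6)
print(np.trace(C @ M), np.linalg.eigvalsh(C).max())   # nearly equal

The design point this illustrates is the one the abstract emphasizes: over the spectahedron, the expensive full eigendecomposition of semi-definite programming is replaced by a single top-eigenvector computation per iteration, which scales to much larger matrices.
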
Bibliographic Details
Main Authors: Yiming Ying, Peng Li
Other Authors: Sören Sonnenburg, Francis Bach, Cheng Soon Ong; The Pennsylvania State University CiteSeerX Archives
Format: Text
Language: English
Published: 2012
Subjects: metric learning, convex optimization, semi-definite programming, first-order methods, eigenvalue optimization
Online Access: http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.413.7512
http://jmlr.org/papers/volume13/ying12a/ying12a.pdf