Point-to-Set Distance Metric Learning on Deep Representations for Visual Tracking

In autonomous driving applications, a car must be able to track objects in the scene and estimate where and how they will move, so that the tracker embedded in the car can alert it in time for effective collision avoidance. Traditional discriminative object tracking methods usually train a binary classifier, typically with a support vector machine (SVM), to distinguish the target from its background. Despite demonstrated success, the performance of SVM-based trackers is limited because classification depends only on the support vectors (SVs): the target's dynamic appearance may resemble training samples that were not selected as SVs, especially when the training samples are not linearly separable. In such cases the tracker may drift to the background and eventually lose the target. To address this problem, this paper integrates point-to-set (image-to-image-set) distance metric learning (DML) into visual tracking and takes full advantage of all the training samples when determining the best target candidate. The point-to-set DML is performed on convolutional neural network (CNN) features of the training data extracted from the starting frames. When a new frame arrives, target candidates are first projected into the common subspace using the learned mapping functions, and the candidate with the minimal distance to the target template sets is selected as the tracking result. Extensive experimental results show that, even without model update, the proposed method achieves favorable performance on challenging image sequences compared with several state-of-the-art trackers. This work was supported in part by the National Natural Science Foundation of China (Nos. 61300111 and 61672188). H. Zhou is also supported by UK EPSRC under Grants EP/G034303/1, EP/N508664/1 and EP/N011074/1.
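
The candidate-selection rule described in the abstract can be illustrated with a short, non-authoritative sketch. This is not the authors' implementation: the projection matrices W_point and W_set, the pre-extracted CNN features, and the use of a plain minimum Euclidean point-to-set distance in the common subspace are simplifying assumptions made here for illustration; the paper defines its own DML formulation.

import numpy as np

def point_to_set_distance(x, template_set, W_point, W_set):
    # Map the candidate feature (a "point") and each template feature (the "set")
    # into the shared subspace, then return the smallest Euclidean distance from
    # the projected candidate to any projected template.
    px = W_point @ x                      # projected candidate, shape (k,)
    ps = template_set @ W_set.T           # projected templates, shape (n, k)
    return float(np.min(np.linalg.norm(ps - px, axis=1)))

def select_candidate(candidate_feats, template_sets, W_point, W_set):
    # Report as the tracking result the candidate whose distance to its closest
    # target template set is minimal.
    dists = [min(point_to_set_distance(x, S, W_point, W_set) for S in template_sets)
             for x in candidate_feats]
    return int(np.argmin(dists))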


Bibliographic Details
Published in: IEEE Transactions on Intelligent Transportation Systems, 2018, vol. 19, no. 1, pp. 187-198
ISSN: 1524-9050, 1558-0016
Main Authors: Zhang, Shengping, Qi, Yuankai, Jiang, Feng, Lan, Xiangyuan, Yuen, Pong C., Zhou, Huiyu
Format: Article in Journal/Newspaper
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE) 2018
Subjects:
Metric learning
point to set
visual tracking
Online Access: http://ieeexplore.ieee.org/document/8115211/
http://hdl.handle.net/2381/41049
https://doi.org/10.1109/TITS.2017.2766093
Rights: Copyright © 2017, Institute of Electrical and Electronics Engineers (IEEE). Deposited with reference to the publisher's open access archiving policy.
Version: Peer-reviewed post-print
Collection: University of Leicester: Leicester Research Archive (LRA)