Point-to-set distance metric learning on deep representations for visual tracking


Bibliographic Details
Published in: IEEE Transactions on Intelligent Transportation Systems
Main Authors: Zhang, Shengping, Qi, Yuankai, Jiang, Feng, Lan, Xiangyuan, Yuen, Pong C., Zhou, Huiyu
Format: Article in Journal/Newspaper
Language: English
Published: 2018
Subjects:
DML
Online Access: https://pure.qub.ac.uk/en/publications/f2dfdeab-829f-46b7-9011-848a88ac1b4c
https://doi.org/10.1109/TITS.2017.2766093
Description
Summary: For autonomous driving applications, a car must be able to track objects in the scene in order to estimate where and how they will move, so that the tracker embedded in the car can alert the vehicle in time for effective collision avoidance. Traditional discriminative object tracking methods usually train a binary classifier via a support vector machine (SVM) scheme to distinguish the target from its background. Despite demonstrated success, the performance of SVM-based trackers is limited because classification depends only on the support vectors (SVs), while the target's dynamic appearance may resemble training samples that were not selected as SVs, especially when the training samples are not linearly separable. In such cases, the tracker may drift to the background and eventually fail to track the target. To address this problem, in this paper, we propose to integrate point-to-set (image-to-image-set) distance metric learning (DML) into visual tracking and to take full advantage of all the training samples when determining the best target candidate. The point-to-set DML is conducted on convolutional neural network features of training data extracted from the starting frames. When a new frame arrives, target candidates are first projected into the common subspace using the learned mapping functions, and the candidate with the minimal distance to the target template sets is then selected as the tracking result. Extensive experimental results show that, even without model update, the proposed method achieves favorable performance on challenging image sequences compared with several state-of-the-art trackers.
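
The candidate-selection step described in the summary can be illustrated with a minimal sketch. This is not the authors' code: it assumes precomputed CNN feature vectors, a single learned linear projection W standing in for the learned mapping functions, and the point-to-set distance taken as the minimum Euclidean distance from the projected candidate to the projected template set; the paper's actual DML formulation and set model are more elaborate.

```python
import numpy as np

def point_to_set_distance(point, template_set):
    """Distance from a projected candidate to a projected template set
    (here simplified to the minimum Euclidean distance to any set member)."""
    diffs = template_set - point             # (n_templates, d)
    return np.min(np.linalg.norm(diffs, axis=1))

def select_candidate(candidate_feats, template_feats, W):
    """Project CNN features with a learned mapping W and return the index
    of the candidate closest to the target template set."""
    proj_candidates = candidate_feats @ W.T  # (n_candidates, d)
    proj_templates = template_feats @ W.T    # (n_templates, d)
    dists = [point_to_set_distance(c, proj_templates) for c in proj_candidates]
    return int(np.argmin(dists))

# Toy usage: 5 candidates and 10 template frames with 256-dim CNN features,
# projected into a 64-dim common subspace by a hypothetical learned W.
rng = np.random.default_rng(0)
candidates = rng.normal(size=(5, 256))
templates = rng.normal(size=(10, 256))
W = rng.normal(size=(64, 256))
print("selected candidate index:", select_candidate(candidates, templates, W))
```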