
Robust Visual Tracking via Hierarchical Convolutional Features.

Chao Ma, Jia-Bin Huang, Xiaokang Yang, Ming-Hsuan Yang.   

Abstract

Visual tracking is challenging as target objects often undergo significant appearance changes caused by deformation, abrupt motion, background clutter, and occlusion. In this paper, we propose to exploit the rich hierarchical features of deep convolutional neural networks to improve the accuracy and robustness of visual tracking. Deep neural networks trained on object recognition datasets consist of multiple convolutional layers. These layers encode target appearance with different levels of abstraction. For example, the outputs of the last convolutional layers encode the semantic information of targets, and such representations are invariant to significant appearance variations. However, their spatial resolutions are too coarse to precisely localize the target. In contrast, features from earlier convolutional layers provide more precise localization but are less invariant to appearance changes. We interpret the hierarchical features of convolutional layers as a nonlinear counterpart of an image pyramid representation and explicitly exploit these multiple levels of abstraction to represent target objects. Specifically, we learn adaptive correlation filters on the outputs from each convolutional layer to encode the target appearance. We infer the maximum response of each layer to locate targets in a coarse-to-fine manner. To further handle scale estimation and re-detection of target objects after tracking failures caused by heavy occlusion or out-of-view movement, we conservatively learn another correlation filter that maintains a long-term memory of target appearance and serves as a discriminative classifier. We apply the classifier to two types of object proposals: (1) proposals generated with a small step size, tightly around the estimated location, for scale estimation; and (2) proposals generated with a large step size across the whole image for target re-detection.
Extensive experimental results on large-scale benchmark datasets show that the proposed algorithm performs favorably against the state-of-the-art tracking methods.
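The correlation-filter machinery the abstract describes can be sketched in a few lines. The snippet below is a minimal MOSSE/KCF-style sketch: a multi-channel correlation filter learned in closed form in the Fourier domain, a response map whose peak gives the target translation, and a weighted coarse-to-fine fusion of per-layer responses. The function names, the fixed fusion weights, and the use of generic feature arrays in place of actual CNN activations are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_label(h, w, sigma=2.0):
    # Desired response: a 2-D Gaussian whose peak is rolled to (0, 0),
    # so a perfect tracker answers "no displacement" on the training patch.
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((ys - h // 2) ** 2 + (xs - w // 2) ** 2) / (2 * sigma ** 2))
    return np.roll(g, (-(h // 2), -(w // 2)), axis=(0, 1))

def learn_filter(feat, label, lam=1e-4):
    # feat: (H, W, C) feature map from one layer. Closed-form ridge
    # regression in the Fourier domain, one filter per channel, with a
    # shared denominator summed over channels.
    F = np.fft.fft2(feat, axes=(0, 1))
    Y = np.fft.fft2(label)
    num = Y[..., None] * np.conj(F)
    den = np.sum(F * np.conj(F), axis=2).real + lam
    return num / den[..., None]

def response_map(filt, feat):
    # Correlate the learned filter with a new feature map; the argmax of
    # the real-valued response is the estimated target translation.
    Z = np.fft.fft2(feat, axes=(0, 1))
    return np.real(np.fft.ifft2(np.sum(filt * Z, axis=2)))

def fuse_responses(responses, weights):
    # Coarse-to-fine fusion: a weighted sum of per-layer response maps
    # (all assumed resized to a common grid), with semantic (late) layers
    # typically weighted more heavily than early layers.
    return sum(w * r for w, r in zip(weights, responses))
```

In the paper this idea is applied per convolutional layer, locating the target first from the coarse semantic response and then refining with earlier-layer responses; here a single filter and fixed weights stand in for that hierarchy.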

Year:  2018        PMID: 30106709     DOI: 10.1109/TPAMI.2018.2865311

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


Similar articles:  3 in total

1.  Shape-Texture Debiased Training for Robust Template Matching.

Authors:  Bo Gao; Michael W Spratling
Journal:  Sensors (Basel)       Date:  2022-09-02       Impact factor: 3.847

2.  Visual Tracking via Deep Feature Fusion and Correlation Filters.

Authors:  Haoran Xia; Yuanping Zhang; Ming Yang; Yufang Zhao
Journal:  Sensors (Basel)       Date:  2020-06-14       Impact factor: 3.576

3.  Multi-Feature Single Target Robust Tracking Fused with Particle Filter.

Authors:  Caihong Liu; Mayire Ibrayim; Askar Hamdulla
Journal:  Sensors (Basel)       Date:  2022-02-27       Impact factor: 3.576

