
End-to-End Active Object Tracking and Its Real-World Deployment via Reinforcement Learning.

Wenhan Luo, Peng Sun, Fangwei Zhong, Wei Liu, Tong Zhang, Yizhou Wang.   

Abstract

We study active object tracking, where a tracker takes visual observations (i.e., frame sequences) as input and produces the corresponding camera control signals as output (e.g., move forward, turn left, etc.). Conventional methods tackle tracking and camera control separately, and the resulting system is difficult to tune jointly. These methods also require significant human effort for image labeling and expensive trial-and-error system tuning in the real world. To address these issues, we propose, in this paper, an end-to-end solution via deep reinforcement learning. A ConvNet-LSTM function approximator is adopted for direct frame-to-action prediction. We further propose an environment augmentation technique and a customized reward function, which are crucial for successful training. The tracker trained in simulators (ViZDoom and Unreal Engine) demonstrates good generalization behaviors in the case of unseen object moving paths, unseen object appearances, unseen backgrounds, and distracting objects. The system is robust and can restore tracking after occasional loss of the target being tracked. We also find that the tracking ability, obtained solely from simulators, can potentially transfer to real-world scenarios. We demonstrate successful examples of such transfer, via experiments over the VOT dataset and the deployment of a real-world robot using the proposed active tracker trained in simulation.
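The "customized reward function" mentioned in the abstract typically rewards the tracker for keeping the target at a desired distance and heading in its local frame. A minimal toy sketch of that idea is below; the functional form and all constants (`d`, `A`, `c`, `lam`) are illustrative assumptions for this note, not the paper's exact formula.

```python
import math

def tracking_reward(x, y, omega, d=3.0, A=1.0, c=5.0, lam=0.5):
    """Toy shaping reward for active tracking (hypothetical constants).

    (x, y): target position in the tracker's local frame (y = forward axis).
    omega:  target's angular offset from the camera's optical axis (radians).
    d:      desired tracking distance along the forward axis.

    The reward peaks at A when the target sits exactly d units straight
    ahead (x = 0, y = d, omega = 0) and decays as the target drifts in
    position or angle, so the agent is pushed to re-center the target.
    """
    return A - (math.hypot(x, y - d) / c + lam * abs(omega))
```

At the ideal pose the penalty terms vanish and the reward equals `A`; any positional or angular deviation lowers it, which is the shaping behavior an end-to-end RL tracker needs to learn camera control from frames alone.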


Year:  2019        PMID: 30762532     DOI: 10.1109/TPAMI.2019.2899570

Source DB:  PubMed          Journal:  IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828            Impact factor:   6.226


  3 in total

1.  Multiple-target tracking in human and machine vision.

Authors:  Shiva Kamkar; Fatemeh Ghezloo; Hamid Abrishami Moghaddam; Ali Borji; Reza Lashgari
Journal:  PLoS Comput Biol       Date:  2020-04-09       Impact factor: 4.475

2.  Extreme Low-Light Image Enhancement for Surveillance Cameras Using Attention U-Net.

Authors:  Sophy Ai; Jangwoo Kwon
Journal:  Sensors (Basel)       Date:  2020-01-15       Impact factor: 3.576

3.  Perception-Action Coupling Target Tracking Control for a Snake Robot via Reinforcement Learning.

Authors:  Zhenshan Bing; Christian Lemke; Fabrice O Morin; Zhuangyi Jiang; Long Cheng; Kai Huang; Alois Knoll
Journal:  Front Neurorobot       Date:  2020-10-20       Impact factor: 2.650


Beijing Coyote Bioscience Co., Ltd. © 2022-2023.