
Deep Reinforcement Learning-Based Automatic Exploration for Navigation in Unknown Environment.

Haoran Li, Qichao Zhang, Dongbin Zhao.   

Abstract

This paper investigates the problem of automatic exploration in unknown environments, a key capability for applying robotic systems to practical tasks. Solutions built by stacking hand-crafted decision rules cannot cover the variety of environments and sensor properties, whereas learning-based control methods adapt to such scenarios. However, these methods suffer from low learning efficiency and poor transferability from simulation to reality. In this paper, we construct a general exploration framework by decomposing the exploration process into decision, planning, and mapping modules, which increases the modularity of the robotic system. Based on this framework, we propose a deep reinforcement learning-based decision algorithm that uses a deep neural network to learn an exploration strategy from the partial map. The results show that the proposed algorithm has better learning efficiency and adaptability to unknown environments. In addition, we conduct experiments on a physical robot, and the results suggest that the learned policy transfers well from simulation to the real robot.
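The framework described in the abstract separates exploration into decision, planning, and mapping modules, with the decision module choosing where to go next from the partially built map. Below is a minimal, hypothetical sketch of that decision step (not the authors' code): frontier cells are extracted from a partial occupancy grid, and a stand-in linear scorer picks a goal. In the paper, this scoring role is played by a deep neural network trained with reinforcement learning; the `weights`, feature choices, and grid encoding here are illustrative assumptions.

```python
import numpy as np

# Assumed cell encoding for the partial occupancy grid.
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def frontiers(grid):
    """Mapping-module output consumed by the decision module:
    free cells adjacent to at least one unknown cell."""
    h, w = grid.shape
    cells = []
    for r in range(h):
        for c in range(w):
            if grid[r, c] != FREE:
                continue
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < h and 0 <= cc < w and grid[rr, cc] == UNKNOWN:
                    cells.append((r, c))
                    break
    return cells

def decide(grid, pos, weights):
    """Decision module (stand-in for the learned policy):
    score each frontier by two hand-picked features --
    negative Manhattan distance to the robot and the number of
    unknown neighbors -- and return the highest-scoring goal."""
    candidates = frontiers(grid)
    if not candidates:
        return None  # map complete: exploration finished

    h, w = grid.shape
    def score(cell):
        r, c = cell
        dist = abs(r - pos[0]) + abs(c - pos[1])
        unknown_nbrs = sum(
            1
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 <= r + dr < h and 0 <= c + dc < w
            and grid[r + dr, c + dc] == UNKNOWN
        )
        return float(weights @ np.array([-dist, unknown_nbrs]))

    return max(candidates, key=score)

# Toy partial map: everything unknown except two explored free cells.
grid = np.full((5, 5), UNKNOWN)
grid[2, 2] = FREE
grid[2, 3] = FREE
goal = decide(grid, pos=(2, 2), weights=np.array([1.0, 0.1]))
```

A planning module would then produce a path from `pos` to `goal`, and the mapping module would fold new sensor readings into `grid` before the next decision step.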

Entities:  

Year:  2019        PMID: 31398138     DOI: 10.1109/TNNLS.2019.2927869

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw Learn Syst        ISSN: 2162-237X            Impact factor:   10.451


  2 in total

1.  Searching and Tracking an Unknown Number of Targets: A Learning-Based Method Enhanced with Maps Merging.

Authors:  Peng Yan; Tao Jia; Chengchao Bai
Journal:  Sensors (Basel)       Date:  2021-02-04       Impact factor: 3.576

2.  A Soar-Based Space Exploration Algorithm for Mobile Robots.

Authors:  Fei Luo; Qin Zhou; Joel Fuentes; Weichao Ding; Chunhua Gu
Journal:  Entropy (Basel)       Date:  2022-03-19       Impact factor: 2.524

