
Reinforcement Learning-Based End-to-End Parking for Automatic Parking System.

Peizhi Zhang, Lu Xiong, Zhuoping Yu, Peiyuan Fang, Senwei Yan, Jie Yao, Yi Zhou.

Abstract

In the existing mainstream automatic parking system (APS), a parking path is first planned based on the parking slot detected by the sensors, and a path tracking module then guides the vehicle along the planned path. However, because vehicle dynamics are nonlinear, path tracking errors inevitably occur, leading to inclination and lateral deviation of the final parking attitude. Accordingly, this paper proposes a reinforcement learning-based end-to-end parking algorithm to achieve automatic parking. The vehicle continuously learns and accumulates experience from numerous parking attempts, learning the optimal steering wheel angle command for different parking slots. With this end-to-end approach, errors caused by path tracking are avoided. Moreover, to ensure that the parking slot can be perceived continuously during learning, a parking slot tracking algorithm is proposed that combines vision and vehicle chassis information. Furthermore, because the learning network output is hard to converge and easily falls into local optima during the parking process, several reinforcement learning training methods tailored to different parking conditions are developed. Finally, real-vehicle tests demonstrate that the proposed method achieves a better parking attitude than the path planning and path tracking-based method.
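To illustrate the end-to-end idea described above, the following is a minimal sketch only, not the paper's actual algorithm: the real system learns continuous steering-wheel-angle commands from parking-slot observations, whereas this toy uses tabular Q-learning over a discretized lateral offset from the slot center, with coarse steering actions. All names, state/action definitions, and reward values here are illustrative assumptions.

```python
import random

STATES = range(-5, 6)          # discretized lateral offset from slot center
ACTIONS = (-1, 0, 1)           # coarse steering commands (illustrative)

def step(state, action):
    """Toy kinematics: the steering command shifts the lateral offset."""
    nxt = max(-5, min(5, state + action))
    reward = 10.0 if nxt == 0 else -abs(nxt) * 0.1  # parked vs. offset penalty
    return nxt, reward, nxt == 0

def train(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    """Accumulate experience over many simulated parking attempts."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
    for _ in range(episodes):
        s = rng.choice([x for x in STATES if x != 0])
        for _ in range(20):  # bounded parking maneuver
            # Epsilon-greedy exploration over steering commands.
            if rng.random() < eps:
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            nxt, r, done = step(s, a)
            best_next = max(q[(nxt, x)] for x in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = nxt
            if done:
                break
    return q

q = train()
# The learned greedy policy maps each offset directly to a steering command,
# with no intermediate planned path to track.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
```

The key point the sketch captures is that the policy maps perception (here, the offset) directly to a steering command, so no path tracking error can accumulate; the paper's training methods for convergence and local optima are not modeled here.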

Keywords:  automatic parking system (APS); end-to-end parking; parking slot tracking; reinforcement learning

Year:  2019        PMID: 31527481      PMCID: PMC6766814          DOI: 10.3390/s19183996

Source DB:  PubMed          Journal:  Sensors (Basel)        ISSN: 1424-8220            Impact factor:   3.576


  5 in total

1.  Quantization-Based Adaptive Actor-Critic Tracking Control With Tracking Error Constraints.

Authors:  Quan-Yong Fan; Guang-Hong Yang; Dan Ye
Journal:  IEEE Trans Neural Netw Learn Syst       Date:  2017-02-01       Impact factor: 10.451

2.  Actor-Critic Off-Policy Learning for Optimal Control of Multiple-Model Discrete-Time Systems.

Authors:  Jan Skach; Bahare Kiumarsi; Frank L Lewis; Ondrej Straka
Journal:  IEEE Trans Cybern       Date:  2016-11-02       Impact factor: 11.448

3.  Control of Gene Regulatory Networks Using Bayesian Inverse Reinforcement Learning.

Authors:  Mahdi Imani; Ulisses M Braga-Neto
Journal:  IEEE/ACM Trans Comput Biol Bioinform       Date:  2018-04-26       Impact factor: 3.710

4.  Introduction to the special issue on deep reinforcement learning: An editorial.

Authors:  Ron Sun; David Silver; Gerald Tesauro; Guang-Bin Huang
Journal:  Neural Netw       Date:  2018-08-03

5.  Training an Actor-Critic Reinforcement Learning Controller for Arm Movement Using Human-Generated Rewards.

Authors:  Kathleen M Jagodnik; Philip S Thomas; Antonie J van den Bogert; Michael S Branicky; Robert F Kirsch
Journal:  IEEE Trans Neural Syst Rehabil Eng       Date:  2017-05-02       Impact factor: 3.802

  1 in total

1.  Autonomous Rear Parking via Rapidly Exploring Random-Tree-Based Reinforcement Learning.

Authors:  Saugat Shahi; Heoncheol Lee
Journal:  Sensors (Basel)       Date:  2022-09-02       Impact factor: 3.847
