
Pursuit and Evasion Strategy of a Differential Game Based on Deep Reinforcement Learning.

Can Xu, Yin Zhang, Weigang Wang, Ligang Dong.

Abstract

Since the emergence of deep neural networks (DNNs), they have achieved excellent performance in various research areas. Combining DNNs with reinforcement learning, deep reinforcement learning (DRL) has become a new paradigm for solving differential game problems. In this study, we build a reinforcement learning environment and apply relevant DRL methods to a specific bio-inspired differential game: the dog sheep game. The game environment is set on a circle, where the dog chases the sheep as it attempts to escape. Under certain presuppositions, the kinematic pursuit and evasion strategies can be derived. This study then applies the value-based deep Q network (DQN) model and the deep deterministic policy gradient (DDPG) model to the dog sheep game, aiming to endow the sheep with the ability to escape successfully. To enhance the performance of the DQN model, this study introduces a reward mechanism with a time-out strategy and an attenuation mechanism for the sheep's steering angle in the game environment. These modifications effectively increase the sheep's probability of escape. Furthermore, the DDPG model is adopted because of its continuous action space. Results show that the modifications to the DQN model raise the escape probabilities to the same level as the DDPG model. Regarding learning ability under various environment difficulties, the refined DQN and DDPG models show greater performance gains over the naive evasion model in harsh environments than in loose environments.
Copyright © 2022 Xu, Zhang, Wang and Dong.
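
The setup described in the abstract can be sketched as a minimal, gym-style environment. This is not the authors' code: all dynamics, parameter names, and reward values below (radius, speeds, capture distance, the time-out penalty) are illustrative assumptions consistent with the abstract's description of a dog constrained to a circle, a sheep moving inside it, and an episode time-out.

```python
import math

class DogSheepEnv:
    """Illustrative dog sheep game: the dog runs along a circle of radius R,
    the sheep moves freely inside it and escapes if it reaches the boundary
    farther than `capture_dist` from the dog. Parameters are assumptions."""

    def __init__(self, radius=1.0, dog_angular_speed=1.0, sheep_speed=0.3,
                 capture_dist=0.1, max_steps=200, dt=0.05):
        self.R = radius
        self.w_dog = dog_angular_speed      # dog's max angular speed (rad/s)
        self.v_sheep = sheep_speed          # sheep's linear speed
        self.capture_dist = capture_dist
        self.max_steps = max_steps
        self.dt = dt
        self.reset()

    def reset(self):
        self.sheep = [0.0, 0.0]             # sheep starts at the centre
        self.dog_angle = 0.0                # dog's position angle on the circle
        self.steps = 0
        return self._obs()

    def _obs(self):
        return (self.sheep[0], self.sheep[1], self.dog_angle)

    def step(self, heading):
        """`heading` is the sheep's movement direction in radians
        (a continuous action, as in the DDPG formulation)."""
        self.sheep[0] += self.v_sheep * math.cos(heading) * self.dt
        self.sheep[1] += self.v_sheep * math.sin(heading) * self.dt

        # The dog turns along the circle toward the sheep's angular position,
        # limited by its angular speed.
        target = math.atan2(self.sheep[1], self.sheep[0])
        diff = (target - self.dog_angle + math.pi) % (2 * math.pi) - math.pi
        max_turn = self.w_dog * self.dt
        self.dog_angle += max(-max_turn, min(max_turn, diff))
        self.steps += 1

        r = math.hypot(self.sheep[0], self.sheep[1])
        if r >= self.R:                     # sheep reached the boundary
            dog = (self.R * math.cos(self.dog_angle),
                   self.R * math.sin(self.dog_angle))
            d = math.hypot(self.sheep[0] - dog[0], self.sheep[1] - dog[1])
            escaped = d > self.capture_dist
            return self._obs(), (1.0 if escaped else -1.0), True, {"escaped": escaped}
        if self.steps >= self.max_steps:
            # Time-out: the episode ends with a penalty if the sheep
            # never reaches the boundary (the "time-out strategy").
            return self._obs(), -1.0, True, {"escaped": False}
        return self._obs(), 0.0, False, {"escaped": False}
```

A naive evasion policy (run straight away from the dog) loses here when the sheep is too slow relative to the dog, which is what makes learned DQN/DDPG policies interesting in this game; a DRL agent would feed `_obs()` into its network and output `heading` each step.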

Keywords:  deep Q network; deep deterministic policy gradient; deep reinforcement learning; differential game; dog sheep game

Year:  2022        PMID: 35392407      PMCID: PMC8980781          DOI: 10.3389/fbioe.2022.827408

Source DB:  PubMed          Journal:  Front Bioeng Biotechnol        ISSN: 2296-4185


