Weimin Chen, Kelvin Kian Loong Wong, Sifan Long, Zhili Sun.
Abstract
In the field of reinforcement learning, we propose a Correct Proximal Policy Optimization (CPPO) algorithm based on a modified penalty factor β and relative entropy, in order to address the robustness and stationarity problems of traditional algorithms. Firstly, this paper establishes a policy evaluation mechanism through the policy distribution function. Secondly, the state space function is quantified by introducing entropy, whereby an approximation policy is used to approximate the real policy distribution, and kernel-function estimation of the relative entropy is used to fit the reward function for complex problems. Finally, through comparative analysis on classic test cases, we demonstrate that the proposed algorithm is effective, converges faster, and performs better than the traditional PPO algorithm, and that the relative-entropy measure can reveal the differences between policies. In addition, it can use the information of a complex environment more efficiently to learn policies. Our paper not only explains the rationality of the policy distribution theory; the proposed framework can also balance iteration steps, computational complexity, and convergence speed, and we introduce an effective performance measure based on the relative entropy concept.
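The abstract describes CPPO as a relative-entropy (KL) penalized variant of PPO with a modified penalty factor β. As a rough illustration of the general technique, the following is a minimal NumPy sketch of a KL-penalized policy surrogate with an adaptive β. The function names are hypothetical, and the β update shown is the standard adaptive rule from the original PPO paper, not necessarily the correction that CPPO proposes.

```python
import numpy as np

def kl_penalized_surrogate(logp_new, logp_old, advantages, beta):
    """KL-penalized policy-gradient surrogate, as in penalty-based PPO:
    L(theta) = E[ratio * A] - beta * KL(pi_old || pi_new).
    Inputs are per-sample log-probabilities of the taken actions under
    the new and old policies, and the estimated advantages."""
    ratio = np.exp(logp_new - logp_old)   # importance-sampling ratio
    kl = np.mean(logp_old - logp_new)     # sample estimate of KL(old || new)
    return np.mean(ratio * advantages) - beta * kl, kl

def update_beta(beta, kl, kl_target=0.01):
    """Adaptive penalty-factor update (standard PPO-penalty rule, used here
    only as a stand-in; the CPPO paper's exact beta schedule is not given
    in the abstract): grow beta when the measured KL overshoots the target,
    shrink it when the policy barely moves."""
    if kl > 1.5 * kl_target:
        beta *= 2.0
    elif kl < kl_target / 1.5:
        beta /= 2.0
    return beta

# Toy usage on random data.
rng = np.random.default_rng(0)
logp_old = rng.normal(-1.0, 0.1, size=256)
logp_new = logp_old + rng.normal(0.0, 0.05, size=256)
adv = rng.normal(size=256)
loss, kl = kl_penalized_surrogate(logp_new, logp_old, adv, beta=1.0)
print(f"surrogate={loss:.4f}  KL~{kl:.4f}  new beta={update_beta(1.0, kl):.2f}")
```

In this formulation, β trades off reward improvement against the relative entropy between successive policies, which is the quantity the abstract says CPPO both penalizes and reports as a performance measure.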
Keywords: approximation theory; correct proximal policy optimization; entropy; optimization; policy gradient; reinforcement learning
Year: 2022 | PMID: 35455103 | PMCID: PMC9031020 | DOI: 10.3390/e24040440
Source DB: PubMed | Journal: Entropy (Basel) | ISSN: 1099-4300 | Impact factor: 2.524
Figure 2. Traditional PPO algorithm on the Atari 2600 games, run on the OpenAI Gym experimental platform, a comprehensive platform for testing new algorithms. (a) PPO algorithm in Alien-ram-v0. (b) Reward of the PPO algorithm in Alien-ram-v0.
Figure 3. Performance of the CPPO algorithm on the Atari 2600 games in the initial stage. (a) Alien-ram-v0. (b) Asterix-v0. (c) Enduro-v0. (d) SpaceInvader-ram-v0.
Figure 4. Rewards of the algorithms on the Atari 2600 games. (a) Alien-ram-v0. (b) Asterix-v0. (c) Enduro-v0. (d) SpaceInvader-ram-v0.
Figure 5. Loss of the algorithms on the Atari 2600 games, showing how the CPPO and PPO losses evolve at each iteration time step. (a) Alien-ram-v0. (b) Asterix-v0. (c) Enduro-v0. (d) SpaceInvader-ram-v0.
Total reward achieved by the CPPO and PPO algorithms over the same number of iterations.
| Game | Alien-ram-v0 | Asterix-v0 | Enduro-v0 | SpaceInvader-ram-v0 |
|---|---|---|---|---|
| CPPO (total reward) | 226,214 | 183,496 | 267,548 | 175,857 |
| PPO (total reward) | 45,931 | 81,578 | 221,451 | 43,571 |
| Number of iterations | 1200 | 1200 | 1200 | 1000 |
| Iteration time | ≥6.5 h | ≥7 h | ≥7.5 h | ≤5 h |