Paolo Pagliuca, Nicola Milano, Stefano Nolfi.
Abstract
We analyze the efficacy of modern neuro-evolutionary strategies for continuous control optimization. Overall, the results collected on a wide variety of qualitatively different benchmark problems indicate that these methods are generally effective and scale well with respect to the number of parameters and the complexity of the problem. Moreover, they are relatively robust to the setting of hyper-parameters. A comparison of the most promising methods indicates that the OpenAI-ES algorithm matches or outperforms the other algorithms on all considered problems. Finally, we demonstrate that reward functions optimized for reinforcement learning methods are not necessarily effective for evolutionary strategies, and vice versa. This finding calls for a reconsideration of the relative efficacy of the two classes of algorithms, since it implies that the comparisons performed to date are biased toward one class or the other.
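The OpenAI-ES algorithm highlighted in the abstract estimates a gradient from the rewards of Gaussian parameter perturbations, using antithetic (mirrored) sampling and rank-based fitness shaping. A minimal sketch of that update follows; the hyper-parameter values, the toy quadratic fitness function, and the function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def openai_es(fitness, theta, sigma=0.1, alpha=0.05, pop=50, iters=300, seed=0):
    """Minimal OpenAI-ES-style optimizer (a sketch, not the paper's code).

    fitness: maps a parameter vector to a scalar reward to be maximized.
    """
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        eps = rng.standard_normal((pop, theta.size))
        eps = np.concatenate([eps, -eps])            # antithetic (mirrored) samples
        rewards = np.array([fitness(theta + sigma * e) for e in eps])
        # Centered-rank fitness shaping: map rewards to values in [-0.5, 0.5].
        ranks = rewards.argsort().argsort().astype(float)
        shaped = ranks / (len(ranks) - 1) - 0.5
        # Gradient estimate: reward-weighted average of the perturbations.
        grad = shaped @ eps / (len(eps) * sigma)
        theta = theta + alpha * grad                 # ascend the estimated gradient
    return theta

# Toy stand-in for a control task: maximize the negative squared distance
# to a hypothetical target parameter vector.
target = np.array([1.0, -2.0, 0.5])
best = openai_es(lambda w: -np.sum((w - target) ** 2), np.zeros(3))
```

In practice the paper's benchmarks evaluate `fitness` by running a neural controller in a simulated environment; the quadratic objective here just makes the sketch self-contained.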
Keywords: continuous control optimization; evolutionary strategies; fitness function design; natural evolutionary strategies; reinforcement learning
Year: 2020 PMID: 33501265 PMCID: PMC7805676 DOI: 10.3389/frobt.2020.00098
Source DB: PubMed Journal: Front Robot AI ISSN: 2296-9144