
State representation learning for control: An overview.

Timothée Lesort, Natalia Díaz-Rodríguez, Jean-François Goudou, David Filliat.

Abstract

Representation learning algorithms are designed to learn abstract features that characterize data. State representation learning (SRL) focuses on a particular kind of representation learning where the learned features are low-dimensional, evolve through time, and are influenced by the actions of an agent. The representation is learned to capture the variation in the environment generated by the agent's actions; this kind of representation is particularly suitable for robotics and control scenarios. In particular, the low dimensionality of the representation helps to overcome the curse of dimensionality, provides easier interpretation and utilization by humans, and can improve performance and speed in policy learning algorithms such as reinforcement learning. This survey aims to cover the state of the art in state representation learning over recent years. It reviews different SRL methods that involve interaction with the environment, their implementations, and their applications in robotics control tasks (simulated or real). In particular, it highlights how generic learning objectives are differently exploited in the reviewed algorithms. Finally, it discusses evaluation methods to assess the learned representation and summarizes current and future lines of research.
Copyright © 2018 Elsevier Ltd. All rights reserved.
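The abstract mentions generic learning objectives that shape a low-dimensional, action-conditioned state representation. As a hedged illustration only (not the survey's method), the toy NumPy sketch below shows one such objective in its simplest linear form: a forward-model consistency loss, where the encoded state plus the action should predict the encoding of the next observation. All dimensions, variable names, and the linear setup are illustrative assumptions.

```python
import numpy as np

# Toy setting (an assumption for illustration): a hidden 2-D state generates
# 16-D observations, and the agent's action shifts that hidden state.
rng = np.random.default_rng(0)
obs_dim, state_dim, act_dim, n = 16, 2, 2, 512

M = rng.normal(size=(state_dim, obs_dim))   # hidden state -> observation map
s = rng.normal(size=(n, state_dim))         # hidden states the agent visits
a = rng.normal(size=(n, act_dim))           # agent actions
s_next = s + a                              # actions move the hidden state
o, o_next = s @ M, s_next @ M               # the agent only sees observations

# Linear encoder W: observation -> low-dimensional state, trained with a
# forward-model objective: encode(o) + action should match encode(o_next).
W = 0.01 * rng.normal(size=(obs_dim, state_dim))
lr = 0.01
for _ in range(300):
    err = o @ W + a - o_next @ W            # forward-prediction residual
    W -= lr * (o - o_next).T @ err / n      # gradient of 0.5 * mean ||err||^2

loss = 0.5 * np.mean(np.sum((o @ W + a - o_next @ W) ** 2, axis=1))
# The encoder recovers the controllable factors: M @ W converges to the
# identity, i.e. 16-D observations are compressed back to the 2-D state.
```

Nonlinear encoders trained with this and related objectives (reconstruction, inverse models, temporal priors) are what the reviewed SRL methods implement; the sketch only conveys why action-conditioned prediction pins down a useful low-dimensional state.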

Keywords:  Disentanglement of control factors; Learning disentangled representations; Low dimensional embedding learning; Reinforcement learning; Robotics; State representation learning

Year:  2018        PMID: 30268059     DOI: 10.1016/j.neunet.2018.07.006

Source DB:  PubMed          Journal:  Neural Netw        ISSN: 0893-6080


  4 in total

1.  Exploratory State Representation Learning.

Authors:  Astrid Merckling; Nicolas Perrin-Gilbert; Alex Coninx; Stéphane Doncieux
Journal:  Front Robot AI       Date:  2022-02-14

2.  Using deep reinforcement learning to reveal how the brain encodes abstract state-space representations in high-dimensional environments.

Authors:  Logan Cross; Jeff Cockburn; Yisong Yue; John P O'Doherty
Journal:  Neuron       Date:  2020-12-15       Impact factor: 17.173

3.  From Semantics to Execution: Integrating Action Planning With Reinforcement Learning for Robotic Causal Problem-Solving.

Authors:  Manfred Eppe; Phuong D H Nguyen; Stefan Wermter
Journal:  Front Robot AI       Date:  2019-11-26

4.  Visual Pretraining via Contrastive Predictive Model for Pixel-Based Reinforcement Learning.

Authors:  Tung M Luu; Thang Vu; Thanh Nguyen; Chang D Yoo
Journal:  Sensors (Basel)       Date:  2022-08-29       Impact factor: 3.847

