Verena V. Hafner, Pontus Loviken, Antonio Pico Villalpando, Guido Schillaci.
Abstract
Traditionally investigated in philosophy, body ownership and agency, the two main components of the minimal self, have recently gained attention from other disciplines, such as the brain, cognitive, and behavioral sciences, and even robotics and artificial intelligence. In robotics, intuitive human interaction in natural and dynamic environments is becoming increasingly important, and it requires skills such as self-other distinction and an understanding of agency effects. In a previous review article, we investigated studies on mechanisms for the development of motor and cognitive skills in robots (Schillaci et al., 2016). In this review article, we argue that these mechanisms also build the foundation for an understanding of an artificial self. In particular, we look at developmental processes of the minimal self in biological systems, transfer principles of those to the development of an artificial self, and suggest metrics for agency and body ownership in an artificial self.
Keywords: artificial self; developmental robotics; minimal self; predictive processes; sense of agency; sense of body ownership
Year: 2020 PMID: 32153380 PMCID: PMC7046588 DOI: 10.3389/fnbot.2020.00005
Source DB: PubMed Journal: Front Neurorobot ISSN: 1662-5218 Impact factor: 2.650
Figure 1. Curiosity-based learning method for humanoid robots using postures and regions. This image shows an example of postures learned after 30 min of online learning (Loviken et al., 2018). (A,B) represent two independent runs, and the numbers indicate the states. Each state is responsible for an interval of the angle ϕ, where ϕ is the torso's orientation relative to the ground. A demonstration video can be found at this URL: https://www.youtube.com/watch?v=QzZsJxyGGIk.
Figure 2. Self-body attenuation through predictive processes (Lang et al., 2018). The humanoid robot Nao moves its arm in front of an object. The first row shows the frames recorded from its camera. The second row shows the enhanced frames, in which self-body perception is attenuated. The attenuation is aided by a forward model, which anticipates the pixels where the robot's arm will appear after an intended motor command is executed.
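The attenuation step described in the caption can be sketched as follows. This is a minimal illustration, not the implementation of Lang et al. (2018): it assumes the forward model has already produced a boolean mask of pixels where the arm is predicted to appear, and simply dims those pixels by a constant factor (the function name and the factor are illustrative choices).

```python
import numpy as np

def attenuate_self_body(frame, predicted_mask, factor=0.2):
    """Dim the pixels where the forward model predicts the robot's own body.

    frame          -- (H, W) or (H, W, C) float array with values in [0, 1]
    predicted_mask -- (H, W) boolean array, True where the arm is anticipated
    factor         -- multiplier applied to the predicted self-body pixels
    """
    enhanced = frame.astype(float).copy()
    enhanced[predicted_mask] *= factor  # attenuate only the self-body region
    return enhanced

# Toy 4x4 frame: the "arm" is predicted to occupy the left half of the image.
frame = np.ones((4, 4))
mask = np.zeros((4, 4), dtype=bool)
mask[:, :2] = True
enhanced = attenuate_self_body(frame, mask)
print(enhanced[0, 0], enhanced[0, 3])  # attenuated vs. untouched pixel
```

Pixels inside the predicted self-body region are scaled down while the rest of the scene, including the object in front of the arm, is left unchanged.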
Figure 3. An illustration of the forward model adopted in Lang et al. (2018) for generating image predictions from low-dimensional proprioceptive and motor states through a convolutional neural network. Legend: S(t): sensory state at time t. M(t): motor command sent at time t. D: Dense, i.e., fully connected, neural network layer. C: Convolutional neural network layer. TC: Transposed Convolutional neural network layer. Every layer except the last (output) one is followed by a ReLU activation unit (not shown) (Lang et al., 2018).
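The data flow in this architecture, from a low-dimensional state to a predicted image, can be sketched in plain numpy. This is only a shape-level toy with random weights, not the trained network of Lang et al. (2018): the layer sizes, the single channel, and the 4×4 kernel are assumptions made to keep the example short, and only one D layer and one TC layer are shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # ReLU activation applied after every layer except the output layer
    return np.maximum(x, 0.0)

def transposed_conv2d(x, kernel, stride=2):
    """Naive single-channel transposed convolution (upsamples the feature map)."""
    h, w = x.shape
    kh, kw = kernel.shape
    out = np.zeros((h * stride + kh - stride, w * stride + kw - stride))
    for i in range(h):
        for j in range(w):
            # Each input value "paints" a scaled copy of the kernel
            out[i * stride:i * stride + kh, j * stride:j * stride + kw] += x[i, j] * kernel
    return out

# S(t): proprioceptive state, M(t): motor command (4 values each, assumed sizes)
s_t = rng.normal(size=4)
m_t = rng.normal(size=4)
x = np.concatenate([s_t, m_t])            # low-dimensional input

# D: dense layer expanding the input to a 4x4 feature map, followed by ReLU
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
feat = relu(x @ w1 + b1).reshape(4, 4)

# TC: transposed convolution upsampling 4x4 -> 10x10; output layer, no ReLU
kernel = rng.normal(size=(4, 4))
pred_image = transposed_conv2d(feat, kernel, stride=2)
print(pred_image.shape)  # (10, 10)
```

The dense layers map the concatenated proprioceptive and motor vector up to a spatial feature map, and the transposed-convolution layers progressively upsample it into the predicted camera frame that Figure 2's attenuation step consumes.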