| Literature DB >> 24987350 |
Audio-visual feedback improves the BCI performance in the navigational control of a humanoid robot
Emmanuele Tidoni, Pierre Gergondet, Abderrahmane Kheddar, Salvatore M. Aglioti.
Abstract
Advances in brain-computer interface (BCI) technology allow people to actively interact with the world through surrogates. Controlling a real humanoid robot via BCI as intuitively as we control our own body is a challenge for current research in robotics and neuroscience. To interact successfully with the environment, the brain integrates multiple sensory cues into a coherent representation of the world. Cognitive neuroscience studies demonstrate that multisensory integration may yield a gain relative to a single modality and ultimately improve overall sensorimotor performance. For example, reactivity to simultaneous visual and auditory stimuli may be higher than to the sum of the same stimuli delivered in isolation or in temporal sequence. Yet little is known about whether audio-visual integration can improve the control of a surrogate. To explore this issue, we provided human footstep sounds as auditory feedback to BCI users while they controlled a humanoid robot. Participants were asked to steer their robot surrogate and perform a pick-and-place task through an SSVEP-based BCI. We found that audio-visual synchrony between the footstep sounds and the humanoid's actual gait reduced the time required to steer the robot. Thus, auditory feedback congruent with the humanoid's actions may improve the motor decisions of the BCI user and support the feeling of control over the robot. Our results point to the possibility of improving robot control by combining multisensory feedback for the BCI user.
Keywords: SSVEPs; brain computer interface; humanoid; motor control; sense of agency; teleoperation
Year: 2014 PMID: 24987350 PMCID: PMC4060053 DOI: 10.3389/fnbot.2014.00020
Source DB: PubMed Journal: Front Neurorobot ISSN: 1662-5218 Impact factor: 2.650
Participants' answers to questions assessing the quality of the experience.
| Q1 I was in control of the robot's actions | 66.88 ± 6.74 | 73.13 ± 5.08 | 67.50 ± 7.07 | 71.88 ± 5.50 |
| Q2 I paid attention to the images displayed | 71.88 ± 6.40 | 71.88 ± 6.54 | 74.38 ± 8.10 | 77.50 ± 7.91 |
| Q3 It was easy to instruct the robot about the direction where to move | 64.38 ± 7.99 | 68.75 ± 5.49 | 65.63 ± 10.41 | 62.50 ± 5.00 |
| Q4 Looking at the flashing arrows was difficult | 25.99 ± 8.02 | 27.50 ± 7.96 | 28.13 ± 9.54 | 28.13 ± 7.44 |
Values range from 0 to 100 (mean ± s.e.m.).
Figure 1. Participants located in Rome (Italy) controlled an HRP-2 humanoid robot located in Tsukuba (Japan) via an SSVEP-based BCI system. The subjects guided the robot from a starting position to a first table (marked in green as "1") to grasp a bottle, and then dropped the bottle as close as possible to a target location marked with two concentric circles on a second table (marked in green as "2").
Figure 2. (A) A sequence of images depicting the different sub-goals (SGs). (B) A state-flow diagram showing the finite state machine (FSM). Yellow arrows represent transitions initiated by the user; green arrows represent transitions initiated by the robot.
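The sub-goal flow in Figure 2 lends itself to a small state machine. Below is a minimal Python sketch of how user-initiated (yellow) and robot-initiated (green) transitions could be encoded; the state names, events, and "initiator" tags are hypothetical illustrations, not the paper's actual implementation.

```python
# Minimal FSM sketch for the sub-goal (SG) sequence of Figure 2.
# State and transition names are hypothetical, for illustration only.

TRANSITIONS = {
    # (state, event) -> (next_state, initiator)
    ("SG1_steer_to_table", "reached_table"): ("SG2_grasp_bottle", "user"),
    ("SG2_grasp_bottle",   "grasp_done"):    ("SG3_steer_to_goal", "robot"),
    ("SG3_steer_to_goal",  "reached_goal"):  ("SG4_place_bottle", "user"),
    ("SG4_place_bottle",   "place_done"):    ("done", "robot"),
}

def step(state, event):
    """Advance the FSM; reject transitions absent from the diagram."""
    try:
        next_state, initiator = TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"no transition from {state!r} on {event!r}")
    print(f"{state} --({event}, {initiator})--> {next_state}")
    return next_state

state = "SG1_steer_to_table"
state = step(state, "reached_table")
state = step(state, "grasp_done")
```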
Figure 3. Within the implemented graphical user interface, whenever the BCI recognized a command (e.g., the top arrow for the forward command), the corresponding icon's border changed color from black to green.
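As a rough illustration of this feedback logic, the sketch below maps each recognized command to its icon and recolors the borders; the command names, icon names, and classifier interface are assumptions, not the actual GUI code.

```python
# Hedged sketch of the border-color feedback: the icon matching the
# recognized command turns green, all others stay black. Names are
# illustrative assumptions.

ICONS = {"forward": "top_arrow", "left": "left_arrow",
         "right": "right_arrow", "backward": "bottom_arrow"}

def update_borders(recognized_command):
    return {icon: ("green" if cmd == recognized_command else "black")
            for cmd, icon in ICONS.items()}

print(update_borders("forward"))
# {'top_arrow': 'green', 'left_arrow': 'black',
#  'right_arrow': 'black', 'bottom_arrow': 'black'}
```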
Figure 4. Principle of the recursive and enforced selection during SG4. For example, the user first selects the "A" quarter, which is then zoomed in to allow the subject to select the "3" quarter. The resulting final selection is "A.3."
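The recursive selection can be pictured as repeatedly splitting the view into four labelled quarters and zooming into the chosen one. The following sketch reproduces the "A" then "3" example from the figure; the image size, the two-level depth, and the helper functions are illustrative assumptions, not the authors' code.

```python
# Sketch of the recursive quarter selection in Figure 4. Labels follow
# the figure ("A"-"D" at the first level, "1"-"4" at the second); the
# geometry helper is an assumption for illustration.

def split_quarters(x, y, w, h, labels):
    """Split a rectangle into four labelled quarters (row-major order)."""
    hw, hh = w / 2, h / 2
    cells = [(x, y, hw, hh), (x + hw, y, hw, hh),
             (x, y + hh, hw, hh), (x + hw, y + hh, hw, hh)]
    return dict(zip(labels, cells))

def refine(region, choices):
    """Apply a sequence of quarter selections, zooming in at each step."""
    path = []
    for labels, pick in choices:
        region = split_quarters(*region, labels)[pick]
        path.append(pick)
    return ".".join(path), region

# First pick quarter "A", then quarter "3" inside it -> selection "A.3"
name, rect = refine((0, 0, 640, 480), [("ABCD", "A"), ("1234", "3")])
print(name, rect)   # A.3 (0.0, 120.0, 160.0, 120.0)
```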
Grasping-phase pseudocode step: "Open the gripper."

Variable definitions:
- z_estim: the estimated height from vision.
- z_current: the current height of the robot's hand.
- z_min: the minimum allowed height, corresponding to the robot's physical limitations.
- z_speed: the speed command for the robot's hand.
- z_speed_ref: a reference speed given beforehand.
- force: the force read from the wrist's force sensor.
- force_threshold: a force threshold defined beforehand.

Obstacle detection was used during the phase where the user steers the robot, to ease the control.
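Read together, these definitions suggest a descend-until-contact routine: lower the hand at z_speed_ref until the wrist force exceeds force_threshold or the hand reaches z_min, then act with the gripper. The sketch below is a speculative reconstruction under that reading; the control flow, function names, and the gripper-closing step are assumptions, not the authors' controller.

```python
# Hedged reconstruction of a descend-until-contact loop using the
# variables defined above. Sensors and actuators are injected as
# callables so the sketch stays self-contained.

def grasp_descent(read_height, read_force, send_z_speed,
                  open_gripper, close_gripper,
                  z_min, z_speed_ref, force_threshold):
    """Open the gripper, lower the hand until contact or the height
    limit, then close the gripper (closing is an assumed final step)."""
    open_gripper()                    # the "Open the gripper" step above
    while True:
        z_current = read_height()     # current hand height
        force = read_force()          # wrist force sensor reading
        if force > force_threshold or z_current <= z_min:
            send_z_speed(0.0)         # z_speed = 0: halt the descent
            break
        send_z_speed(-z_speed_ref)    # z_speed: command downward motion
    close_gripper()

# Dry run with stubbed sensors/actuators:
heights = iter([0.30, 0.25, 0.20])                # simulated hand heights
grasp_descent(read_height=lambda: next(heights),
              read_force=lambda: 0.0,             # no contact in this run
              send_z_speed=lambda v: print("z_speed =", v),
              open_gripper=lambda: print("open gripper"),
              close_gripper=lambda: print("close gripper"),
              z_min=0.20, z_speed_ref=0.05, force_threshold=2.0)
```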
Figure 5. Mean walking time to drive the robot from the first to the second table and drop the bottle. Light-gray and dark-gray columns represent the synchronous and asynchronous footstep sounds heard by participants. Error bars represent s.e.m. Asterisks indicate significant comparisons (p < 0.05).