| Literature DB >> 23227142 |
Ali Sengül, Michiel van Elk, Giulio Rognini, Jane Elizabeth Aspell, Hannes Bleuler, Olaf Blanke.
Abstract
The effects of real-world tool use on body or space representations are relatively well established in cognitive neuroscience. Several studies have shown, for example, that active tool use results in a facilitated integration of multisensory information in peripersonal space, i.e. the space directly surrounding the body. However, it remains unknown to what extent similar mechanisms apply to the use of virtual-robotic tools, such as those used in surgical robotics, in which a surgeon may use bimanual haptic interfaces to control a surgery robot at a remote location. This paper presents two experiments in which participants used a haptic handle, originally designed for a commercial surgery robot, to control a virtual tool. The integration of multisensory information related to the virtual-robotic tool was assessed by means of the crossmodal congruency task, in which subjects responded to tactile vibrations applied to their fingers while ignoring visual distractors superimposed on the tip of the virtual-robotic tool. Our results show that active virtual-robotic tool use changes the spatial modulation of crossmodal congruency effects, comparable to the changes in the representation of peripersonal space observed during real-world tool use. Moreover, when the virtual-robotic tools were held in a crossed position, the visual distractors interfered more strongly with tactile stimuli delivered to the hand connected to the tool, reflecting a remapping of peripersonal space. Such remapping was observed not only when the virtual-robotic tools were actively used (Experiment 1), but also when they were held passively (Experiment 2). The present study extends earlier findings on the extension of peripersonal space from physical and pointing tools to virtual-robotic tools using techniques from haptics and virtual reality.
We discuss our data with respect to learning and human factors in the field of surgical robotics, and with respect to the use of new technologies in cognitive neuroscience.
Entities:
Mesh:
Year: 2012 PMID: 23227142 PMCID: PMC3515602 DOI: 10.1371/journal.pone.0049473
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Figure 1. Virtual reality views and experimental setup used in the experiments.
(A) Virtual tools in an uncrossed posture: the small balls on the upper and lower parts of the tools are the visual distractors. They are presented simultaneously with the vibrotactile stimuli to distract the participants, and can appear at the same positions as the vibrotactile stimuli (congruent) or at different positions (incongruent). (B) Virtual tools in a crossed posture: the big balls in the middle of the tools have two functions. First, they indicate that the CCE phase is finished; second, they indicate where to position the tools so that the distance between the tools remains constant. The big cross in the middle of the tools is the fixation point. (C, D) A cable-driven haptic device (the Da Vinci Simulator) with a large workspace was used. Participants interacted with the virtual object through the handles of the device, and their interactions were shown through a head-mounted display. To mask the noise of the vibrators and environmental noise, headphones presenting white noise were used. Participants responded to vibrotactile stimuli using a foot pedal. A chin rest was used to prevent undesired head movement. The participant gave written informed consent (according to the PLoS guidelines) for the publication of her picture.
Figure 2. Crossmodal congruency effect (CCE) with standard error in Experiment 1.
The CCE was calculated as incongruent reaction times minus congruent reaction times. White bars represent trials in which the visual stimuli were presented in the same visual hemifield as the tactile stimuli; black bars represent trials in which the visual stimuli were presented in the opposite hemifield. The bars on the left are for the uncrossed posture and the bars on the right are for the crossed posture.
Mean reaction times (RT) in milliseconds, percentage of errors (%) and inverse efficiency (IE) for Experiment 1.
| Experiment 1 | | | |
| Same Side | | | |
| Tool Posture | Congruent | Incongruent | Mean CCE |
| Crossed RT | 682.4 (26.2) | 730.8 (21.7) | 48.3 (10.9) |
| Crossed % | 1.43 (0.63) | 1.59 (0.49) | 0.16 (0.71) |
| Crossed IE | 691.6 (26.3) | 741.3 (19.2) | 49.7 (11.8) |
| Uncrossed RT | 678.8 (25.9) | 764.1 (32.5) | 85.4 (15.0) |
| Uncrossed % | 0.94 (0.49) | 4.57 (0.97) | 3.63 (1.12) |
| Uncrossed IE | 686.1 (27.3) | 804.2 (41.1) | 118.1 (20.7) |
The Congruent column gives data for congruent conditions and the Incongruent column for incongruent conditions; the Mean CCE column gives the crossmodal congruency effect (CCE; the difference between incongruent and congruent conditions). The first three rows give data for the crossed tool posture and the last three rows for the uncrossed posture. The upper panel gives data for visual stimuli presented on the same side as the tactile vibrations, the lower panel for visual stimuli presented on the opposite side. Values in parentheses are standard errors.
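For concreteness, the two derived measures in the tables can be sketched in a few lines. The CCE definition (incongruent minus congruent mean RT) is stated in the figure captions; the inverse-efficiency formula (mean RT divided by the proportion of correct responses) is the standard one and is assumed here, since the record does not spell it out. Values computed from the group means differ slightly from the published figures, presumably because the published values were computed per participant before averaging.

```python
def cce(rt_incongruent_ms, rt_congruent_ms):
    """Crossmodal congruency effect: incongruent minus congruent mean RT (ms)."""
    return rt_incongruent_ms - rt_congruent_ms

def inverse_efficiency(mean_rt_ms, error_pct):
    """Inverse efficiency: mean RT divided by the proportion of correct
    responses. Standard definition, assumed here (the record does not define IE)."""
    return mean_rt_ms / (1.0 - error_pct / 100.0)

# Experiment 1, same side, crossed posture (condition means from the table):
crossed_cce = cce(730.8, 682.4)               # ~48.4 ms; table lists 48.3 (per-participant averaging)
crossed_ie = inverse_efficiency(682.4, 1.43)  # ~692.3 ms; table lists 691.6 (same reason)
```

Note that IE rises above the raw RT as error rates grow, which is why the uncrossed incongruent cells show the largest RT-to-IE gaps.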
Figure 3. Crossmodal congruency effect (CCE) with standard error in Experiment 2.
The CCE was calculated as incongruent reaction times minus congruent reaction times. White bars represent trials in which the visual stimuli were presented in the same visual hemifield as the tactile stimuli; black bars represent trials in which the visual stimuli were presented in the opposite hemifield. The bars on the left are for the uncrossed posture and the bars on the right are for the crossed posture.
Mean reaction times (RT) in milliseconds, percentage of errors (%) and inverse efficiency (IE) for Experiment 2.
| Experiment 2 | | | |
| Same Side | | | |
| Tool Posture | Congruent | Incongruent | Mean CCE |
| Crossed RT | 601.0 (19.7) | 656.7 (22.6) | 55.7 (8.8) |
| Crossed % | 1.33 (0.55) | 3.79 (1.02) | 2.46 (1.22) |
| Crossed IE | 609.2 (20.6) | 681.8 (21.7) | 72.7 (4.5) |
| Uncrossed RT | 591.7 (18.0) | 683.5 (24.8) | 91.8 (11.1) |
| Uncrossed % | 2.63 (0.92) | 9.62 (0.99) | 6.99 (1.13) |
| Uncrossed IE | 607.7 (19.2) | 754.4 (26.4) | 146.7 (14.6) |
The Congruent column gives data for congruent conditions and the Incongruent column for incongruent conditions; the Mean CCE column gives the crossmodal congruency effect (CCE; the difference between incongruent and congruent conditions). The first three rows give data for the crossed tool posture and the last three rows for the uncrossed posture. The upper panel gives data for visual stimuli presented on the same side as the tactile vibrations, the lower panel for visual stimuli presented on the opposite side. Values in parentheses are standard errors.