Monica Maranesi, Luca Bonini, Leonardo Fogassi.
Abstract
The perception of objects does not rely only on visual brain areas, but also involves cortical motor regions. In particular, different parietal and premotor areas host neurons discharging during both object observation and grasping. Most of these cells show similar visual and motor selectivity for a specific object (or set of objects), suggesting that they might play a crucial role in representing the "potential motor act" afforded by the object. The existence of such a mechanism for the visuomotor transformation of an object's physical properties into the most appropriate motor plan for interacting with it has been convincingly demonstrated in humans as well. Interestingly, human studies have shown that visually presented objects can automatically trigger the representation of an action provided that they are located within the observer's reaching space (peripersonal space). The "affordance effect" also occurs when the presented object is outside the observer's peripersonal space, but inside the peripersonal space of an observed agent. These findings have recently received direct support from single-neuron studies in monkeys, indicating that space-constrained processing of objects in the ventral premotor cortex might be relevant for representing objects as potential targets for one's own or others' actions.
Keywords: grasping; perception; sensorimotor transformation; space; visual streams
Year: 2014 PMID: 24987381 PMCID: PMC4060298 DOI: 10.3389/fpsyg.2014.00538
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
Figure 1(A) Box and apparatus (seen from the monkey's point of view) set up for carrying out the visuomotor task (VMT) and the observation tasks in the monkey's extrapersonal (OTe) and peripersonal (OTp) space. (B) Task phases of the Action and Fixation conditions. Each trial started when the monkey had its hand in the starting position. A fixation point was presented, and the monkey was required to fixate it for the entire duration of the trial. One of two cue sounds was then presented: a high tone, associated with action trials, and a low tone, associated with fixation trials. After 0.8 s the lower sector of the box was illuminated and one of the three objects became visible. Then, after a variable time lag (0.8–1.2 s), the sound ceased (go/no-go signal) and the monkey either reached, grasped, and pulled the object (Action condition) or remained still for 1.2 s (Fixation condition) in order to receive the reward. The sequence of events and temporal constraints of the OTe and OTp were the same as in the VMT, and the monkey simply had to maintain fixation in order to obtain the reward. (C) Examples of canonical-mirror neurons recorded in all the task contexts. On the left, a schematic view of the experimental paradigm. Each panel shows, from top to bottom, rastergrams and the spike density function. The gap in the rastergrams and histograms indicates that the activity on its left side is aligned on object presentation (first dashed black vertical line), while that on its right side is aligned on pulling onset (second dashed black vertical line) of the same trial. The gray shaded areas indicate the time windows used for statistical analysis of the neuronal response to object presentation (left) and grasping (right). Markers: dark green, cue sound onset; light green, cue sound offset (go signal); orange, detachment of the hand from the starting position (reaching onset); red, reward delivery at the end of the trial.
Figure 2(A) Example of a canonical neuron recorded during an additional control experiment in which the object was presented behind a transparent plastic barrier. Note that the response during object presentation in the VMT was abolished by the interposition of the barrier. Only the alignment and time window related to object presentation are shown. Other conventions as in Figure 1. (B) Time course and intensity of the population activity of canonical-mirror and canonical neurons relative to the preferred (red) and non-preferred (blue) target object. For each neuron, the preferred/non-preferred objects are those triggering the strongest/weakest response during grasping execution. The activity is aligned on light onset during the different tasks and conditions.