Emiliano Ricciardi, Giacomo Handjaras, Daniela Bonino, Tomaso Vecchi, Luciano Fadiga, Pietro Pietrini.
Abstract
The representation of actions within the action-observation network is thought to rely on a distributed functional organization. Furthermore, recent findings indicate that the action-observation network encodes not merely the observed motor act, but rather a representation that is independent from a specific sensory modality or sensory experience. In the present study, we wished to determine to what extent this distributed and 'more abstract' representation of action is truly supramodal, i.e. shares a common coding across sensory modalities. To this aim, a pattern recognition approach was employed to analyze neural responses in sighted and congenitally blind subjects during visual and/or auditory presentation of hand-made actions. Multivoxel pattern analysis-based classifiers discriminated action from non-action stimuli across sensory conditions (visual and auditory) and experimental groups (blind and sighted). Moreover, these classifiers labeled as 'action' the pattern of neural responses evoked during actual motor execution. Interestingly, discriminative information for the action/non-action classification was located in a bilateral, but left-prevalent, network that strongly overlaps with brain regions known to form the action-observation network and the human mirror system. The ability to identify action features with a multivoxel pattern analysis-based classifier in both sighted and blind individuals, independently from the sensory modality conveying the stimuli, clearly supports the hypothesis of a supramodal, distributed functional representation of actions, mainly within the action-observation network.
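The cross-condition decoding logic described above can be sketched as follows. Everything here is an illustrative assumption, not the authors' pipeline: the data are synthetic stand-ins for voxel patterns, and scikit-learn's `LinearSVC` stands in for the paper's SVM implementation. A signal component shared across conditions plays the role of a supramodal action representation.

```python
# Hedged sketch: train a linear SVM on "voxel" patterns from one sensory
# condition, then test it on patterns from another condition.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 40, 200
shared_signal = rng.normal(size=n_voxels)   # assumed supramodal action pattern

def simulate_runs():
    """Half the trials are 'action' (label 1), half 'non-action' (label 0)."""
    labels = np.repeat([0, 1], n_trials // 2)
    patterns = rng.normal(size=(n_trials, n_voxels)) + np.outer(labels, shared_signal)
    return patterns, labels

X_visual, y_visual = simulate_runs()          # e.g. sighted subjects, video stimuli
X_visual_heldout, y_vh = simulate_runs()      # held-out runs from the same condition
X_auditory, y_auditory = simulate_runs()      # e.g. blind subjects, sound stimuli

clf = LinearSVC(C=1.0).fit(X_visual, y_visual)
acc_within = clf.score(X_visual_heldout, y_vh)   # within-condition accuracy
acc_across = clf.score(X_auditory, y_auditory)   # across-condition accuracy
print(f"within: {acc_within:.2f}, across: {acc_across:.2f}")
```

Above-chance `acc_across` in such a scheme is what licenses the 'common coding across modalities' interpretation: the decision boundary learned in one condition transfers to the other only if the discriminative pattern is shared.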
Year: 2013 PMID: 23472216 PMCID: PMC3589380 DOI: 10.1371/journal.pone.0058632
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
Accuracy of each SVM classifier in within- and across-experimental-condition evaluations.
| Group | Modality | SVM trained on visual stimuli (sighted) | SVM trained on auditory stimuli (sighted) | SVM trained on auditory stimuli (blind) |
|---|---|---|---|---|
| Sighted | Visual | 80.7% | n.s. | n.s. |
| Sighted | Auditory | n.s. | 75.7% | n.s. |
| Blind | Auditory | n.s. | n.s. | 76.2% |
| | | | | |
| Sighted | Visual | 77.1% | n.s. | n.s. |
| Sighted | Auditory | n.s. | 74.3% | n.s. |
| Blind | Auditory | n.s. | n.s. | 73.7% |
| | | | | |
| Sighted | Visual | 73.6% | 61.1% | n.s. |
| Sighted | Auditory | 59.6% | 67.1% | 57.7% |
| Blind | Auditory | 60.3% | 61.5% | 70.0% |
p < 0.005, p < 0.01, p < 0.05 at permutation test.
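The permutation test behind these thresholds can be illustrated with a minimal NumPy sketch. The trial counts, accuracy, and permutation count below are illustrative assumptions, not the paper's data or exact procedure.

```python
# Hedged sketch of a permutation test for classifier accuracy: build the
# null distribution by shuffling labels and re-scoring; the p-value is the
# fraction of permuted accuracies at least as high as the observed one.
import numpy as np

rng = np.random.default_rng(0)

def permutation_p_value(y_true, y_pred, n_perm=10_000):
    observed = np.mean(y_pred == y_true)
    null_acc = np.empty(n_perm)
    for i in range(n_perm):
        shuffled = rng.permutation(y_true)       # break the label/pattern link
        null_acc[i] = np.mean(y_pred == shuffled)
    # add-one correction keeps the estimated p-value strictly positive
    return observed, (np.sum(null_acc >= observed) + 1) / (n_perm + 1)

y_true = np.repeat([0, 1], 35)                       # 70 trials, balanced classes
y_pred = y_true.copy()
y_pred[rng.choice(70, size=14, replace=False)] ^= 1  # simulate an 80%-accurate classifier

acc, p = permutation_p_value(y_true, y_pred)
print(f"accuracy = {acc:.1%}, p = {p:.4f}")
```

Because the null distribution is centered near chance (50% for two balanced classes), accuracies in the 60–80% range observed in the tables can reach significance at these thresholds given enough trials.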
Brain regions obtained with a “knock-out” procedure to examine the degree of overlap in information between the representations of different experimental conditions/groups.
| Brain area | Hem | BA | x | y | z |
|---|---|---|---|---|---|
| Superior Frontal | R | 10 | 5 | 63 | −2 |
| Superior Frontal | L | 6 | −7 | −1 | 64 |
| Middle Frontal | R | 6 | 25 | −3 | 58 |
| Inferior Frontal | R | 44 | 45 | 9 | 28 |
| Anterior Cingulate | R | 24 | 5 | 17 | 20 |
| Postcentral | L | 3 | −27 | −33 | 42 |
| Postcentral | L | 3 | −35 | −31 | 52 |
| Superior Parietal | L | 7 | −25 | −65 | 62 |
| Inferior Parietal | R | 40 | 55 | −43 | 28 |
| Inferior Parietal | L | 40 | −43 | −33 | 46 |
| Superior Temporal | R | 38 | 41 | 7 | −18 |
| Superior Temporal | L | 22 | −45 | −9 | −6 |
| Middle Temporal | R | 21 | 55 | −35 | 4 |
| Fusiform | R | 19 | 38 | −69 | −16 |
| Fusiform | L | 37 | −45 | −61 | −18 |
| Parahippocampal | L | 28 | −28 | −6 | −20 |
| Cuneus | R | 17 | 9 | −87 | 6 |
| Middle Occipital | L | 19 | −39 | −73 | 8 |
Figure 1. Discriminative maps of the three distinct linear binary SVM classifiers separating action (red scale) from non-action (blue scale) stimuli in sighted (sounds and videos) and blind (sounds only) subjects, as obtained with an RFE algorithm.
Color intensity reflects the weights of the support vectors after transformation into Z scores. Spatially normalized volumes are projected onto a single-subject inflated pial surface template in the Talairach-Tournoux standard space. Abbreviations: ventral and dorsal premotor cortex (vPM and dPM), inferior frontal (IF) cortex, superior and middle temporal gyri (ST/MT), superior (SPL) and inferior (IPL) parietal lobules.
Figure 2. Map of the combined ‘supramodal’ SVM classifier, defined using the training data from all action (red scale) and non-action (blue scale) stimulus classes and employed in the ‘knock-out’ procedure.
Spatially normalized volumes are projected onto a single-subject inflated pial surface template in the Talairach-Tournoux standard space. Abbreviations: ventral and dorsal premotor cortex (vPM and dPM), inferior frontal (IF) cortex, superior and middle temporal gyri (ST/MT), superior (SPL) and inferior (IPL) parietal lobules.
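One plausible reading of such a ‘knock-out’ evaluation, offered here as an assumption about the general idea rather than the authors' exact pipeline, is to silence a cluster's voxels at test time and measure how much classification accuracy drops; a large drop indicates the cluster carried non-redundant discriminative information.

```python
# Hedged sketch of a knock-out style analysis with a linear SVM
# (synthetic data; cluster indices and signal strength are illustrative).
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_voxels = 60, 120
cluster = np.arange(0, 20)            # hypothetical region's voxel indices

y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_voxels))
X[:, cluster] += 1.5 * y[:, None]     # signal confined to the cluster

X_test = rng.normal(size=(n_trials, n_voxels))
X_test[:, cluster] += 1.5 * y[:, None]

clf = LinearSVC(C=1.0).fit(X, y)
full_acc = clf.score(X_test, y)

# Knock out the cluster by zeroing its voxels at test time.
X_ko = X_test.copy()
X_ko[:, cluster] = 0.0
ko_acc = clf.score(X_ko, y)
print(f"full: {full_acc:.2f}, after knock-out: {ko_acc:.2f}")
```

Comparing `full_acc` with `ko_acc` region by region yields a list of areas whose removal degrades the supramodal classifier, which is the kind of information the knock-out table above reports.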
Accuracy of each SVM classifier in recognizing motor pantomime as ‘action’.
| Group | SVM trained on visual stimuli (sighted) | SVM trained on auditory stimuli (sighted) | SVM trained on auditory stimuli (blind) |
|---|---|---|---|
| Sighted | 85.3% | 73.9% | 53.2% |
| Blind | 60% | 65% | 61.2% |
| | | | |
| Sighted | 84.3% | 71.7% | 46.4% |
| Blind | 57.5% | 63.7% | 65% |
| | | | |
| Sighted | 85.3% | 76.8% | 67.5% |
| Blind | 70% | 52.5% | 61.5% |
Visual and auditory runs were considered together.
p < 0.005, p < 0.05 at permutation test; p < 0.05 at the binomial test.