| Literature DB >> 18997869 |
Joset A. Etzel, Valeria Gazzola, Christian Keysers.
Abstract
The discovery of mirror neurons has suggested a potential neural basis for simulation and common coding theories of action perception, theories which propose that we understand other people's actions because perceiving their actions activates some of our neurons in much the same way as when we perform the actions. We propose testing this model directly in humans with functional magnetic resonance imaging (fMRI) by means of cross-modal classification. Cross-modal classification evaluates whether a classifier that has learned to separate stimuli in the sensory domain can also separate the stimuli in the motor domain. Successful classification provides support for simulation theories because it means that the fMRI signal, and presumably brain activity, is similar when perceiving and performing actions. In this paper we demonstrate the feasibility of the technique by showing that classifiers which have learned to discriminate whether a participant heard a hand or a mouth action, based on the activity patterns in the premotor cortex, can also determine, without additional training, whether the participant executed a hand or mouth action. This provides direct evidence that, while perceiving others' actions, (1) the pattern of activity in premotor voxels with sensory properties is a significant source of information regarding the nature of these actions, and (2) that this information shares a common code with motor execution.
Year: 2008 PMID: 18997869 PMCID: PMC2577733 DOI: 10.1371/journal.pone.0003690
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
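The analysis described in the abstract, cross-modal classification, trains a classifier on activity patterns recorded while participants listened to hand or mouth actions and then tests it, without retraining, on patterns recorded while they executed those actions. Below is a minimal sketch with a linear SVM in scikit-learn on synthetic stand-in data; the array names, trial counts, and classifier settings are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.svm import SVC

# Illustrative cross-modal classification sketch (not the authors' code).
# X_listen:  trials x voxels patterns from the listening (sensory) condition
# X_execute: trials x voxels patterns from the execution (motor) condition
# Labels: 0 = hand action, 1 = mouth action
rng = np.random.default_rng(0)
n_voxels = 396                                  # e.g. left premotor ROI size (Table 1)
X_listen = rng.standard_normal((40, n_voxels))  # stand-in data
y_listen = np.repeat([0, 1], 20)
X_execute = rng.standard_normal((40, n_voxels))
y_execute = np.repeat([0, 1], 20)

# Train on the sensory modality only ...
clf = SVC(kernel="linear", C=1.0)
clf.fit(X_listen, y_listen)

# ... then test on the motor modality without retraining.
# Above-chance accuracy indicates a pattern code shared between perception and execution.
cross_modal_accuracy = clf.score(X_execute, y_execute)
print(f"cross-modal accuracy: {cross_modal_accuracy:.3f}")
```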
Naming scheme and number of 4×4×4 mm voxels in each ROI and brain area.
| ROI or Area | Anatomy Toolbox areas | Side | Abbreviation | Voxels (total) | Voxels (without somatotopic) |
|---|---|---|---|---|---|
| premotor cortex | BA 44, BA 6 | left | preM L | 396 | 386 |
| premotor cortex | BA 44, BA 6 | right | preM R | 385 | 376 |
| auditory cortex | TE 1.0, TE 1.1, TE 1.2 | left | aud L | 55 | |
| auditory cortex | TE 1.0, TE 1.1, TE 1.2 | right | aud R | 61 | |
| secondary somatosensory cortex | OP1, OP2, OP3, OP4 | left | S2 L | 159 | |
| secondary somatosensory cortex | OP1, OP2, OP3, OP4 | right | S2 R | 162 | |
| primary motor cortex | BA 4a, BA 4p | left | M1 L | 183 | |
| primary motor cortex | BA 4a, BA 4p | right | M1 R | 147 | |
| primary somatosensory cortex | BA 1, BA 2, BA 3a, BA 3b | left | S1 L | 262 | |
| primary somatosensory cortex | BA 1, BA 2, BA 3a, BA 3b | right | S1 R | 351 | |
| | CM/LB/SF; BA 17, BA 18, hOC5 | left | | 391 | |
| | CM/LB/SF; BA 17, BA 18, hOC5 | right | | 348 | |
The voxel counts given are those used in the analyses (the voxels remaining after removing all voxels with zero variance across volumes in any subject), both for each full ROI and after removing somatotopic voxels; see text for details. The "Anatomy Toolbox areas" column lists the regions selected to make up each ROI or area, using the names in the probabilistic cytoarchitectonic maps from the SPM Anatomy Toolbox [30]. See Figure S1 and Figure S2 for an illustration of these ROIs.
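The exclusion step mentioned above, dropping every voxel whose signal has zero variance across volumes in any subject, is simple to express. A hedged sketch, assuming each subject's ROI data is available as a volumes x voxels matrix with voxels in a common order; the function and variable names are illustrative.

```python
import numpy as np

def nonzero_variance_mask(per_subject_data):
    """Keep only voxels whose signal varies across volumes in every subject.

    per_subject_data: list of arrays, one per subject, each of shape
    (n_volumes, n_voxels), with voxels in the same order for all subjects.
    """
    masks = [subject.var(axis=0) > 0 for subject in per_subject_data]
    return np.logical_and.reduce(masks)

# Usage (illustrative): keep = nonzero_variance_mask(all_subjects); X = X[:, keep]
```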
Mean cross-modal classification accuracy and p-values for each ROI, determined by permutation testing and t-tests, both for the entire ROI and after removing the voxels identified as somatotopic; see text for details.
| ROI | mean (all voxels) | s.e.m. (all) | perm. p (all) | t-test p (all) | mean (w/o somatotopic) | s.e.m. (w/o som.) | perm. p (w/o som.) | t-test p (w/o som.) |
|---|---|---|---|---|---|---|---|---|
| preM L | 0.5449 | 0.0232 | 0.005 | 0.0358 | 0.543 | 0.023 | 0.0040 | 0.0406 |
| preM R | 0.5664 | 0.0197 | 0.001 | 0.0021 | 0.5586 | 0.0215 | 0.0030 | 0.0078 |
| M1 L | 0.4727 | 0.0299 | 0.9441 | 0.8125 | | | | |
| M1 R | 0.4863 | 0.0331 | 0.7343 | 0.6571 | | | | |
| S1 L | 0.5352 | 0.0224 | 0.043 | 0.069 | | | | |
| S1 R | 0.5059 | 0.0225 | 0.3866 | 0.3991 | | | | |
| S2 L | 0.5176 | 0.0213 | 0.1788 | 0.2115 | | | | |
| S2 R | 0.5156 | 0.0278 | 0.2517 | 0.2912 | | | | |
| aud L | 0.5508 | 0.0238 | 0.014 | 0.0251 | | | | |
| aud R | 0.5312 | 0.0278 | 0.0679 | 0.1394 | | | | |
Significance threshold: p<0.0050 (Bonferroni correction of 0.05 for 10 ROIs).
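The permutation p-values above compare each observed accuracy against a null distribution built by retraining on permuted labels; the exact relabeling scheme and number of permutations are given in the paper's Methods, which this record does not reproduce. A minimal sketch of one common label-permutation test for cross-modal accuracy, assuming 1000 permutations and illustrative arrays like those in the earlier sketch.

```python
import numpy as np
from sklearn.svm import SVC

def permutation_p_value(X_train, y_train, X_test, y_test, n_perm=1000, seed=0):
    """Label-permutation test for cross-modal classification accuracy.

    Shuffles the training labels, retrains, and counts how often the
    permuted accuracy is at least as high as the observed accuracy.
    """
    rng = np.random.default_rng(seed)
    observed = SVC(kernel="linear").fit(X_train, y_train).score(X_test, y_test)
    null = np.empty(n_perm)
    for i in range(n_perm):
        y_perm = rng.permutation(y_train)
        null[i] = SVC(kernel="linear").fit(X_train, y_perm).score(X_test, y_test)
    # Add-one correction so the p-value is never exactly zero.
    p = (np.sum(null >= observed) + 1) / (n_perm + 1)
    return observed, p
```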
Mean uni-modal (train and test on listening data) classification accuracy.
| ROI | mean | s.e.m. | perm. p | t-test p |
|---|---|---|---|---|
| preM L | 0.5729 | 0.0478 | 0.002 | 0.0739 |
| preM R | 0.5729 | 0.0382 | 0.002 | 0.0378 |
| M1 L | 0.5521 | 0.0364 | 0.0176 | 0.0864 |
| M1 R | 0.5069 | 0.0408 | 0.1037 | 0.4336 |
| S1 L | 0.6181 | 0.0464 | 0.002 | 0.0113 |
| S1 R | 0.5174 | 0.0453 | 0.0607 | 0.3534 |
| S2 L | 0.5868 | 0.0336 | 0.002 | 0.0104 |
| S2 R | 0.625 | 0.0435 | 0.002 | 0.0058 |
| aud L | 0.6389 | 0.0296 | 0.002 | 0.0001 |
| aud R | 0.5938 | 0.0294 | 0.002 | 0.0031 |
Significance threshold: p<0.0050 (Bonferroni correction of 0.05 for 10 ROIs).
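The uni-modal results train and test within the listening data, so accuracy has to be estimated on held-out trials. A brief cross-validation sketch with scikit-learn, again on synthetic stand-in data; the fold structure here is an assumption, and the paper's own partitioning of runs and trials (described in its Methods) may differ.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Stand-in listening data: 40 trials, 55 voxels (e.g. left auditory ROI, Table 1)
rng = np.random.default_rng(1)
X_listen = rng.standard_normal((40, 55))
y_listen = np.repeat([0, 1], 20)       # 0 = hand action, 1 = mouth action

# Train and test within the same (listening) modality using held-out folds.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(SVC(kernel="linear"), X_listen, y_listen, cv=cv)
print(f"uni-modal accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```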
Mean uni-modal (train and test on listening data) and cross-modal (train on listening, test on execution) classification accuracy and p-values of the other areas.
| Area | Analysis | mean | s.e.m. | perm. p | t-test p |
|---|---|---|---|---|---|
| | uni-modal | 0.5382 | 0.0426 | 0.0254 | 0.1923 |
| | cross-modal | 0.5332 | 0.0201 | 0.0549 | 0.0594 |
| | uni-modal | 0.4618 | 0.0242 | 0.3366 | 0.9325 |
| | cross-modal | 0.5332 | 0.0227 | 0.0559 | 0.0823 |