Dan Nemrodov, Matthias Niemeier, Ashutosh Patel, Adrian Nestor.
Abstract
Uncovering the neural dynamics of facial identity processing, along with its representational basis, outlines a major endeavor in the study of visual processing. To this end, here, we record human electroencephalography (EEG) data associated with viewing face stimuli; then, we exploit spatiotemporal EEG information to determine the neural correlates of facial identity representations and to reconstruct the appearance of the corresponding stimuli. Our findings indicate that multiple temporal intervals support facial identity classification, face space estimation, visual feature extraction, and image reconstruction. In particular, we note that both classification and reconstruction accuracy peak in the proximity of the N170 component. Further, aggregate data from a larger interval (50–650 ms after stimulus onset) support robust reconstruction results, consistent with the availability of distinct visual information over time. Thus, theoretically, our findings shed light on the time course of face processing while, methodologically, they demonstrate the feasibility of EEG-based image reconstruction.
Keywords: ERP; N170; face space; image reconstruction; pattern analysis; spatiotemporal dynamics
Year: 2018 PMID: 29492452 PMCID: PMC5829556 DOI: 10.1523/ENEURO.0358-17.2018
Source DB: PubMed Journal: eNeuro ISSN: 2373-2822
Figure 1. Grand-averaged ERPs across left hemisphere electrodes (P5, P7, P9, PO3, PO7, and O1) and right hemisphere electrodes (P6, P8, P10, PO4, PO8, and O2) for 54 facial identities (averaged across expressions). Head maps show voltage distributions at the P1, N170, and N250 components.
Statistical table
| Analysis number | Figure | Description | Data structure | Type of test | Effect | p value | Power/CI |
|---|---|---|---|---|---|---|---|
| a |  | P1 component | Assumed normal | Repeated measures ANOVA | Hemisphere | 0.117 | 0.343 |
| b |  | P1 component | Assumed normal | Repeated measures ANOVA | Identity | 0.39 | 0.447 |
| c |  | P1 component | Assumed normal | Repeated measures ANOVA | Identity × hemisphere | 0.551 | 0.311 |
| d |  | N170 component | Assumed normal | Repeated measures ANOVA | Hemisphere | 0.146 | 0.299 |
| e |  | N170 component | Assumed normal | Repeated measures ANOVA | Identity | 0.513 | 0.373 |
| f |  | N170 component | Assumed normal | Repeated measures ANOVA | Identity × hemisphere | 0.307 | 0.532 |
| g |  | N250 component | Assumed normal | Repeated measures ANOVA | Hemisphere | 0.171 | 0.269 |
| h |  | N250 component | Assumed normal | Repeated measures ANOVA | Identity | 0.001 | 0.980 |
| i |  | N250 component | Assumed normal | Repeated measures ANOVA | Identity × hemisphere | 0.07 | 0.560 |
| j |  | N250 component | Assumed normal | Repeated measures ANOVA | Identity in LH | 0.009 | 0.926 |
| k |  | N250 component | Assumed normal | Repeated measures ANOVA | Identity in RH | 0.004 | 0.943 |
| l |  | Group-based discrimination | Normality not assumed | Permutation test | Across-expression | FDR-corrected |  |
| m |  | Representative participant discrimination | Normality not assumed | Permutation test | Across-expression | FDR-corrected |  |
| n |  | Group-based discrimination | Normality not assumed | Permutation test | Across-expression/neutral | 0.001 | 95% CI: 47.1–51.3 |
| o |  | Group-based discrimination | Normality not assumed | Permutation test | Across-expression/happy | 0.001 | 95% CI: 47.2–50.9 |
| p |  | Group-based discrimination | Normality not assumed | Permutation test | Within-expression/neutral | 0.001 | 95% CI: 45.9–51.8 |
| q |  | Group-based discrimination | Normality not assumed | Permutation test | Within-expression/happy | 0.001 | 95% CI: 45.9–52.1 |
| r |  | Single-participant-based discrimination | Assumed normal | Two-tailed | Across-expression/neutral | 0.001 | 95% CI: 53.1–57 |
| s |  | Single-participant-based discrimination | Assumed normal | Two-tailed | Across-expression/happy | 0.001 | 95% CI: 53.2–57.2 |
| t |  | Single-participant-based discrimination | Assumed normal | Two-tailed | Within-expression/neutral | 0.001 | 95% CI: 55.9–60.2 |
| u |  | Single-participant-based discrimination | Assumed normal | Two-tailed | Within-expression/happy | 0.001 | 95% CI: 54.8–60.4 |
| v | n/a | Single-participant-based discrimination | Assumed normal | Repeated measures ANOVA | Discrimination type | <0.001 | >0.999 |
| w | n/a | Single-participant-based discrimination | Assumed normal | Repeated measures ANOVA | Expression | 0.466 | 0.107 |
| x | n/a | Single-participant-based discrimination | Assumed normal | Repeated measures ANOVA | Discrimination type × expression | 0.211 | 0.230 |
| y | n/a | Single-participant-based discrimination within and across databases | Assumed normal | Repeated measures ANOVA | Discrimination type | <0.001 | >0.999 |
| z | n/a | Single-participant-based discrimination within and across databases | Assumed normal | Repeated measures ANOVA | Pairs type | <0.001 | >0.999 |
| aa | n/a | Single-participant-based discrimination within and across databases | Assumed normal | Repeated measures ANOVA | Discrimination type × pairs type | 0.033 | 0.565 |
| ab | n/a | Single-participant-based discrimination within and across databases | Assumed normal | Two-tailed | Across-within database for within-expression versus across-within database for across-expression | 0.412 | 95% CI: –0.1–1.5 |
| ac | n/a | Single-participant-based discrimination within and across databases | Assumed normal | Two-tailed | Within-expression, within-database discrimination | <0.001 | 95% CI: 54.6– |
| ad | n/a | Single-participant-based discrimination within and across databases | Assumed normal | Two-tailed | Within-expression, across-database discrimination | <0.001 | 95% CI: 56.4–61 |
| ae | n/a | Single-participant-based discrimination within and across databases | Assumed normal | Two-tailed | Across-expression, within-database discrimination | <0.001 | 95% CI: 52.8–55 |
| af | n/a | Single-participant-based discrimination within and across databases | Assumed normal | Two-tailed | Across-expression, across-database discrimination | <0.001 | 95% CI: 54–57.7 |
| ag | n/a | Temporally cumulative analysis | Assumed normal | Pearson’s correlation | EEG-based discrimination/behavioral discrimination | <0.001 | 0.436 |
| ah | 4 | Temporal correlation | Assumed normal | Pearson’s correlation | Across-expression discrimination/behavioral discrimination | FDR-corrected |  |
| ai | n/a | Temporally cumulative analysis | Assumed normal | Pearson’s correlation | Across-expression discrimination/CFMT | 0.219 | 0.237 |
| aj | n/a | Temporally cumulative analysis | Assumed normal | Pearson’s correlation | Across-expression discrimination/VVIQ-2 | 0.676 | 0.05 |
| ak |  | Face space fit | Normality not assumed | Permutation test | Happy–neutral similarity | <0.001 | 95% CI: 0.738–0.798 |
| al |  | CIM | Normality not assumed | Permutation test | Neutral dimension 1; luminance | FDR-corrected |  |
| am |  | CIM | Normality not assumed | Permutation test | Neutral dimension 1; red-green | FDR-corrected |  |
| an |  | CIM | Normality not assumed | Permutation test | Neutral dimension 1; yellow-blue | FDR-corrected |  |
| ao |  | CIM | Normality not assumed | Permutation test | Neutral dimension 2; luminance | FDR-corrected |  |
| ap |  | CIM | Normality not assumed | Permutation test | Neutral dimension 2; red-green | N/A |  |
| aq |  | CIM | Normality not assumed | Permutation test | Neutral dimension 2; yellow-blue | FDR-corrected |  |
| ar |  | CIM | Normality not assumed | Permutation test | Happy dimension 1; luminance | FDR-corrected |  |
| as |  | CIM | Normality not assumed | Permutation test | Happy dimension 1; red-green | FDR-corrected |  |
| at |  | CIM | Normality not assumed | Permutation test | Happy dimension 1; yellow-blue | FDR-corrected |  |
| au |  | CIM | Normality not assumed | Permutation test | Happy dimension 2; luminance | N/A |  |
| av |  | CIM | Normality not assumed | Permutation test | Happy dimension 2; red-green | FDR-corrected |  |
| aw |  | CIM | Normality not assumed | Permutation test | Happy dimension 2; yellow-blue | FDR-corrected |  |
| ax |  | Temporal reconstruction accuracy | Normality not assumed | Permutation test | Neutral | FDR-corrected |  |
| ay |  | Temporal reconstruction accuracy | Normality not assumed | Permutation test | Happy | FDR-corrected |  |
| az |  | Reconstruction accuracy (image-based) | Normality not assumed | Permutation test | Neutral | 0.001 | 95% CI: 42.3–57.8 |
| ba |  | Reconstruction accuracy (image-based) | Normality not assumed | Permutation test | Happy | 0.001 | 95% CI: 42.1–58.1 |
| bb |  | Reconstruction accuracy (experimental-based) | Assumed normal | Two-tailed | Neutral | 0.001 | 95% CI: 55.6–62.6 |
| bc |  | Reconstruction accuracy (experimental-based) | Assumed normal | Two-tailed | Happy | 0.001 | 95% CI: 52.4–59.2 |
| bd | n/a | Correlation between experimental and image-based accuracies | Assumed normal | Pearson’s correlation | Neutral | 0.001 | 0.912 |
| be | n/a | Correlation between experimental and image-based accuracies | Assumed normal | Pearson’s correlation | Happy | 0.002 | 0.898 |
| bf | n/a | Correlation between reconstruction and discrimination | Assumed normal | Pearson’s correlation | Averaged across expressions | <0.001 | >0.999 |
| bg | n/a | Single-participant-based reconstruction accuracy (image-based) | Assumed normal | Two-tailed | Neutral | 0.027 | 0.676 |
| bh | n/a | Single-participant-based reconstruction accuracy (image-based) | Assumed normal | Two-tailed | Happy | 0.045 | 0.580 |
| bi | n/a | Correlation between single-participant-based reconstruction and discrimination | Assumed normal | Pearson’s correlation | Neutral | <0.001 | 0.974 |
| bj | n/a | Correlation between single-participant-based reconstruction and discrimination | Assumed normal | Pearson’s correlation | Happy | <0.001 | 0.980 |
Figure 2. The time course of EEG-based classification accuracy for across- and within-expression discrimination of facial identity, shown for group-based ERP data and for a single representative participant. Classification was conducted across consecutive 10-ms window patterns over 12 occipitotemporal electrodes. Both types of analysis exhibited above-chance discrimination across extensive temporal intervals (permutation test; FDR correction across time, q < 0.01); shaded areas mark intervals of better-than-chance discrimination for across-expression classification.
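The windowed classification in Figure 2 can be sketched as follows. This is a minimal illustration on synthetic data: the trial counts, the placement of the identity signal, and the leave-one-out nearest-centroid classifier are assumptions made for the sketch, standing in for the paper's pairwise classification of ERP patterns.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_electrodes, n_windows = 20, 12, 70  # 12 occipitotemporal electrodes, 10-ms windows
labels = np.repeat([0, 1], n_trials // 2)       # two facial identities

# Synthetic ERP patterns: an identity-specific signal only in windows 15-25
# (loosely standing in for an N170-latency interval); everything else is noise.
X = rng.normal(0.0, 1.0, (n_trials, n_electrodes, n_windows))
signal = rng.normal(0.0, 1.0, n_electrodes)
X[labels == 1, :, 15:26] += 3.0 * signal[:, None]

def window_accuracy(X, labels, t):
    """Leave-one-out nearest-centroid classification for one time window."""
    n = len(labels)
    correct = 0
    for i in range(n):
        keep = np.arange(n) != i
        c0 = X[keep & (labels == 0), :, t].mean(axis=0)
        c1 = X[keep & (labels == 1), :, t].mean(axis=0)
        pred = int(np.linalg.norm(X[i, :, t] - c1) < np.linalg.norm(X[i, :, t] - c0))
        correct += int(pred == labels[i])
    return correct / n

accuracy = np.array([window_accuracy(X, labels, t) for t in range(n_windows)])
# Accuracy hovers near chance (0.5) outside the signal interval and rises within it.
```

Plotting `accuracy` against window onset would give a time course analogous to the figure, with one curve per identity pair.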
Figure 3. EEG-based classification accuracy for across- and within-expression discrimination of facial identity with temporally cumulative data (50–650 ms after stimulus onset). Accuracy for neutral and happy faces is shown separately for group-based ERP data and for single-participant data (i.e., pattern classification was conducted individually for each participant and its results then averaged across participants). The plots display the results of permutation tests (red solid and dashed lines indicate average accuracy and 99% confidence intervals estimated with 10³ permutations) and the distribution of single-participant data (green and purple solid lines indicate medians, boxes represent 1st and 3rd quartiles, whiskers represent minimum and maximum accuracy values, points represent individual participants’ values, and red solid lines indicate chance-level discrimination).
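The label-permutation test behind Figure 3 can be sketched on synthetic data. The single-feature trials and nearest-centroid classifier below are assumptions for the sketch (and only 200 permutations are drawn rather than the paper's 10³, to keep it fast); what it shows is the general recipe: shuffle labels, rebuild a null distribution of accuracies, and compare the observed accuracy against it.

```python
import numpy as np

rng = np.random.default_rng(1)

def loo_accuracy(x, labels):
    """Leave-one-out nearest-centroid accuracy for single-feature trials."""
    n = len(labels)
    correct = 0
    for i in range(n):
        keep = np.arange(n) != i
        c0 = x[keep & (labels == 0)].mean()
        c1 = x[keep & (labels == 1)].mean()
        pred = int(abs(x[i] - c1) < abs(x[i] - c0))
        correct += int(pred == labels[i])
    return correct / n

labels = np.repeat([0, 1], 10)
x = 4.0 * labels + rng.normal(0.0, 1.0, 20)   # well-separated synthetic feature

observed = loo_accuracy(x, labels)
null = np.array([loo_accuracy(x, rng.permutation(labels)) for _ in range(200)])
p_value = (1 + np.sum(null >= observed)) / (1 + len(null))
ci = np.quantile(null, [0.005, 0.995])         # 99% interval of the null, as in the figure
```

The `(1 + k) / (1 + N)` form keeps the permutation p-value strictly positive, which is the standard convention for resampling tests.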
Figure 4. Correlation of EEG- and behavioral-based estimates of pairwise face similarity. EEG-based estimates are derived from across-expression discrimination of facial identity for consecutive 10-ms windows of group-based data. Multiple intervals, marked by shaded areas, exhibit significant levels of correlation (permutation test; FDR correction across time, q < 0.01).
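The FDR correction across time windows used here (and in Figures 2, 6, and 7) is, in its standard form, the Benjamini–Hochberg procedure. A minimal implementation, with made-up example p-values:

```python
import numpy as np

def fdr_bh(p_values, q=0.01):
    """Benjamini-Hochberg FDR: return a boolean mask of rejected (significant) tests."""
    p = np.asarray(p_values, dtype=float)
    m = p.size
    order = np.argsort(p)
    thresholds = q * np.arange(1, m + 1) / m   # q * i/m for the i-th smallest p-value
    below = p[order] <= thresholds
    rejected = np.zeros(m, dtype=bool)
    if below.any():
        k = np.max(np.nonzero(below)[0])       # largest i with p_(i) <= q * i/m
        rejected[order[: k + 1]] = True
    return rejected

# Hypothetical per-window p-values across time; q = 0.01 as in the caption
p_per_window = [0.001, 0.002, 0.004, 0.03, 0.2, 0.5]
mask = fdr_bh(p_per_window, q=0.01)
```

Windows where `mask` is true would correspond to the shaded intervals in the figure.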
Figure 5. Neutral and happy face space estimates along with their fit (after Procrustes alignment). Estimates were derived through MDS analysis of similarity matrices based on within-expression face discrimination of group-based temporally cumulative data. The two face space estimates exhibit a similar topography, as shown by their visualization across multiple dimensions (red and green circles indicate neutral and happy faces, respectively; solid lines connect face images with the same identity, with line thickness inversely proportional to distance; the first four dimensions shown here account for 40% and 41% of the variance for the neutral and happy face spaces, respectively) and by the badness of fit (SSE) for the two spaces compared to their permutation-based counterparts (average fits and 95% confidence intervals estimated with 10³ permutations).
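The MDS-plus-Procrustes pipeline can be sketched with classical (Torgerson) MDS and an orthogonal Procrustes fit. The hypothetical 2-D "face space" below is an assumption for the sketch; the SSE returned by `procrustes_sse` plays the role of the badness-of-fit measure in the caption.

```python
import numpy as np

def classical_mds(D, k=2):
    """Torgerson's classical MDS: embed a distance matrix into k dimensions."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J               # double-centered squared distances
    vals, vecs = np.linalg.eigh(B)
    top = np.argsort(vals)[::-1][:k]
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

def procrustes_sse(X, Y):
    """Align Y to X (translation, scale, rotation/reflection); return SSE badness of fit."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Xc = Xc / np.linalg.norm(Xc)
    Yc = Yc / np.linalg.norm(Yc)
    U, s, Vt = np.linalg.svd(Yc.T @ Xc)
    R = U @ Vt                                 # optimal rotation/reflection
    scale = s.sum()                            # optimal scaling
    return float(np.sum((Xc - scale * (Yc @ R)) ** 2))

rng = np.random.default_rng(2)
pts = rng.normal(0.0, 1.0, (54, 2))            # hypothetical 2-D face space, 54 identities
D = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
embedding = classical_mds(D, k=2)
sse = procrustes_sse(pts, embedding)           # near zero: MDS recovers the configuration
```

Because Procrustes alignment removes translation, scale, and rotation/reflection, the SSE is invariant to those transformations, which is what makes it a fair comparison between two independently estimated face spaces.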
Figure 6. Examples of CIMs extracted from EEG-based face space constructs for neutral and happy faces. Pairs of images show raw CIMs (odd columns) and their analysis (even columns) with a pixelwise permutation-based test (FDR-corrected across pixels; q < 0.05). Bright/dark, red/green, and yellow/blue regions in the analyzed CIMs mark areas of the face that are brighter (L*), redder (a*), or more yellow (b*) than chance in CIEL*a*b* space. Results are shown separately for the first and fourth dimensions of face spaces derived from group-based temporally cumulative data.
Figure 7. Reconstruction results for neutral and happy face images across consecutive 10-ms windows of group-based data. Examples of face stimuli are shown along with their corresponding reconstructions at two different time points (numbers in the upper left corner indicate image-based estimates of reconstruction accuracy), together with the time course of reconstruction accuracy. Both neutral and happy face images exhibit above-chance reconstruction accuracy across multiple temporal intervals (permutation test; FDR correction across time, q < 0.05; shaded areas mark intervals of better-than-chance reconstruction for neutral faces). Reconstruction accuracy is maximized in the vicinity of the N170 component (Fig. 1) and of the discrimination peak found with pattern classification (Fig. 2).
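The reconstruction scheme can be sketched as a linear combination of visual features: an average face plus a sum of feature images (CIMs) weighted by a face's coordinates in the estimated face space. Everything below is hypothetical synthetic data (coordinates, CIMs, pixel counts), not the authors' fitted quantities; `image_based_accuracy` mirrors the idea of scoring a reconstruction by how often it is closer to its target stimulus than to the other stimuli.

```python
import numpy as np

rng = np.random.default_rng(3)
n_faces, n_dims, n_pixels = 54, 20, 32 * 32

# Hypothetical face space coordinates and one CIM (feature image) per dimension
coords = rng.normal(0.0, 1.0, (n_faces, n_dims))
cims = rng.normal(0.0, 1.0, (n_dims, n_pixels))
mean_face = rng.normal(0.0, 1.0, n_pixels)

# Synthetic "stimuli" generated from the same linear model plus pixel noise
stimuli = mean_face + coords @ cims + rng.normal(0.0, 0.1, (n_faces, n_pixels))

def reconstruct(face_coords):
    """Linear reconstruction: average face plus a coordinate-weighted sum of CIMs."""
    return mean_face + face_coords @ cims

def image_based_accuracy(recon, target_idx):
    """Fraction of non-target stimuli that lie farther from the reconstruction
    than the target stimulus does (chance = 0.5)."""
    dists = np.linalg.norm(stimuli - recon, axis=1)
    others = np.delete(dists, target_idx)
    return float(np.mean(others > dists[target_idx]))

recon0 = reconstruct(coords[0])
acc0 = image_based_accuracy(recon0, 0)
```

With noise-free coordinates, the reconstruction is nearly identical to its target, so the accuracy approaches 1; EEG-derived coordinates are far noisier, which is why the reported accuracies sit modestly above chance.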
Figure 8. Reconstruction results for neutral and happy faces relying on temporally cumulative group-based data. Examples of face stimuli are shown along with their corresponding reconstructions (numbers in the upper left corner indicate image-based estimates of reconstruction accuracy; numbers in the upper right indicate experimental-based accuracy), average image-based reconstruction accuracy (red solid and dashed lines indicate average accuracy and 95% confidence intervals estimated with 10³ permutations), and average experimental-based reconstruction accuracy (green and purple solid lines indicate medians, boxes represent 1st and 3rd quartiles, whiskers represent minimum and maximum accuracy values, points represent individual participants’ values, and red solid lines indicate chance-level reconstruction).