| Literature DB >> 35221891 |
Zehao Huang1,2,3, Shimeng Yang1,2,3, Licheng Xue4,5, Hang Yang6, Yating Lv2,3, Jing Zhao1,2.
Abstract
The brain generates predictions about visual word forms to support efficient reading. The "interactive account" suggests that predictions in visual word processing can be strategic or automatic (non-strategic). Strategic predictions have frequently been demonstrated in studies that manipulated task demands; however, few studies have investigated automatic predictions. Orthographic knowledge varies greatly among individuals, offering a unique opportunity to reveal automatic predictions. The present study grouped participants by level of orthographic knowledge and recorded EEG during a non-linguistic color matching task. The visual word-selective N170 response was much stronger to pseudo characters than to real characters in participants with low orthographic knowledge, but not in those with high orthographic knowledge. Previous work on predictive coding has shown that the N170 is a good index of prediction errors, i.e., mismatches between predictions and visual inputs. The present findings provide unambiguous evidence that automatic predictions modulate the early stage of visual word processing.
Keywords: EEG; N170; color matching task; orthographic knowledge; predictive coding; visual word processing
Year: 2022 PMID: 35221891 PMCID: PMC8864072 DOI: 10.3389/fnins.2021.809574
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
FIGURE 1. (A) Sample real (left) and pseudo (right) characters. (B) The lexical decision task (left panel) and the one-back color matching task (right panel). The lexical decision task was used to group the participants, whereas the color matching task was used to examine the neural responses associated with automatic predictions.
FIGURE 2. (A) The topographic maps evoked by real and pseudo characters at 150, 170, and 190 ms following stimulus presentation. (B) The time windows for P1 and N170 were selected with the GFP method; see text for details.
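The GFP (global field power) criterion mentioned in Figure 2B is, in standard EEG practice, the spatial standard deviation of the scalp potentials at each time point; component time windows are then chosen around GFP peaks. A minimal sketch of that computation (the array shapes and the half-peak window rule here are illustrative assumptions, not details reported in the paper):

```python
import numpy as np

def global_field_power(erp):
    """Global field power: the standard deviation across channels at each
    time point, for an ERP array of shape (n_channels, n_times)."""
    return erp.std(axis=0)

# Toy example: 2 channels, 2 time points (values in microvolts).
erp = np.array([[1.0, 3.0],
                [3.0, 1.0]])
gfp = global_field_power(erp)  # → array([1., 1.])

# Hypothetical window selection: keep time points where GFP exceeds
# half of its maximum (one common, but not universal, convention).
window_mask = gfp >= 0.5 * gfp.max()
```

In this sketch the component window would then be the contiguous run of above-threshold samples nearest the expected latency (e.g., ~170 ms for the N170).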
FIGURE 3. (A) The rate of reporting a stimulus as a real character in the high and low orthographic knowledge groups in the lexical decision task. (B) Same as (A) but showing data points from individual participants. ***p < 0.001.
Hit rate (%) and reaction time (ms) to targets in the color matching task.
| Orthographic knowledge | Hit rate (%), Real | Hit rate (%), Pseudo | RT (ms), Real | RT (ms), Pseudo |
|---|---|---|---|---|
| High | 94 (2.12) | 95 (1.32) | 506 (20.31) | 499 (18.44) |
| Low | 92 (2.39) | 93 (1.90) | 555 (20.73) | 557 (19.58) |
Numbers in the parentheses are standard errors of the mean.
FIGURE 4. (A) The ERP waveforms in response to real and pseudo characters at the P7 channel. (B) Bar plots comparing the N170 amplitudes for pseudo and real characters in the high and low orthographic knowledge groups at the P7 channel. ***p < 0.001.
P1 and N170 latencies (ms) for real and pseudo characters.
| Orthographic knowledge | P1 latency (ms), Real | P1 latency (ms), Pseudo | N170 latency (ms), Real | N170 latency (ms), Pseudo |
|---|---|---|---|---|
| High | 81 (4.85) | 94 (3.93) | 173 (4.35) | 177 (3.69) |
| Low | 88 (4.34) | 101 (3.36) | 174 (2.21) | 173 (2.43) |
Standard errors of the mean are given in parentheses.