| Literature DB >> 31176833 |
Philipp Ludersdorfer, Cathy J Price, Keith J Kawabata Duncan, Kristina DeDuck, Nicholas H Neufeld, Mohamed L Seghier.
Abstract
During word and object recognition, extensive activation has consistently been observed in the left ventral occipito-temporal cortex (vOT), focused around the occipito-temporal sulcus (OTs). Previous studies have shown that there is a hierarchy of responses from posterior to anterior vOT regions (along the y-axis) that corresponds with increasing levels of recognition, from perceptual to semantic processing, respectively. In contrast, the functional differences between superior and inferior vOT responses (i.e. along the z-axis) have not yet been elucidated. To investigate, we conducted an extensive review of the literature and found that peak activation for reading varies by more than 1 cm in the z-axis. In addition, we investigated functional differences between superior and inferior parts of left vOT by analysing functional MRI data from 58 neurologically normal skilled readers performing 8 different visual processing tasks. We found that group activation in superior vOT was significantly more sensitive than inferior vOT to the type of task, with more superior vOT activation when participants were matching visual stimuli for their semantic or perceptual content than when producing speech to the same stimuli. This functional difference along the z-axis was compared to existing boundaries between cytoarchitectonic areas around the OTs. In addition, using dynamic causal modelling, we show that connectivity from superior vOT to anterior vOT increased with semantic content during matching tasks but not during speaking tasks, whereas connectivity from inferior vOT to anterior vOT was sensitive to semantic content for both matching and speaking tasks. The finding of a functional dissociation between superior and inferior parts of vOT has implications for predicting deficits and response to rehabilitation for patients with partial damage to vOT following stroke or neurosurgery.
Keywords: Connectivity; Fusiform gyrus; Occipito-temporal sulcus; Reading and object recognition; fMRI
Year: 2019 PMID: 31176833 PMCID: PMC6693527 DOI: 10.1016/j.neuroimage.2019.06.003
Source DB: PubMed Journal: Neuroimage ISSN: 1053-8119 Impact factor: 6.556
Previous studies reporting left vOT activation during reading: A MEDLINE search was conducted (from January 2000 to October 2018) using the keywords (i) ‘Reading’, (ii) ‘fMRI’ or ‘magnetic resonance imaging’ and (iii) ‘occipitotemporal’, ‘occipito-temporal’, or ‘visual word form area’ to identify papers that had reported activation during reading in left vOT. Relevant references within these articles also directed us to other papers that were considered in the literature review. Altogether, we identified 213 articles. We then excluded: (i) reviews and meta-analyses (i.e. those not reporting original research), (ii) effects from subjects who were not neurologically or psychiatrically “normal” adults, or who had atypical learning, (iii) effects that were not related to visually presented words or pseudowords, (iv) effects not reported in standardized coordinates, (v) results of contrasts that compared visual stimuli to rest or fixation (because it was impossible to determine the level of cognitive processing that was driving activation), (vi) single case studies, (vii) coordinates related to laterality indices, (viii) effects in predefined regions of interest (region-based analyses), and (ix) studies published in non-English journals. Where appropriate, stereotactic Talairach coordinates were converted into Montreal Neurological Institute (MNI) space. For each study, we reported the location of the left vOT activation peak. The median of all vOT peaks is [x = −43 mm, y = −58 mm, z = −14.5 mm].
Activation contrasts were categorised as being related to: (1) changes in task demands, where subjects performed different tasks with the same set of stimuli, or (2) changes in stimulus demands, where subjects performed the same task with different sets of stimuli. Task-driven contrasts were further categorised into those primarily driven by visual (e.g. letter detection versus phoneme detection), semantic (e.g. semantic versus identity one-back matching), or general demands (e.g. one-back matching versus passive viewing). Stimulus-driven contrasts were further categorised into those primarily driven by visual differences (e.g. written words versus pictures of objects), linguistic content (e.g. words versus false fonts), a combination of visual differences and linguistic content (e.g. words versus checkerboards), semantic content (e.g. high versus low imageable words), general demands (e.g. unfamiliar versus familiar words), or stimulus primes (i.e. less activation when stimuli were preceded by identical ones). In some papers, superior peaks at z ≥ −12 mm were labelled as inferior occipital gyrus instead of vOT.
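The paper does not state which Talairach-to-MNI conversion was applied. As an illustration only, the sketch below implements one widely used option, Matthew Brett's piecewise-affine MNI-to-Talairach approximation, and inverts it to map Talairach coordinates into MNI space. The coefficients and the branch-on-sign rule are assumptions of this sketch, not a statement of the authors' actual method.

```python
import numpy as np

# Matthew Brett's piecewise-affine MNI -> Talairach approximation
# (assumed here; the paper does not name its conversion method).
def mni2tal(p):
    x, y, z = p
    if z >= 0:
        A = np.array([[0.9688, 0.0460], [-0.0485, 0.9189]])
    else:
        A = np.array([[0.9688, 0.0420], [-0.0485, 0.8390]])
    y2, z2 = A @ np.array([y, z])
    return np.array([0.99 * x, y2, z2])

# Inverse mapping, Talairach -> MNI: solve the 2x2 system for y and z.
# The branch is chosen from the sign of the Talairach z, which is only
# an approximation for points very close to z = 0.
def tal2mni(p):
    x, y, z = p
    if z >= 0:
        A = np.array([[0.9688, 0.0460], [-0.0485, 0.9189]])
    else:
        A = np.array([[0.9688, 0.0420], [-0.0485, 0.8390]])
    y2, z2 = np.linalg.solve(A, np.array([y, z]))
    return np.array([x / 0.99, y2, z2])

# Round-trip check on the median vOT peak reported in the text.
mni = np.array([-43.0, -58.0, -14.5])
print(np.round(tal2mni(mni2tal(mni)), 3))  # [-43.  -58.  -14.5]
```

Because both the forward and inverse maps use the same matrix for this (clearly negative-z) point, the round trip is exact up to floating-point error.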
Table 1. MNI coordinates of the left vOT activation peak in each of the 52 selected studies, and the factor driving activation in each contrast.

| x (mm) | y (mm) | z (mm) | Factor driving activation |
|---|---|---|---|
| −39 | −57 | −9 | Stimuli: visual/linguistic content |
| −42 | −55 | −10 | Stimuli: general demands |
| −43 | −56 | −10 | Task: semantic demands |
| −40 | −56 | −10 | Stimuli & task: visual/linguistic |
| −46 | −56 | −11 | Stimuli: general demands |
| −42 | −53 | −12 | Task: general demands |
| −44 | −54 | −12 | Stimuli: general demands |
| −45 | −57 | −12 | Stimuli: primes |
| −52 | −49 | −13 | Stimuli: visual content |
| −36 | −48 | −14 | Stimuli: general demands |
| −48 | −54 | −14 | Stimuli: visual/linguistic content |
| −40 | −56 | −14 | Stimuli: visual/linguistic content |
| −42 | −60 | −8 | Stimuli: general demands |
| −44 | −64 | −8 | Stimuli: primes |
| −43 | −66 | −9 | Stimuli: linguistic content |
| −43 | −70 | −9 | Task: visual demands |
| −44 | −60 | −10 | Stimuli: general demands |
| −44 | −62 | −10 | Stimuli: general demands |
| −46 | −62 | −10 | Stimuli: general demands |
| −42 | −70 | −10 | Stimuli: linguistic content |
| −40 | −62 | −10 | Stimuli: visual/linguistic content |
| −45 | −58 | −11 | Stimuli: general demands |
| −41 | −60 | −12 | Stimuli & task: visual/linguistic |
| −40 | −66 | −12 | Stimuli: visual/linguistic content |
| −48 | −58 | −14 | Stimuli: general demands |
| −48 | −58 | −14 | Task: semantic demands |
| −44 | −52 | −15 | Stimuli: general demands |
| −44 | −55 | −15 | Stimuli: visual content |
| −42 | −57 | −15 | Stimuli: visual/linguistic content |
| −45 | −50 | −16 | Stimuli: semantic content |
| −40 | −54 | −16 | Stimuli: linguistic content |
| −40 | −56 | −16 | Stimuli: visual/linguistic content |
| −42 | −52 | −17 | Stimuli: general demands |
| −42 | −50 | −18 | Stimuli: general demands |
| −42 | −54 | −18 | Stimuli: linguistic content |
| −39 | −46 | −20 | Stimuli: visual/linguistic content |
| −44 | −52 | −20 | Stimuli: primes |
| −46 | −52 | −20 | Stimuli: linguistic content |
| −43 | −60 | −15 | Task: semantic demands |
| −48 | −60 | −15 | Stimuli: general demands |
| −42 | −63 | −15 | Stimuli: visual/linguistic content |
| −39 | −66 | −15 | Stimuli: primes |
| −41 | −58 | −16 | Stimuli: linguistic content |
| −40 | −58 | −16 | Stimuli: primes |
| −44 | −64 | −16 | Task: visual demands |
| −44 | −64 | −16 | Stimuli: general demands |
| −48 | −64 | −16 | Stimuli: semantic content |
| −45 | −58 | −17 | Stimuli: general demands |
| −42 | −60 | −18 | Stimuli: visual/linguistic content |
| −42 | −60 | −18 | Stimuli: linguistic content |
| −48 | −60 | −18 | Stimuli: general demands |
| −40 | −64 | −18 | Stimuli: visual/linguistic content |
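The median peak reported in the text can be reproduced directly from the data above. A minimal Python sketch, with the z coordinates transcribed from the table in row order:

```python
import statistics

# z coordinates (mm) of the 52 left vOT peaks listed in the table above
z_peaks = [
    -9, -10, -10, -10, -11, -12, -12, -12, -13, -14, -14, -14,
    -8, -8, -9, -9, -10, -10, -10, -10, -10, -11, -12, -12, -14, -14,
    -15, -15, -15, -16, -16, -16, -17, -18, -18, -20, -20, -20,
    -15, -15, -15, -15, -16, -16, -16, -16, -16, -17, -18, -18, -18, -18,
]

# With 52 values, the median is the mean of the 26th and 27th sorted
# values (-15 and -14), matching the z = -14.5 mm reported in the text.
print(statistics.median(z_peaks))  # -14.5
```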
Fig. 1 Literature review. This schematic figure illustrates the wide spatial variability of vOT localisation across functional imaging studies of reading. Each vOT peak represents the result of one of the 52 selected studies listed in Table 1: vOT coordinates above or below the median z = −15 mm are shown in red (‘x’) or blue (‘o’), respectively. The y-axis (z = 0 mm) and the z-axis (y = 0 mm) are shown as grey lines, with their intersection at the anterior commissure (AC) in black. The background of this figure is a sagittal view of SPM's tissue probability map of CSF at x = −44 mm.
Fig. 2 Experimental design. Our experimental paradigm manipulated the factors “stimulus type” (letter strings versus pictures), “familiarity” (familiar words and objects versus unfamiliar Greek letter strings and non-objects) and “task” (matching versus speaking). In all trials, three stimuli were presented simultaneously as a “triad”, with one stimulus above and two stimuli below. In the matching tasks, participants made a finger-press response to indicate semantic matching decisions on words and objects (e.g. matching ‘Piano’ to ‘Harp’ rather than ‘Oven’) and perceptual matching decisions on the unfamiliar stimuli (based on physical identity). In the speaking tasks, participants read or named aloud the familiar words and objects and said “1,2,3” in response to seeing the unfamiliar stimuli.
Fig. 3 Activation findings. (A) Activation for matching versus speaking tasks (green), familiar versus unfamiliar stimuli (yellow), and all tasks versus fixation baseline (red), projected on a sagittal view (at x = −44 mm) of the Anatomy Toolbox's cytoarchitectonic maps in MNI space. (B) Brain activation (effect size) for all eight task conditions at the peak voxels for: the task effect in superior vOT [−46, −58, −10]; the familiarity effect in anterior vOT [−44, −50, −16]; and all conditions compared to fixation in inferior vOT [−44, −60, −18] and posterior fusiform [−40, −74, −14]. The bars report average activation estimates, with error bars indicating 90% confidence intervals. Abbreviations: W = words, O = objects, L = (Greek) letters, N = non-objects, FG2 and FG4 = cytoarchitectonic areas of the Anatomy Toolbox, ‘+’ = location of the peaks of interest used in the plots below.
In-scanner accuracy and response times
Accuracy (mean, minimum, maximum, and standard deviation) is reported for all 8 conditions. Response times were only available for the matching tasks (post-decision finger-press speed), not for the speaking tasks, due to difficulties extracting voice onset from the noise of the scanner.
| Task | Stimuli | Accuracy: mean (%) | Min | Max | SD | RT: mean (s) | Min | Max | SD |
|---|---|---|---|---|---|---|---|---|---|
| Matching | Objects | 93.5 | 75 | 100 | 4.5 | 1.7 | 1.3 | 2.6 | 0.3 |
| Matching | Words | 90.3 | 81 | 100 | 5.4 | 1.8 | 1.2 | 2.3 | 0.3 |
| Matching | Non-objects | 98 | 81 | 100 | 3.4 | 1.1 | 0.7 | 1.8 | 0.2 |
| Matching | Greek letters | 99.1 | 88 | 100 | 3.0 | 1.1 | 0.7 | 1.7 | 0.2 |
| Speaking | Objects | 99.8 | 83 | 100 | 0.9 | not available | | | |
| Speaking | Words | 96.2 | 94 | 100 | 3.8 | | | | |
| Say “1,2,3” | Non-objects | 100 | 100 | 100 | 0 | | | | |
| Say “1,2,3” | Greek letters | 100 | 100 | 100 | 0 | | | | |
Table 3. Connection strengths (in Hz): strength of endogenous and modulatory connections during matching (A) and speaking (B) tasks. M = mean, SD = standard deviation; asterisks mark significant effects (p < .05). Abbreviations: Pos = input region in posterior FG2, Sup = superior (middle) vOT (posterior-superior FG4), Inf = inferior (middle) vOT (anterior-inferior FG2), Ant = anterior vOT (FG4).

A. Matching tasks

| From | To | M (endogenous) | SD | t | p | M (semantic > perceptual) | SD | t | p | M (objects > words) | SD | t | p |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Pos | Sup | 0.21 | 0.02 | 12.83 | <.001* | 0.27 | 0.03 | 8.45 | <.001* | 0.03 | 0.03 | 0.86 | >.2 |
| Pos | Inf | 0.27 | 0.02 | 12.56 | <.001* | 0.34 | 0.04 | 8.43 | <.001* | 0.06 | 0.03 | 1.96 | = .056 |
| Sup | Pos | 0.02 | 0.04 | 0.30 | >.2 | −0.08 | 0.04 | −2.04 | = .047* | 0.04 | 0.04 | 0.75 | >.2 |
| Sup | Inf | −0.13 | 0.02 | −5.83 | <.001* | −0.10 | 0.04 | −2.65 | = .011* | −0.12 | 0.04 | −3.47 | = .001* |
| Sup | Ant | 0.04 | 0.02 | 1.31 | = .197 | | | | < | −0.02 | 0.02 | −1.01 | >.2 |
| Inf | Pos | 0.09 | 0.04 | 2.41 | = .020* | −0.13 | 0.02 | −5.46 | <.001* | 0.09 | 0.03 | 2.68 | = .010* |
| Inf | Sup | −0.18 | 0.01 | −13.95 | <.001* | −0.06 | 0.04 | −1.81 | = .076 | −0.05 | 0.03 | −1.66 | = .104 |
| Inf | Ant | 0.09 | 0.02 | 4.93 | <.001* | | | | = | 0.01 | 0.02 | 0.32 | >.2 |
| Ant | Sup | 0.09 | 0.02 | 5.31 | <.001* | −0.21 | 0.03 | −6.96 | <.001* | −0.10 | 0.03 | −3.09 | = .003* |
| Ant | Inf | 0.00 | 0.04 | −0.15 | >.2 | −0.27 | 0.04 | −7.11 | <.001* | −0.13 | 0.02 | −7.09 | <.001* |
Fig. 4 Connectivity findings. (A) Localisation of regions of interest projected onto a sagittal view of a canonical structural brain image. Additionally, activation for reading versus fixation baseline is shown in white. Abbreviations: Ant = anterior vOT, Sup = superior posterior vOT, Inf = inferior posterior vOT, Pos = posterior input region. (B) Modulatory (words and objects > unfamiliar stimuli) connections between the four regions of interest included in the dynamic causal modelling (DCM) analysis. Solid lines: significant modulations (p < 0.05); dashed lines: no significant modulations; plus ‘+’ sign: positive modulations; minus ‘−’ sign: negative modulations; blue dots: stronger modulations for word than picture stimuli; red dots: stronger modulations for picture than word stimuli (see Table 3 for a list of all effects). (C) Task by connection interaction. Bars represent average modulatory connection strengths (in Hz) from superior to anterior vOT and from inferior to anterior vOT during the matching versus the speaking tasks. Error bars represent ± 1 standard error of the mean.