Philippe Albouy, Anne Caclin, Sam V. Norman-Haignere, Yohana Lévêque, Isabelle Peretz, Barbara Tillmann, Robert J. Zatorre.
Abstract
Machine learning classification techniques are frequently applied to structural and resting-state fMRI data to identify brain-based biomarkers for developmental disorders. However, task-related fMRI has rarely been used as a diagnostic tool. Here, we used structural MRI, resting-state connectivity and task-based fMRI data to detect congenital amusia, a pitch-specific developmental disorder. All approaches discriminated amusics from controls in meaningful brain networks at similar levels of accuracy. Interestingly, the classifier outcome was specific to deficit-related neural circuits, as the group classification failed for fMRI data acquired during a verbal task for which amusics were unimpaired. Most importantly, classifier outputs of task-related fMRI data predicted individual behavioral performance on an independent pitch-based task, while this relationship was not observed for structural or resting-state data. These results suggest that task-related imaging data can potentially be used as a powerful diagnostic tool to identify developmental disorders as they allow for the prediction of symptom severity.
Keywords: brain-based biomarkers; diagnostic; multivariate pattern analysis (MVPA); rs-fMRI; sMRI; task-based fMRI; tone deafness
Year: 2019 PMID: 31736698 PMCID: PMC6831619 DOI: 10.3389/fnins.2019.01165
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Demographic characteristics of the full sample of amusics and controls. Values are mean (SD).
| Measure | Amusics | Controls | |
| Age in years | 42.4 (14.6) | 40.8 (14.0) | |
| Gender | 11F, 7M | 11F, 7M | N/A |
| Education in years | 15.0 (3.6) | 13.9 (3.1) | |
| Musical education in years | 0.83 (1.4) | 0.33 (1.0) | |
| MBEA (…) | 20.9 (1.5) | 26.7 (1.4) | |
| PDT (…) | 0.90 (0.88) | 0.22 (0.15) | |
FIGURE 1 (A) Pitch localizer, schematic of the experimental design. fMRI responses were measured to harmonic tones and Gaussian noise spanning the same frequency range. Stimuli (denoted by horizontal bars) were presented in a block design, with six stimuli from the same condition presented successively in each block (red and blue indicate different conditions). Each stimulus (2 s) included several notes that varied in frequency to minimize adaptation. Cochleograms are shown for an example harmonic tone stimulus (red bar) and an example noise stimulus (blue bar). Cochleograms plot time–frequency decompositions, similar to a spectrogram, that summarize the cochlea’s response to sound. After each stimulus, a single scan was collected (vertical gray bars). Adapted from Norman-Haignere et al. (2016). (B) Auditory tasks. Examples of the stimuli used in the Memory and Perception Tasks. Memory Task: participants compared sequences (tones or words) presented in pairs. For “same” trials, the first sequence was repeated as the second sequence of the pair after a 9000 ms delay. For “different” trials, the second sequence of the pair differed by a single item (in positions 1 to 3, red square). For tonal material, the new item changed the melodic contour. Perception Task: participants compared the last two items (tones or words) of the second sequence, regardless of the first sequence. For “same” trials, the last two items of the second sequence were identical; for “different” trials, they differed. Adapted from Albouy et al. (2019). (C) Design of the fMRI experiment and timeline of events within one trial. The S1 sequence (pitch sequences, words) lasted 750 ms and was followed by a constant 9000 ms silent delay, during which 3000 ms of functional data were acquired; the delay was followed by the second sequence (S2, 750 ms). Participants had 2000 ms to respond, and the next trial began 2500 to 3000 ms after the end of S2. A 0 to 500 ms jitter was added at the beginning of each trial to maximize detection of the task-related BOLD response. Depending on the run, the whole-brain volume was acquired at one of two time points. Left panel: for Encoding runs (two runs, pitch material only), acquisition started 3500 to 4000 ms after the end of the S1 sequence. For Maintenance runs (two runs for pitch tasks and two for verbal tasks), the volume acquisition occurred just before the second sequence (at the end of the silent delay), thus starting 5500 to 6000 ms after the end of S1. Adapted from Albouy et al. (2019). (D) Right panel: performance of the amusic and control groups (white, Controls; red, Amusics) on the short-term memory tasks, expressed as d-prime and presented as a function of Material (pitch, words) and Group (amusics, controls). Error bars indicate SEM. Adapted from Albouy et al. (2019). (E) Group classification results for structural (sMRI), resting-state functional connectivity (rs-fMRI), and task-related fMRI data [pitch localizer (PL); pitch memory (PM); verbal memory (VM)]. Results are expressed as area under the receiver operating characteristic curve (AUC), computed from the distance of each classification output to the decision boundary. Violin plots represent the mean and median AUC in brain regions that significantly classified amusics versus controls, as revealed by searchlight analysis (black dots indicate significant searchlights for each analysis).
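As a minimal illustration of the AUC metric described above (not the authors' code), the sketch below computes AUC directly from classifier decision values: AUC equals the probability that a randomly chosen participant from one group receives a higher decision value than a randomly chosen participant from the other. The function name and the example values are hypothetical.

```python
def auc_from_decision_values(values, labels):
    """AUC from signed decision values (e.g., distance to an SVM
    boundary): the probability that a randomly chosen positive case
    (label 1) scores higher than a randomly chosen negative (label 0).
    Ties count as half a win."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical decision values for three amusics (label 1)
# and three controls (label 0):
vals = [1.2, -0.4, 0.8, 0.3, -1.1, -0.6]
labs = [1, 1, 1, 0, 0, 0]
print(round(auc_from_decision_values(vals, labs), 3))  # → 0.889
```

An AUC of 0.5 corresponds to chance-level group discrimination, 1.0 to perfect separation; the searchlight analysis reports this value per local neighborhood of voxels.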
FIGURE 2 Group classification results for structural data (White Matter). Results are displayed on a single-participant T1 in the MNI space provided by SPM12. Bar plots represent sensitivity (red) and specificity (white) of the classifier.
FIGURE 3 Group classification for the resting-state connectivity data. Classification was performed on whole-brain connectivity maps with seeds in the left (A) and right (B) auditory cortices. Results are displayed on a single-participant T1 in the MNI space provided by SPM12. Bar plots represent sensitivity (red) and specificity (white) of the classifier.
Coordinates of regions showing significantly above-chance decoding for the rs-fMRI data. AC, auditory cortex.
| Seed | Hemisphere | Region | MNI coordinates (x y z) | Cluster size |
| Right AC | L | Middle cingulate gyrus | −10 10 42 | 606 |
| | | Gyrus rectus∗ | −4 23 −24 | 313 |
| | | Angular gyrus∗ | −35 −63 45 | 432 |
| | | Inferior temporal gyrus | −57 −38 −23 | 528 |
| Left AC | R | Angular gyrus∗ | 57 −50 28 | 336 |
| | | Middle frontal gyrus∗ | 41 26 35 | 404 |
| | | Superior frontal gyrus | 17 58 25 | 259 |
| | | Post-central gyrus | 17 −36 77 | 258 |
FIGURE 4 Group classification results for task-related functional imaging. (A) Group classification for the tonal short-term memory data. (B) Group classification for the pitch localizer data. The scatter plot shows classification decision values plotted against behavioral performance on a pitch memory task. Results are displayed on a single-participant T1 in the MNI space provided by SPM12. Bar plots represent sensitivity (red) and specificity (white) of the classifier.
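The brain–behavior relationship summarized in the Figure 4 scatter plot (decision values vs. pitch memory performance) amounts to a correlation between two per-participant vectors. The sketch below, which is illustrative and not the authors' analysis code, computes a Pearson correlation from scratch; the function name and example data are hypothetical.

```python
import math

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences, e.g.,
    classifier decision values and d-prime scores per participant."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data for six participants: signed distance to the
# decision boundary, and d-prime on an independent pitch task.
decision = [-1.2, -0.5, 0.1, 0.6, 1.0, 1.4]
dprime = [0.4, 0.9, 1.1, 1.8, 2.2, 2.5]
print(round(pearson_r(decision, dprime), 3))
```

A significant positive correlation of this kind is what allows classifier outputs to serve not only as a group label but as a graded predictor of symptom severity.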