
Functional connectivity classification of autism identifies highly predictive brain features but falls short of biomarker standards.

Mark Plitt, Kelly Anne Barnes, Alex Martin.

Abstract

OBJECTIVES: Autism spectrum disorders (ASD) are diagnosed based on early-manifesting clinical symptoms, including markedly impaired social communication. We assessed the viability of resting-state functional MRI (rs-fMRI) connectivity measures as diagnostic biomarkers for ASD and investigated which connectivity features are predictive of a diagnosis.
METHODS: Rs-fMRI scans from 59 high-functioning males with ASD and 59 age- and IQ-matched typically developing (TD) males were used to build a series of machine learning classifiers. Classification features were obtained using 3 sets of brain regions. Another set of classifiers was built from participants' scores on behavioral metrics. An additional age- and IQ-matched cohort of 178 individuals (89 ASD; 89 TD) from the Autism Brain Imaging Data Exchange (ABIDE) open-access dataset (http://fcon_1000.projects.nitrc.org/indi/abide/) was included for replication.
RESULTS: High classification accuracy was achieved through several rs-fMRI methods (peak accuracy 76.67%). However, classification via behavioral measures consistently surpassed rs-fMRI classifiers (peak accuracy 95.19%). The class probability estimates, P(ASD|fMRI data), from brain-based classifiers significantly correlated with scores on a measure of social functioning, the Social Responsiveness Scale (SRS), as did the most informative features from 2 of the 3 sets of brain-based features. The most informative connections predominantly originated from regions strongly associated with social functioning.
CONCLUSIONS: While individuals can be classified as having ASD with statistically significant accuracy from their rs-fMRI scans alone, this method falls short of biomarker standards. Classification methods provided further evidence that ASD functional connectivity is characterized by dysfunction of large-scale functional networks, particularly those involved in social information processing.

Keywords:  Autism; Biomarkers; Machine learning classification; Social brain

Year:  2014        PMID: 25685703      PMCID: PMC4309950          DOI: 10.1016/j.nicl.2014.12.013

Source DB:  PubMed          Journal:  Neuroimage Clin        ISSN: 2213-1582            Impact factor:   4.881


Introduction

Autism spectrum disorders (ASD) are clinically characterized by marked social and communication impairments as well as restricted interests and repetitive behaviors. Diagnosis is typically made in early childhood based on clinical interviews and observation of behavior. There is a significant need for biomarkers to improve diagnostic precision when behavioral symptoms are equivocal and to identify infants or young children who might be at risk for ASD before reliable behavioral symptoms manifest (Yerys and Pennington, 2011). Recent studies have applied multivariate classification techniques to neuroimaging data to characterize ASD using features that are predictive of a diagnosis at the level of individuals. These classifier studies achieved relatively high classification accuracy (~60–85%) using multiple imaging modalities, including structural MRI (Sato et al., 2013; Ecker et al., 2010), diffusion tensor MRI (DTI) (Ingalhalikar et al., 2012; Lange et al., 2010), magnetoencephalography (Roberts et al., 2011), and resting-state functional MRI (rs-fMRI, which measures "functional connectivity": correlations between spontaneous BOLD signal fluctuations in different brain regions) (Uddin et al., 2013; Nielsen and Zielinski, 2013; Anderson et al., 2011). Rs-fMRI is a particularly interesting technique because it can investigate, in a task-independent manner, the hypothesis that ASD involves the disruption of large-scale brain networks (Castelli et al., 2002; Belmonte et al., 2004). These multivariate techniques have provided convergent evidence about brain differences that underlie ASD and unveiled additional informative brain features. Given the recent success of these neuroimaging methods, it is tempting to cite these findings as grounds for establishing a neuroimaging-based diagnostic biomarker for ASD.
However, several benchmarks must be met to fulfill the promise of neuroimaging-based biomarkers: establishing standard analytic techniques, as methodological factors influence connectivity measures (Jo et al., 2013; Gotts et al., 2013; Power et al., 2014); demonstrating biomarkers' robustness to variability across larger numbers of individuals and sites (to date, only one multisite classifier study exists (Nielsen and Zielinski, 2013)); and addressing the diagnostic potential of brain-based biomarkers by comparing their diagnostic or prognostic accuracy to that of simpler, more easily obtained ratings of behavior. The present study examines each of these issues. First, we determined the best methods for classifying ASD vs. TD participants using rs-fMRI data by applying several popular classification techniques to three separate sets of brain-based features. We also addressed classifier generalizability by including a large in-house cohort of high-functioning individuals with ASD and typically developing (TD) individuals (118 total participants) and a replication cohort drawn from the ABIDE dataset (178 individuals). Given the similar accuracies achieved with different methods in previous rs-fMRI ASD classification studies (Uddin et al., 2013; Nielsen and Zielinski, 2013; Anderson et al., 2011), we expected little effect of classifier method or brain-region set. Second, to determine the upper bound of diagnostic performance using machine learning classification, we tested whether classification algorithms based on rs-fMRI data perform comparably to classifiers based on questionnaire data from the Social Responsiveness Scale (SRS) (Constantino and Gruber, 2005). This questionnaire was expected to be highly predictive of ASD diagnoses because it measures social functioning, the hallmark deficit in ASD.
While the SRS has been validated against "gold standard" interview and observation schedules (Lord et al., 1994; Lord et al., 2000), it is independent of the actual diagnostic criteria. Classifying participants as having a disorder characterized by social functioning deficits on the basis of a measure of social functioning is somewhat circular; however, the simplicity of the SRS and the ease of its administration make it an important benchmark of diagnostic utility for rs-fMRI-based classification. In addition, such a behavioral classifier provides a more realistic ceiling on classifier performance that is tailored to the dataset in question. It is important to clarify that the SRS cannot itself be a biomarker, as it is a clinical measure of social impairment designed to interrogate autistic symptoms. Performing classification on these measures simply estimates how well these individuals can be distinguished using a continuous measure of behavior that is independent of the diagnosis itself. Finally, we investigated which connectivity features and brain networks are most predictive of ASD and, further, which connections track individual symptom expression. We identified a dispersed set of connections throughout the brain that were highly predictive of an ASD diagnosis. Classification accuracy increased when regions beyond those seen in meta-analyses of task-based fMRI studies were included.

Methods and materials

Participants

NIMH

Fifty-nine typically developing (TD) male participants (mean age ± standard deviation (SD) = 18.3 ± 3.05) and 59 high-functioning participants with an autism spectrum disorder (ASD, mean age ± SD = 17.66 ± 2.72) took part in the study, including 29 ASD and 28 TD participants previously described (Gotts et al., 2012). Participants with ASD were recruited from the Washington, DC, metropolitan area and met Diagnostic and Statistical Manual-IV diagnostic criteria as assessed by an experienced clinician. Scores on the SRS (Constantino and Gruber, 2005), an informant-based rating scale used to assess social and communication traits quantitatively, were obtained from parents for all ASD participants and 45 TD participants. Participant groups did not differ in terms of full-scale IQ or age (Table 1). Informed assent and consent were obtained from all participants and/or their parent/guardian when appropriate in accordance with the National Institutes of Health Institutional Review Board approved protocol. See Appendix A.1 and Table 1 for further details.
Table 1

Demographic characteristics of in-house cohort.

Measure | TD (N = 59) Mean | TD SD | ASD (N = 59) Mean | ASD SD
Age | 18.3 | 3.05 | 17.66 | 2.72
IQ | 115.76 | 11.70 | 111.02 | 15.87
ADOS: soc + comm | | | 11.69 | 4.16
SRS | 19.82 | 11.54 | 91.75 | 30.20
Whole brain tSNR | 324.59 | 53.66 | 314.37 | 36.94
Average head movement (per TR) | 0.047 | 0.019 | 0.069 | 0.042

ABIDE

The ABIDE dataset is an open-access, multi-site image repository comprising structural and rs-fMRI scans from ASD and TD individuals (Di Martino et al., 2014). Acquisition parameters and protocol information can be found at http://fcon_1000.projects.nitrc.org/indi/abide/. Of the five sites with the most subjects meeting the following criteria, data from three were included in our analyses: males with a full-scale IQ > 80 and age within one standard deviation of the range of our in-house sample. The other sites were excluded due to excessive difficulties with anatomical FreeSurfer parcellation. The included sites were New York University (NYU), University of Utah School of Medicine (USM), and University of California Los Angeles 1 (UCLA_1). Participants were included if their scans met quality assurance standards (see Appendix A.2). These inclusion criteria, plus an additional step for matching ASD and TD prevalence, resulted in a cohort of 178 individuals (89 TD; 89 ASD). Participant demographic and clinical data are provided in Inline Supplementary Table S1.

fMRI acquisition

Functional MRI data were collected using a GE, Signa 3T whole-body MRI scanner at the NIH Clinical Center NMR Research Facility. For each participant, a high-resolution T1-weighted anatomical image (MPRAGE) was obtained (124 axial slices, 1.2 mm slice thickness, field of view = 24 cm, 224 × 224 acquisition matrix). Spontaneous brain activity was measured during functional MRI using a gradient-echo echo-planar series with whole-brain coverage while participants maintained fixation on a central cross and were instructed to lie still and rest quietly (repetition time = 3500 ms, echo time = 27 ms, flip angle = 90°, 42 axial interleaved slices per volume, 3.0 mm slice thickness, field of view = 22 cm, 128 × 128 acquisition matrix, single-voxel volume = 1.7 × 1.7 × 3.0 mm). Each resting scan lasted 8 min, 10 s for a total of 140 consecutive whole-brain volumes. A GE 8-channel send–receive head coil was used for all scans, with a SENSE factor of 2 used to reduce gradient coil heating during the session.

fMRI preprocessing

fMRI data were preprocessed using the AFNI software package (Cox, 1996) in accordance with pipelines recommended by Jo et al. (2013), with one exception: we did not employ cardiac and respiratory denoising, so that a common preprocessing pipeline could be used on ABIDE data that lacked physiological measures. See Appendix A.2 for further details.

Connectivity measures and feature matrices

Three sets of regions of interest (ROIs) were used to create three separate fMRI timecourse correlation matrices from subjects' processed EPI time series. These ROI sets comprised one set of 49 spherical regions (5 mm radius) derived from coordinates in Di Martino et al. (2009), one set of 264 spherical regions (5 mm radius) from Power et al. (2011), and one set of 162 cortical and subcortical ROIs from each subject's FreeSurfer Destrieux-atlas anatomical segmentation. Timecourses were extracted and averaged within each region. Linear correlations were computed between the average timecourses of each pair of regions in an ROI set and Fisher transformed. For each ROI set, this process yielded an S × N feature matrix, F, for use in classification, where S = number of subjects and N = number of features (Fisher-transformed correlation values, one per unique ROI pair). F has an associated label vector, L, containing the diagnoses of the participants (ASD or TD) coded as a binary variable. Subjects' clinical and demographic data (age, IQ, and scores on the SRS sub-scales for social awareness, social cognition, social communication, social motivation, and autism mannerisms) were used to create an additional feature matrix for the NIMH cohort. TD participants who did not have an SRS score were excluded from this classifier.
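The construction of F can be sketched as follows (a minimal illustration with synthetic timecourses; the helper name `connectivity_features` and the array sizes are ours, not the study's):

```python
import numpy as np

def connectivity_features(timecourses):
    """One subject's ROI-averaged timecourses (T timepoints x R ROIs) ->
    vector of Fisher-transformed pairwise correlations (one per ROI pair)."""
    r = np.corrcoef(timecourses.T)           # R x R correlation matrix
    iu = np.triu_indices_from(r, k=1)        # unique ROI pairs (upper triangle)
    return np.arctanh(r[iu])                 # Fisher z-transform

# Illustrative sizes: 140 volumes per scan, 49 ROIs (the DiMartino set size)
rng = np.random.default_rng(0)
scans = [rng.standard_normal((140, 49)) for _ in range(4)]
F = np.vstack([connectivity_features(tc) for tc in scans])   # subjects x features
L = np.array([1, 1, 0, 0])                   # label vector: 1 = ASD, 0 = TD
print(F.shape)                               # (4, 1176): 49 * 48 / 2 features
```

With 49 ROIs this yields 1,176 features per subject; the 264-region Power set yields 34,716, which is why S (subjects) is far smaller than N (features) in all of the brain-based classifiers.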

Classification of region × region correlation matrices

Classification algorithms were implemented using Scikit-learn (Pedregosa et al., 2011). Leave-one-out (LOO) cross-validation was performed on each F using the following classification algorithms: Random forest (RF), K-Nearest Neighbor (KNN), Linear Support Vector Machines (L-SVM), Gaussian kernel support vector machines (rbf-SVM), L1-regularized logistic regression (L1LR), L2-regularized logistic regression (L2LR), Elastic-net-regularized logistic regression (ENLR), Gaussian Naïve Bayes (GNB), and Linear Discriminant Analysis (LDA). For each algorithm we report accuracy, sensitivity (proportion of ASD individuals correctly classified), specificity (proportion of TD individuals correctly classified), positive predictive value (PPV), and negative predictive value (NPV). Statistical significance was estimated using permutation tests. For the highest performing classifiers, cross-validation was repeated using stratified-3-fold and stratified-10-fold techniques. See Appendixes A.3 and A.4 for descriptions of cross-validation, hyperparameter tuning, and permutation testing.
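This evaluation loop can be sketched as follows (synthetic data in place of the real feature matrices; only two of the nine algorithms are shown):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.svm import SVC

# Synthetic stand-ins for a feature matrix F and label vector L
rng = np.random.default_rng(1)
F = rng.standard_normal((40, 100))
L = rng.integers(0, 2, size=40)

classifiers = {
    "L-SVM": SVC(kernel="linear"),
    "L2LR": LogisticRegression(penalty="l2", max_iter=1000),
}
for name, clf in classifiers.items():
    # One held-out prediction per subject (leave-one-out cross-validation)
    pred = cross_val_predict(clf, F, L, cv=LeaveOneOut())
    tp = np.sum((pred == 1) & (L == 1)); fn = np.sum((pred == 0) & (L == 1))
    tn = np.sum((pred == 0) & (L == 0)); fp = np.sum((pred == 1) & (L == 0))
    acc = (tp + tn) / len(L)
    sens = tp / max(tp + fn, 1)              # proportion of ASD correctly classified
    spec = tn / max(tn + fp, 1)              # proportion of TD correctly classified
    ppv = tp / max(tp + fp, 1); npv = tn / max(tn + fn, 1)
    print(f"{name}: acc={acc:.2f} sens={sens:.2f} spec={spec:.2f} "
          f"ppv={ppv:.2f} npv={npv:.2f}")
```

Significance would then be assessed by repeating the same loop with permuted label vectors, in the spirit of the permutation tests described above.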

Results

Comparison of classifier and feature set performance

In order to determine the best methods for classifying ASD vs. TD participants using rs-fMRI, we performed LOO cross-validation on our in-house cohort using nine popular classification algorithms and features derived from three different ROI sets. High LOO cross-validation accuracy was achieved with several machine-learning algorithms across all three ROI sets (Table 2). However, two classification algorithms, L2LR (average accuracy 73.33%) and L-SVM (average accuracy 73.89%), consistently performed best. Additionally, the two larger ROI sets, the Power (264 regions) and Destrieux (162 regions) sets, yielded higher-performing classifiers than the DiMartino (49 regions) ROI set.
Table 2

Cross-validation performance for the in-house cohort using the DiMartino, Power, and Destrieux ROI sets as well as a behavioral classifier.

Classifier type | Accuracy | Sensitivity | Specificity | PPV | NPV

LOO cross-validation

DiMartino ROI set
RF | 66.67 | 71.67 | 61.67 | 65.15 | 68.52
KNN | 60.83 | 70.00 | 51.67 | 59.15 | 63.27
L-SVM | 69.17 | 71.67 | 66.67 | 68.25 | 70.18
RBF-SVM | 66.67 | 76.67 | 56.67 | 63.89 | 70.83
GNB | 60.83 | 73.33 | 48.33 | 58.67 | 64.44
LDA | 66.67 | 68.33 | 65.00 | 66.13 | 67.24
L1LR | 61.67 | 58.33 | 65.00 | 62.50 | 60.94
L2LR | 67.50 | 66.67 | 68.33 | 67.80 | 67.21
ENLR | 72.50 | 70.00 | 75.00 | 73.68 | 71.43

Destrieux atlas
RF | 66.67 | 70.00 | 63.33 | 65.63 | 67.86
KNN | 68.33 | 65.00 | 71.67 | 69.64 | 67.19
L-SVM | 74.58 | 69.49 | 79.66 | 77.36 | 72.31
RBF-SVM | 76.67 | 70.00 | 83.33 | 80.77 | 73.53
GNB | 60.00 | 68.33 | 51.67 | 58.57 | 62.00
LDA | 74.17 | 71.67 | 76.67 | 75.44 | 73.02
L1LR | 70.83 | 75.00 | 66.67 | 69.23 | 72.72
L2LR | 76.67 | 75.00 | 78.33 | 77.59 | 75.81
ENLR | 72.50 | 71.67 | 73.33 | 72.88 | 72.13

Power ROI set
RF | 65.00 | 65.00 | 65.00 | 65.00 | 65.00
KNN | 65.00 | 73.33 | 56.67 | 62.86 | 68.00
L-SVM | 75.83 | 75.00 | 76.67 | 76.27 | 75.41
RBF-SVM | 70.83 | 73.33 | 68.33 | 69.84 | 71.93
GNB | 60.83 | 70.00 | 51.67 | 59.15 | 63.27
LDA | 69.17 | 73.33 | 65.00 | 67.69 | 70.91
L1LR | 65.83 | 70.00 | 61.67 | 64.62 | 67.27
L2LR | 75.83 | 75.00 | 76.67 | 76.27 | 75.41
ENLR | 72.50 | 75.00 | 70.00 | 71.43 | 73.68

Behavior
RF | 91.35 | 91.53 | 91.11 | 93.10 | 89.13
KNN | 93.26 | 91.53 | 95.56 | 96.43 | 89.58
L-SVM | 90.38 | 83.05 | 100.00 | 100.00 | 81.82
RBF-SVM | 91.35 | 84.75 | 100.00 | 100.00 | 83.33
GNB | 95.19 | 93.22 | 97.78 | 98.21 | 91.67
LDA | 88.46 | 81.36 | 97.78 | 97.96 | 80.00
L1LR | 93.27 | 93.22 | 93.33 | 94.83 | 91.30
L2LR | 94.23 | 93.22 | 95.56 | 96.49 | 91.49
ENLR | 95.19 | 94.92 | 95.56 | 96.55 | 93.48

Stratified-10-fold cross-validation

DiMartino ROI set
L-SVM | 69.39 | 66.33 | 73.00 | 73.57 | 68.24
L2LR | 67.65 | 67.67 | 67.33 | 66.95 | 69.52

Destrieux atlas
L-SVM | 74.55 | 71.67 | 77.00 | 81.51 | 75.12
L2LR | 79.09 | 73.33 | 85.00 | 83.33 | 77.38

Power ROI set
L-SVM | 75.30 | 73.00 | 78.00 | 79.55 | 76.26
L2LR | 73.56 | 72.67 | 74.00 | 77.67 | 76.07

Behavior
L-SVM | 91.36 | 84.67 | 100.00 | 100.00 | 84.81
L2LR | 94.27 | 93.33 | 95.50 | 96.90 | 92.33

Stratified-3-fold cross-validation

DiMartino ROI set
L-SVM | 72.05 | 79.56 | 64.56 | 69.19 | 75.98
L2LR | 65.34 | 67.72 | 62.54 | 66.18 | 65.94

Destrieux atlas
L-SVM | 72.93 | 71.40 | 74.65 | 73.70 | 73.00
L2LR | 73.68 | 69.47 | 77.98 | 75.69 | 72.85

Power ROI set
L-SVM | 76.26 | 76.14 | 76.32 | 77.01 | 77.34
L2LR | 74.57 | 78.07 | 71.23 | 74.17 | 77.25

Behavior
L-SVM | 94.26 | 93.33 | 95.56 | 96.75 | 92.98
L2LR | 94.23 | 94.91 | 93.33 | 94.91 | 93.33

RF = Random Forests, KNN = K-Nearest Neighbor, L-SVM = Linear Support Vector Machine, RBF-SVM = Gaussian Kernel Support Vector Machine, GNB = Gaussian Naïve Bayes, LDA = Linear Discriminant Analysis, L1LR = L1 Logistic Regression, L2LR = L2 Logistic Regression, ENLR = Elastic-net Logistic Regression.

LOO cross-validation has high variance in its estimates of a classifier's true prediction error (Hastie et al., 2009), so we additionally performed stratified-10-fold and stratified-3-fold cross-validation for L2LR and L-SVM. Classifier performance remained stable (Table 2). All L2LR and L-SVM cross-validation accuracies were significant for each methodological variant (p < .001; chance accuracy 47.27–51.24%). Behavioral features alone outperformed fMRI classifiers across all learning algorithms (peak accuracy 95.19%; Table 2). Once again, classification accuracy was highly significant (p < .001; chance accuracy 50.50–51.84%). It is possible that the lower accuracies of the fMRI-based classifiers reflect insufficient data to train the classifiers ("underfitting"). To address this possibility, we repeated the analysis on a combined pool of our in-house cohort and a matched cohort from the ABIDE dataset (combined N = 296). Cross-validation accuracy did not improve (Inline Supplementary Table S2). Cross-site differences, in either true underlying correlations or in nuisance variables, caused problems for these classifiers (Inline Supplementary Fig. S1): comparing all NIMH data to all ABIDE data, regardless of diagnosis, revealed whole-brain connectivity differences in all three ROI sets, and the ASD vs. TD group differences were incongruent between the NIMH and ABIDE cohorts. Removing linear effects of site identity from the correlation matrices or correcting for global level of correlation prior to cross-validation did not improve performance. There were no differences in temporal signal-to-noise ratio (tSNR) between ASD and TD scans in either the NIMH or the ABIDE cohort. Cross-validation performance on the ABIDE dataset alone was markedly lower than on the combined or in-house datasets (Inline Supplementary Table S3). This suggests that sample size is not the sole factor limiting accuracy; across all methods, rs-fMRI-based classifiers plateaued at ~75% accuracy.
Another possible explanation for the plateaued performance of the rs-fMRI classifiers is the large number of uninformative features included in the model. However, neither dimensionality reduction of the feature matrices (principal components analysis fit on the training data of each fold, with the test data projected onto the resulting components) nor univariate feature selection (t-test filters that retain only the features differing between ASD and TD scans in the training data) markedly improved cross-validation accuracy (Inline Supplementary Table S4, Inline Supplementary Table S5).
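The fold-wise t-test filter can be sketched as follows (synthetic data; the function name and the number of retained features are illustrative, not from the study). The key point is that features are selected from the training scans only, so the held-out fold never influences the filter:

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.model_selection import StratifiedKFold
from sklearn.svm import SVC

def filtered_cv_accuracy(F, L, n_keep=200, seed=0):
    """Stratified-10-fold CV; each fold keeps only the n_keep features with
    the largest |t| for ASD vs. TD, computed on the training data alone."""
    accs = []
    for train, test in StratifiedKFold(10, shuffle=True, random_state=seed).split(F, L):
        t, _ = ttest_ind(F[train][L[train] == 1], F[train][L[train] == 0])
        keep = np.argsort(-np.abs(t))[:n_keep]        # strongest group differences
        clf = SVC(kernel="linear").fit(F[train][:, keep], L[train])
        accs.append(clf.score(F[test][:, keep], L[test]))
    return float(np.mean(accs))

# On pure-noise data, accuracy should hover near chance (~0.5)
rng = np.random.default_rng(2)
F = rng.standard_normal((60, 1000))
L = np.repeat([0, 1], 30)
print(round(filtered_cv_accuracy(F, L), 2))
```

Selecting features on the full dataset before cross-validation would instead leak test information into training and inflate the accuracy estimate.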

Selection of most informative features

In order to determine which rs-fMRI features were most predictive of ASD or TD classification, we performed Recursive Feature Elimination (RFE) with stratified-10-fold cross-validation for the top-performing classifiers (see Appendix A.5 for further details). L2LR and L-SVM performed comparably in cross-validation (Table 2, Inline Supplementary Table S2, Inline Supplementary Table S3). Feature weights for these two algorithms were also highly correlated (r > .97; Appendix B and Inline Supplementary Fig. S2), allowing us to choose one method for feature selection. As the variance in feature weights across cross-validation folds was smaller for L-SVM (4.68 × 10−5) than for L2LR (1.47 × 10−2), we show only the feature weights associated with L-SVM (Fig. 1). The Power ROI set features (57 features, 0.164% of N; 85.10% accuracy) and Destrieux ROI set features (42 features, 0.322% of N; 87.71% accuracy) showed the highest discriminability. We did not use RFE to boost classification accuracy; the accuracies in this section simply measure the optimal feature subset's ability to discriminate the two classes and do not reflect overall cross-validation accuracy. This measure is more akin to classifier training accuracy and does not directly estimate how predictive these features would be in a novel dataset. For the behavioral classifiers, we found that the SRS social motivation, social cognition, and autism mannerisms sub-scale scores were the most predictive across all learning algorithms that weight features (Inline Supplementary Table S6). In addition, demographic features (age and IQ) were not predictive of diagnosis. In the next section, we characterize these highly informative regions and features at the level of interacting large-scale networks.
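The RFE step can be sketched as follows (synthetic data; the feature count echoes the 57-feature Power subset). As the text cautions, fitting RFE on all scans and then scoring the retained subset yields an optimistic, training-style accuracy rather than an unbiased estimate:

```python
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import SVC

# Synthetic stand-in for a Fisher-z connectivity feature matrix
rng = np.random.default_rng(3)
F = rng.standard_normal((60, 500))
L = np.repeat([0, 1], 30)

# Recursively drop the 10% of features with the smallest |weight| in a
# linear SVM until 57 features remain
rfe = RFE(SVC(kernel="linear"), n_features_to_select=57, step=0.1).fit(F, L)
subset = F[:, rfe.support_]

# Discriminability of the retained subset under stratified-10-fold CV
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
acc = cross_val_score(SVC(kernel="linear"), subset, L, cv=cv).mean()
print(subset.shape, round(float(acc), 2))
```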
Fig. 1

Optimal feature subsets chosen via recursive feature elimination (RFE) for each ROI set by L-SVM. The feature weights shown are the average weights from LOO cross-validation. Spheres are centered at each ROI's center of mass, and sphere radius represents the number of features coincident on that region. Comparing sphere radii across ROI sets is not advised given the differences in the number of regions, the number of features chosen, and the cross-validation accuracies stated in the text. Edge thickness indicates the absolute value of the feature weight in the L-SVM, and edge color indicates the sign of the feature. 'Hotter' edges indicate stronger connectivity in ASD individuals, while 'cooler' edges indicate stronger connectivity in TD individuals.

Inline Supplementary Fig. S2. Feature weights from the top-performing linear classifiers are highly correlated. The Spearman correlation coefficient for each pairwise comparison is shown above the scatter plots (p < 10−15).
Inline Supplementary Table S6. Magnitude of L2-normalized mean feature weights from LOO cross-validation for the behavioral classifiers. For each classifier, mean feature-weight magnitudes are computed across LOO cross-validation folds and scaled by the L2-norm of the resulting weight vector.

Functional networks associated with top features

In the DiMartino and Power ROI sets, but not the Destrieux ROI set, each ROI is associated with a functional network or task label. Di Martino et al. labeled ROIs as being derived from either social or non-social tasks and by direction of group differences, yielding 4 possible labels for each region. Power et al. labeled ROIs as belonging to 1 of 13 functional networks. In order to test whether functional networks, or connections between networks, were overrepresented in the optimal subset of features, we ran a χ² test with permutation testing (see Appendix A.6 for further details). No regions or connections across regions were significantly overrepresented in the DiMartino feature set. For the Power ROI set, regions designated as belonging to the 'default-mode network' and the 'frontal-parietal control network' were more prevalent in the optimal subset of features (p < .01) (Inline Supplementary Fig. S3). See Table 3 for a list of the most diagnostic ROIs from the Power ROI set.
Table 3

Ranking of regions from the Power ROI set by the sum of the absolute values of the RFE-chosen feature weights coincident on each region.

Rank | Talairach x | y | z | Region label | (10³) ∑|RFE feature weights| | Sign of feature weights to regionᵃ
1 | −46 | −60 | 16 | L posterior STS/TPJ | 31.61 | −
2 | 36 | 10 | 1 | R Insula | 18.77 | −
3 | −42 | 44 | 1 | L IFG | 17.23 | +
4 | 31 | 31 | 25 | R MFG | 15.58 | +
5 | 10 | −20 | 67 | R SMA | 13.97 | −
6 | 13 | −4 | 64 | R SMA | 13.71 | −
7 | −44 | 0 | 42 | L precentral gyrus | 13.35 | +
8 | −51 | −61 | 2 | L posterior MTG | 11.94 | −
9 | −42 | 36 | 21 | L MFG | 11.92 | −
10 | 5 | 21 | 35 | R ACC | 11.85 | +
11 | 33 | −53 | 38 | R IPS | 11.44 | ±
12 | 54 | −45 | 32 | R supramarginal gyrus | 11.44 | +
13 | −57 | −29 | −5 | L STS | 11.33 | −
14 | 36 | 21 | 4 | R anterior insula | 11.03 | −
15 | 27 | −35 | −13 | R parahippocampal gyrus | 10.91 | −
16 | 37 | −82 | 8 | R MOG | 10.79 | −
17 | −11 | −55 | 12 | L PCC | 10.52 | ±
18 | −46 | 31 | −9 | L IFG | 10.32 | −
19 | −3 | 23 | 42 | L dorsomedial prefrontal gyrus | 10.32 | +
20 | −3 | 40 | 17 | L ACC | 10.21 | +
21 | 6 | 65 | 0 | R ventromedial prefrontal cortex | 9.34 | +
22 | −52 | −49 | 37 | L supramarginal gyrus | 9.07 | −
23 | 11 | −66 | 35 | R precuneus | 8.96 | +
24 | 43 | −71 | 22 | R MOG | 8.78 | ±
25 | 46 | 17 | −24 | R temporal pole | 8.13 | −
26 | −28 | 49 | 22 | L SFG | 8.05 | +
27 | −20 | 42 | 38 | L SFG | 8.05 | +
28 | 55 | −45 | 8 | R posterior STS | 7.42 | −
29 | 57 | −51 | −14 | R posterior ITG | 6.80 | −
30 | 44 | −54 | 41 | R IPL | 6.62 | +
31 | 6 | 61 | 23 | R SFG | 6.36 | +
32 | 22 | −65 | 41 | R superior parietal lobe | 6.28 | −
33 | 22 | 36 | 38 | R MFG | 6.17 | +
34 | −35 | 17 | 48 | L SFG | 6.10 | +
35 | 51 | −1 | −14 | R MTG | 6.04 | −
36 | −55 | −49 | 7 | L MTG | 5.91 | −
37 | 65 | −9 | 23 | R precentral gyrus | 5.83 | −
38 | 27 | −57 | −10 | R fusiform gyrus | 5.78 | −
39 | −20 | 61 | 21 | L MFG | 5.75 | +
40 | 38 | 41 | 16 | R MFG | 5.64 | +
41 | −15 | −69 | −10 | L lingual gyrus | 5.60 | +
42 | −33 | −76 | −15 | L fusiform gyrus | 5.60 | +
43 | −17 | −60 | 56 | L superior parietal lobe | 5.56 | +
44 | 46 | −57 | 1 | R posterior MTG | 5.45 | −
45 | −8 | 45 | 23 | L medial frontal gyrus | 5.34 | +
46 | 49 | 8 | 0 | R Insula | 5.30 | −
47 | 48 | 21 | 10 | R IFG | 5.30 | −
48 | 46 | −45 | −17 | R fusiform gyrus | 5.28 | −
49 | 37 | −65 | 34 | R angular gyrus | 5.21 | +
50 | 52 | 32 | 3 | R IFG | 5.21 | +
51 | 53 | −43 | 18 | R TPJ | 5.18 | −
52 | 24 | 44 | −10 | R lateral orbitofrontal cortex | 5.17 | +
53 | 48 | 23 | 26 | R MFG | 5.15 | −
54 | −42 | −55 | 39 | L IPL | 5.07 | +
55 | −16 | −76 | 28 | L cuneus | 5.07 | +
56 | 26 | 47 | 27 | R SFG | 5.03 | −
57 | −38 | −33 | 14 | L posterior insula | 4.89 | −
58 | 51 | 8 | −25 | R temporal lobe | 4.89 | −
59 | 29 | −20 | 64 | R precentral gyrus | 4.78 | −
60 | 24 | −85 | 18 | R MOG | 4.77 | −
61 | −2 | −35 | 27 | L PCC | 4.73 | +
62 | −41 | 4 | 31 | L inferior frontal junction | 4.73 | +
63 | 8 | 47 | −10 | R ventromedial prefrontal gyrus | 4.71 | −
64 | −39 | −75 | 37 | L angular gyrus | 4.70 | +
65 | −53 | −24 | 38 | L postcentral gyrus | 4.67 | +
66 | 64 | −22 | −17 | R ITG | 4.67 | +
67 | −47 | −73 | −12 | L fusiform | 4.65 | +
68 | −40 | −85 | −9 | L inferior occipital gyrus | 4.65 | +
69 | −59 | −25 | 12 | L STG | 4.60 | +
70 | 29 | −76 | 19 | R MOG | 4.50 | −
71 | −42 | 23 | 29 | L MFG | 4.48 | +
72 | 53 | −29 | 30 | R IPL | 4.47 | −
73 | −10 | 51 | 39 | L SFG | 4.47 | −
74 | 17 | −76 | −32 | R cerebellum lobule VIIa | 4.41 | −
75 | −50 | −34 | 22 | L IPL | 4.36 | +
76 | 34 | 37 | −8 | R lateral orbitofrontal cortex | 4.36 | −
77 | 17 | −88 | −16 | R occipital pole | 4.22 | +
78 | 22 | −55 | −22 | R cerebellum lobule VI | 4.22 | +
79 | 6 | −71 | 19 | R cuneus | 3.81 | −
80 | 8 | −70 | 7 | R cuneus | 3.81 | −
81 | −3 | −1 | 49 | L SMA | 3.79 | −

IFG = inferior frontal gyrus, MFG = middle frontal gyrus, SFG = superior frontal gyrus, ITG = inferior temporal gyrus, MTG = middle temporal gyrus, STG = superior temporal gyrus, SMA = supplementary motor area, ACC = anterior cingulate cortex, PCC = posterior cingulate cortex, IPS = intraparietal sulcus, IPL = inferior parietal lobule, MOG = middle occipital gyrus.

ᵃ While the majority of regions were associated with feature weights of the same sign (indicated by + or −), some regions were associated with both positive and negative feature weights (indicated by ±).
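The network-overrepresentation analysis described in the previous section can be sketched as a count-based permutation test (a simplification of the χ² variant described in the text; the network labels and selected ROI indices below are hypothetical):

```python
import numpy as np

def network_overrepresentation(labels, selected, n_perm=10000, seed=0):
    """For each network label, count how many of the selected ROIs carry it,
    and estimate a p-value by drawing equally sized random ROI sets."""
    labels = np.asarray(labels)
    networks = np.unique(labels)
    observed = np.array([(labels[selected] == n).sum() for n in networks])
    rng = np.random.default_rng(seed)
    exceed = np.zeros(len(networks))
    for _ in range(n_perm):
        draw = rng.choice(len(labels), size=len(selected), replace=False)
        counts = np.array([(labels[draw] == n).sum() for n in networks])
        exceed += counts >= observed          # permuted count at least as extreme
    pvals = (exceed + 1) / (n_perm + 1)       # permutation p-value per network
    return dict(zip(networks, zip(observed, pvals)))

# Hypothetical 264-ROI labeling; suppose RFE features touch the first 30 ROIs
roi_networks = ["DMN"] * 60 + ["FPC"] * 40 + ["visual"] * 80 + ["motor"] * 84
result = network_overrepresentation(roi_networks, list(range(30)), n_perm=2000)
```

In this toy example all 30 selected ROIs are default-mode regions, so the DMN label is flagged as overrepresented while the unselected networks are not.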


Classifier results correlate with behavioral ratings

In this final section, we test whether classifier techniques are sensitive to symptom expression beyond a binary ASD vs. TD decision by assessing whether classifier outputs and the most informative features correlated with participants' SRS total sum scores. The probability estimate that a scan came from an ASD rather than a TD individual, derived from the fMRI connectivity data alone via LOO cross-validation, P(ASD|fMRI data), was highly correlated with subjects' SRS sum scores (Table 4). L-SVMs trained and tested on the Power ROI set achieved the highest correlation with the SRS sum score (r = .51, p = 2.79 × 10−8). Many top predictive features from the Power and Destrieux ROI sets also correlated strongly with SRS scores (Fig. 2). Correlation significance values were corrected for multiple comparisons (FDR (Benjamini and Hochberg, 1995) < .05). The Power ROI set yielded a higher proportion of predictive features relevant to subject behavior (56.14% of most predictive features) than did the Destrieux ROI set (28.57% of most predictive features).
Table 4

Spearman's correlation of classifier's probability of labeling a scan as ASD (P(ASD|fMRI data)) and participants' SRS scores.

ROI set | L2LR r | L2LR p | L-SVM r | L-SVM p
DiMartino | 0.3195 | <.001 | 0.3169 | <.001
Power | 0.4775 | ≪.001 | 0.5119 | ≪.001
Destrieux | 0.4187 | ≪.001 | 0.4928 | ≪.001
Fig. 2

The most predictive features from the Power and Destrieux ROI sets correlate with subjects' SRS sum scores. Spheres are centered at each ROI's center of mass, and sphere radius represents the number of significantly correlated features coincident on that region. Edge thickness indicates absolute value of the r-statistic. Edge color indicates the sign and magnitude of the r-statistic. Cooler colors indicate a negative correlation while warmer colors indicate a positive correlation. All correlations are significant at FDR < .05.
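This probability-severity analysis can be sketched as follows (synthetic data with a built-in group difference; the SRS scores and informative features are simulated, and the FDR step is a standard Benjamini-Hochberg implementation rather than the study's code):

```python
import numpy as np
from scipy.stats import spearmanr
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# Simulated cohort: 25 TD (0) and 25 ASD (1), a few informative features,
# and SRS-like scores that are higher in the ASD group
rng = np.random.default_rng(4)
L = np.repeat([0, 1], 25)
F = rng.standard_normal((50, 200))
F[:, :5] += 1.5 * L[:, None]
srs = 20 + 70 * L + rng.normal(0, 15, size=50)

# Held-out class-probability estimates, P(ASD | fMRI data), from LOO CV
p_asd = cross_val_predict(LogisticRegression(max_iter=1000), F, L,
                          cv=LeaveOneOut(), method="predict_proba")[:, 1]
rho, p = spearmanr(p_asd, srs)               # probability vs. symptom severity

# Benjamini-Hochberg FDR over per-feature Spearman correlations with SRS
pvals = np.array([spearmanr(F[:, j], srs)[1] for j in range(F.shape[1])])
order = np.argsort(pvals)
bh = 0.05 * np.arange(1, pvals.size + 1) / pvals.size
passed = pvals[order] <= bh
k = passed.nonzero()[0].max() + 1 if passed.any() else 0
significant = order[:k]                      # features surviving FDR < .05
```

Because the simulated group difference drives both the classifier probabilities and the SRS-like scores, the Spearman correlation comes out positive, mirroring the brain-behavior relationship reported in Table 4.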

Discussion

In this study, we addressed the impact of classifier algorithm and ROI set on classification accuracy of rs-fMRI applied to ASD. Contrary to our prediction that there would be little effect of ROI or classifier choice on classification performance, these factors substantially impacted classification accuracy. L2LR and L-SVMs were the most successful in classifying ASD individuals from TD individuals. Further, restricting analysis to loci of group differences in task-based studies of ASD populations greatly impaired accuracy. We also found that accuracy achieved using simple behavior metrics, the SRS, far exceeded accuracy achieved with rs-fMRI. Nonetheless, classification techniques applied to rs-fMRI data still had demonstrable utility. These data-driven methods identified aberrant connections in ASD participants, revealing connections within specific networks that were predictive of an ASD diagnosis and correlated with symptom expression.

Implications for rs-fMRI as a diagnostic ASD biomarker

Despite promising results using rs-fMRI data to classify ASD and TD individuals' scans, we exercise substantial caution in heralding this technique as a potential ASD biomarker at this time, for several reasons. The first reason is that rs-fMRI classification lacks the sensitivity and specificity of simple behavioral metrics (Table 1). We do not conclude that the SRS is a better "biomarker" than rs-fMRI metrics, because the SRS is itself a behavioral measure of social impairment validated against "gold-standard" ASD clinical measures. Rather, by comparing rs-fMRI and behavioral classifiers, we show that a set of highly informative features independent of the diagnostic criteria can distinguish individuals with high accuracy using the same statistical techniques. This procedure provides a more realistic classification benchmark than 100% accuracy. There is thus a gap in classification performance between brain and behavior for which we must account, and it remains when the rs-fMRI classifiers are reduced to only the most relevant features (Fig. 1 and Inline Supplementary Table S5). The observed discrepancy between brain-based and behavior-based classifiers does not appear to be specific to our cohort or methods: our best-performing brain-based classifier (peak accuracy = 77%) performed on par with the two single-site rs-fMRI classifiers in the literature (peak accuracies of 78% (Anderson et al., 2011) and 79% (Castelli et al., 2002)). The advantage of behavioral metrics may be due to methodological and measurement factors, including thermal scanner noise, persistent head motion artifacts, non-uniform signal quality across the brain, and classifier overfitting. More intriguingly, however, it may reflect the neurobiological heterogeneity of ASD: the wide range of symptom expression profiles subsumed under an ASD diagnosis may arise from neurobiological changes that are as variable as, or more variable than, the behavioral presentation itself, yielding an extremely difficult classification problem.
The second reason for our qualified excitement is the decline in classifier accuracy when algorithms are trained on data from multiple sites. Though our multi-site classifier (peak accuracy = 75%) substantially outperformed a previous multi-site classifier (peak accuracy = 60% (Belmonte et al., 2004)) (Inline Supplementary Tables S2 and S3), large univariate differences in connectivity strength exist between sites (Inline Supplementary Fig. S1), and further work is needed to isolate their causes (e.g., hardware, acquisition parameters, or cohort effects). The third reason is that rs-fMRI classification studies of autism do not perform at the level of ASD classification studies using other modalities (i.e., behavior (Williams et al., 2013), structural and diffusion MRI (Ecker et al., 2010; Uddin et al., 2011), and molecular and genetic screens (Hu and Lai, 2013)). Finally, prior studies and ours have focused on individuals older than the age of typical diagnosis (~2–6 years (Kleinman et al., 2008)) or the age at which reliable behavioral symptoms are detectable (~1–3 years (Bolton et al., 2012)). It is therefore unclear whether existing classifiers are merely detecting circuit-level consequences of living with ASD for many years. This possible age-dependence of rs-fMRI based classifiers may hinder the generalizability of a given classifier across age and underscores the need to evaluate developmental disorders in a developmental context (Dosenbach et al., 2010; Karmiloff-Smith, 2013). An ideal biomarker would be measurable prior to behavioral symptom onset (Yerys and Pennington, 2011) and able to distinguish developmental disorders of different etiologies.
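The leave-one-out scheme behind the accuracies discussed above, which also yields the per-scan P(ASD|fMRI data) estimates, can be sketched as follows. This is a minimal illustration on synthetic features using scikit-learn's L2-regularized logistic regression (one of the classifier types used in the paper); it is not the authors' pipeline, and the sizes and effect strength are toy values.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(1)
n, d = 40, 100                        # scans x connectivity features (toy sizes)
y = np.repeat([0, 1], n // 2)         # 0 = TD, 1 = ASD
X = rng.normal(size=(n, d)) + 0.4 * y[:, None]  # weak synthetic group effect

p_asd = np.empty(n)                   # out-of-sample P(ASD | fMRI data)
for train, test in LeaveOneOut().split(X):
    # Refit on all scans but one; predict the held-out scan only.
    clf = LogisticRegression(penalty="l2", C=1.0, max_iter=1000)
    clf.fit(X[train], y[train])
    p_asd[test] = clf.predict_proba(X[test])[:, 1]

accuracy = np.mean((p_asd > 0.5) == y)
```

Because every probability is computed for a scan the model never saw, `p_asd` can legitimately be correlated with behavioral scores afterwards.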

Functional networks disrupted in ASD individuals

Irrespective of the utility of rs-fMRI as a biomarker for ASD diagnosis, classifier methods remain valuable tools for investigating brain circuitry in ASD. We found that a dispersed set of connections was highly predictive of an ASD diagnosis. Including regions beyond those identified in meta-analyses of task-based fMRI studies increased classification accuracy (Table 1). This accuracy increase with increasing coverage is consistent with the idea that ASD involves disruptions of interacting large-scale brain networks (Gotts et al., 2012). While the anatomically-defined (Destrieux) and functionally-defined (Power) whole-brain ROI sets performed similarly with regard to classification, the latter yielded a higher proportion of features that correlated strongly with behavioral measures (Table 2 & Fig. 2). Functionally defined brain regions may therefore be essential for characterizing functional connectivity differences in ASD individuals. Network labels assigned to regions in the Power ROI set enabled us to investigate canonical functional networks showing atypical connectivity in ASD individuals. Fortuitously, the two best performing algorithms (L2LR and L-SVM) allow ready interpretation of feature weights, something that was not previously straightforward owing either to feature selection methods (Nielsen and Zielinski, 2013; Anderson et al., 2011) or to choice of classifier algorithm (Uddin et al., 2013). Connections involving the putative 'default mode' and 'frontal-parietal task control' networks were most predictive of ASD. These results are consistent with group-level functional connectivity analyses of high-functioning ASD populations documenting reduced correlation among regions of these networks in ASD (Just et al., 2012; Schipul et al., 2012; Kana et al., 2006), many of which are known to be involved in multiple aspects of social functioning (Frith and Frith, 2007; Adolphs, 2009).
Specifically, regions identified in the present study as most predictive of ASD included the insula, ventromedial prefrontal cortex, anterior, middle, and posterior regions of cingulate cortex, supplementary motor cortex, anterior temporal lobes, posterior aspects of the fusiform gyrus, posterior superior temporal sulcus, temporal parietal junction, intraparietal sulcus, and inferior and middle frontal gyri, bilaterally. These results also converge with other imaging and histological investigations of ASD. Structural MRI studies found cortical thickness and volume differences between ASD and TD individuals in several posterior temporal and parietal regions of these two networks (Wallace et al., 2010). A recent post-mortem study also found laminar disorganization in the posterior superior temporal cortex of children and adolescents with ASD (Stoner et al., 2014). Finally, atypical activity during theory of mind and mentalizing tasks has been observed in many of these regions (Castelli et al., 2002; Lombardo et al., 2010). Thus, the general pattern of findings is consistent with the hypothesis that ASD involves a fractionation of circuits that underlie social processing from other functional networks (Gotts et al., 2012).
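The interpretability of linear classifiers noted above comes from the fact that each weight attaches to exactly one ROI-pair connection, so ranking weights by magnitude recovers the most predictive connections. A minimal sketch, with made-up ROI names and weights (not the study's):

```python
import numpy as np

n_roi = 5
roi_names = [f"ROI_{i}" for i in range(n_roi)]          # illustrative labels
iu = np.triu_indices(n_roi, k=1)                        # unique ROI pairs

# One linear-model weight per connection (hypothetical values); the sign says
# whether stronger connectivity pushes the decision toward ASD or TD.
w = np.array([0.1, -0.8, 0.05, 0.3, -0.02, 0.6, 0.0, -0.4, 0.2, 0.07])

rank = np.argsort(-np.abs(w))                           # most predictive first
top = [(roi_names[iu[0][j]], roi_names[iu[1][j]], w[j]) for j in rank[:3]]
```

Mapping the flat feature index back to its ROI pair is what allows sphere-and-edge visualizations such as Fig. 2.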

Future directions

The aforementioned limitations of rs-fMRI as a diagnostic biomarker of ASD should not preclude further exploration of this imaging technique and of machine-learning methods. The approach may be well suited to predicting the trajectory of the disorder, identifying clinical subgroups on the basis of a functional connectivity phenotype, or identifying individuals who may be more responsive to treatment. Autism is a heterogeneous developmental disorder with a range of symptom expression profiles, and rs-fMRI may have a role in explicating the root of this heterogeneity. Autism studies using other modalities have already had success with techniques that may accomplish this task (Ingalhalikar et al., 2012; Hu and Lai, 2013). We anticipate that basic and clinical advances will result from studying functional brain networks with multivariate techniques.
Table S1

Demographic characteristics of ABIDE cohort.

                                  TD (N = 89)           ASD (N = 89)
                                  Mean      SD          Mean      SD
Age                               17.58     5.66        16.81     5.56
IQ                                110.24    12.68       104.3     12.89
ADOS: soc + comm                  —         —           11.89     3.34
SRS                               17.62     13.34       93.48     32.62
Whole brain tSNR                  355.58    129.64      366.75    188.36
Average head movement (per TR)    0.07      .031        0.077     .035
Table S2

Cross-validation performance for the combined in-house and ABIDE cohort using the DiMartino, Power, and Destrieux ROI sets.

LOO cross-validation

DiMartino ROI set
Classifier   Accuracy   Sensitivity   Specificity   PPV      NPV
RF           58.11      61.33         54.79         58.23    57.97
KNN          61.82      49.33         74.66         66.67    58.92
L-SVM        64.53      66.67         62.33         64.52    64.54
RBF-SVM      65.88      65.33         66.44         66.67    65.10
GNB          53.04      56.67         49.32         53.46    52.55
LDA          69.59      89.33         49.32         64.42    81.82
L1LR         67.23      65.33         69.18         68.53    66.01
L2LR         66.22      66.00         66.44         66.89    65.54
ENLR         66.55      66.67         66.44         67.11    65.99

Power ROI set
Classifier   Accuracy   Sensitivity   Specificity   PPV      NPV
RF           55.41      53.33         57.53         56.34    54.55
KNN          63.85      54.00         73.97         68.07    61.02
L-SVM        73.65      75.33         71.92         73.38    73.94
RBF-SVM      73.31      74.80         71.77         73.08    73.55
GNB          53.16      56.00         49.32         52.17    —
LDA          69.59      88.00         50.68         64.71    80.43
L1LR         72.30      68.67         76.03         74.64    70.25
L2LR         69.26      67.33         71.23         70.63    67.97
ENLR         67.22      65.33         69.18         68.53    66.01

Destrieux atlas
Classifier   Accuracy   Sensitivity   Specificity   PPV      NPV
RF           63.51      65.33         61.64         63.64    63.38
KNN          58.45      56.67         60.27         59.44    63.38
L-SVM        73.65      74.00         73.29         74.00    73.29
RBF-SVM      70.61      68.00         73.29         72.34    69.03
GNB          50.68      60.67         40.41         51.12    50.00
LDA          72.97      92.00         53.42         66.90    86.67
L1LR         73.69      70.67         76.71         75.71    71.79
L2LR         75.00      73.33         76.71         76.39    73.68
ENLR         69.26      66.67         71.92         70.92    67.77

Stratified 10-fold cross-validation

DiMartino ROI set
Classifier   Accuracy   Sensitivity   Specificity   PPV      NPV
L-SVM        68.62      69.33         67.90         68.75    68.79
L2LR         65.90      63.33         68.48         69.27    64.19

Power ROI set
L-SVM        69.62      69.33         69.90         70.91    69.98
L2LR         68.55      70.00         67.14         69.73    68.62

Destrieux atlas
L-SVM        69.87      70.00         69.90         70.73    70.26
L2LR         73.25      70.67         75.95         76.01    72.66

Stratified 3-fold cross-validation

DiMartino ROI set
Classifier   Accuracy   Sensitivity   Specificity   PPV      NPV
L-SVM        66.22      63.33         67.11         67.03    65.77
L2LR         67.23      66.00         68.52         68.35    66.35

Power ROI set
L-SVM        65.16      66.67         63.61         65.74    64.78
L2LR         67.57      65.33         69.88         69.10    66.31

Destrieux atlas
L-SVM        67.21      62.00         72.56         71.05    65.11
L2LR         71.27      68.67         73.92         73.21    69.90
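The Accuracy, Sensitivity, Specificity, PPV, and NPV columns in Tables S2 and S3 follow the standard confusion-matrix definitions, with ASD treated as the positive class. A quick reference, using illustrative counts for a balanced 59/59 cohort (not the study's actual confusion matrix, which is not reported at this granularity):

```python
def confusion_metrics(tp, fp, tn, fn):
    """Standard screening metrics; ASD is the positive class."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),   # fraction of ASD scans labeled ASD
        "specificity": tn / (tn + fp),   # fraction of TD scans labeled TD
        "ppv":         tp / (tp + fp),   # confidence in an ASD label
        "npv":         tn / (tn + fn),   # confidence in a TD label
    }

# Hypothetical counts: 45 of 59 ASD scans caught, 44 of 59 TD scans cleared.
m = confusion_metrics(tp=45, fp=15, tn=44, fn=14)
```

Note that PPV and NPV depend on the class balance of the cohort, which is why matched, balanced groups make these tables easier to compare across classifiers.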
Table S3

Cross-validation performance for the ABIDE cohort using the DiMartino, Power, and Destrieux ROI sets.

LOO cross-validation

DiMartino ROI set
Classifier   Accuracy   Sensitivity   Specificity   PPV      NPV
RF           63.48      67.03         59.77         63.54    63.41
KNN          53.93      34.07         74.71         58.49    52.00
L-SVM        67.98      72.53         63.22         67.35    68.75
RBF-SVM      70.22      71.43         68.97         70.65    69.77
GNB          58.43      65.93         50.57         58.25    68.67
LDA          65.17      64.84         65.52         66.29    64.04
L1LR         66.29      61.54         71.26         69.14    63.92
L2LR         69.10      73.63         64.37         68.37    70.00
ENLR         65.17      65.93         64.37         65.93    64.37

Power ROI set
Classifier   Accuracy   Sensitivity   Specificity   PPV      NPV
RF           56.18      58.24         54.02         56.99    55.29
KNN          52.25      35.16         70.11         55.17    50.83
L-SVM        69.10      70.33         67.82         69.57    68.60
RBF-SVM      59.55      60.44         58.62         60.44    58.62
GNB          55.06      67.03         42.53         54.95    55.22
LDA          57.87      62.64         52.87         58.16    57.50
L1LR         56.74      57.14         56.32         57.78    55.68
L2LR         65.17      61.54         68.97         67.47    63.16
ENLR         52.81      56.04         49.43         53.68    51.81

Destrieux atlas
Classifier   Accuracy   Sensitivity   Specificity   PPV      NPV
RF           48.31      49.45         47.13         49.45    47.13
KNN          56.74      36.26         78.16         63.46    53.97
L-SVM        70.79      71.43         70.11         71.43    70.11
RBF-SVM      66.85      68.13         65.52         67.39    66.28
GNB          53.93      64.84         42.53         54.13    53.62
LDA          59.55      63.74         55.17         59.80    59.26
L1LR         68.54      69.23         67.82         69.23    67.82
L2LR         71.35      70.33         72.41         72.73    70.00
ENLR         65.17      63.74         66.67         66.67    63.74

Stratified 10-fold cross-validation

DiMartino ROI set
Classifier   Accuracy   Sensitivity   Specificity   PPV      NPV
L-SVM        65.13      64.67         65.14         66.13    65.13
L2LR         69.71      72.44         66.67         69.76    70.72

Power ROI set
L-SVM        62.48      64.89         60.28         64.33    63.48
L2LR         62.39      58.44         66.81         65.52    60.74

Destrieux atlas
L-SVM        66.90      68.11         65.69         67.18    67.96
L2LR         67.94      66.11         70.00         69.92    67.65

Stratified 3-fold cross-validation

DiMartino ROI set
Classifier   Accuracy   Sensitivity   Specificity   PPV      NPV
L-SVM        61.24      63.87         58.62         61.59    62.44
L2LR         64.04      53.84         74.71         69.36    61.26

Power ROI set
L-SVM        60.69      61.47         59.77         62.11    59.57
L2LR         59.00      52.76         65.52         61.27    57.20

Destrieux atlas
L-SVM        64.06      63.84         64.37         65.13    63.37
L2LR         62.37      58.28         66.67         64.59    60.49
Table S4

Stratified 10-fold cross-validation accuracy using principal components analysis to reduce the dimensionality of the feature set.*

                     Number of principal components
                     1        5        10       50       100
NIH dataset:
DiMartino ROI set
L-SVM                51.59    60.23    62.50    70.30    69.24
L2LR                 58.41    68.48    68.79    70.38    69.47
Power ROI set
L-SVM                52.42    65.61    68.79    72.95    74.62
L2LR                 59.47    67.80    72.12    74.77    77.58
Destrieux ROI set
L-SVM                51.59    71.21    71.97    76.36    75.61
L2LR                 59.47    71.36    71.97    76.36    78.79

NIH & ABIDE dataset:
DiMartino ROI set
L-SVM                50.69    50.36    55.47    62.78    64.23
L2LR                 48.00    51.66    59.78    61.18    66.93
Power ROI set
L-SVM                50.69    56.43    57.70    63.86    70.25
L2LR                 47.30    58.09    58.10    64.46    65.17
Destrieux ROI set
L-SVM                50.69    51.37    59.09    66.22    66.53
L2LR                 46.99    57.43    61.52    67.26    66.87

* During each fold of cross-validation, principal components analysis is performed on the training data, and the testing data are projected onto the resulting components.
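The fold-wise procedure in the note above (fit PCA on the training fold only, then project the held-out fold) is exactly what a scikit-learn `Pipeline` does inside cross-validation, which also prevents test data from leaking into the dimensionality reduction. A toy sketch on synthetic data, not the study's pipeline:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic "connectivity" features: 60 scans x 500 features, weak group shift.
rng = np.random.default_rng(2)
y = np.repeat([0, 1], 30)                       # 0 = TD, 1 = ASD
X = rng.normal(size=(60, 500)) + 0.3 * y[:, None]

# PCA is refit on each training fold; the held-out fold is only transformed.
pipe = make_pipeline(PCA(n_components=10),
                     LogisticRegression(max_iter=1000))
scores = cross_val_score(pipe, X, y,
                         cv=StratifiedKFold(10, shuffle=True, random_state=0))
```

Fitting PCA once on the full dataset before splitting would let the held-out scans shape the components, inflating the cross-validated accuracies in Table S4.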

Table S5

Stratified 10-fold cross-validation accuracy using t-test filters to select features during training.*

                     Percent of total features kept in model
                     0.01%    0.10%    1%       5%       10%      25%      50%      75%
NIH dataset:
DiMartino ROI set
L-SVM                47.42    47.42    58.26    60.76    60.76    65.91    68.63    70.30
L2LR                 50.00    50.98    50.15    61.06    65.38    70.30    69.62    70.23
Power ROI set
L-SVM                59.09    58.26    67.05    67.88    69.39    74.39    73.63    74.70
L2LR                 61.74    68.64    66.89    72.20    72.05    72.20    74.70    73.86
Destrieux ROI set
L-SVM                54.92    60.76    74.47    72.88    69.39    70.98    75.53    75.15
L2LR                 55.00    66.06    69.62    71.06    73.71    79.70    73.81    77.12

NIH & ABIDE dataset:
DiMartino ROI set
L-SVM                50.69    50.69    50.69    50.69    50.69    64.18    61.92    63.79
L2LR                 50.00    54.72    61.89    63.17    60.02    65.24    66.83    66.28
Power ROI set
L-SVM                50.69    50.69    65.13    65.17    63.83    68.21    67.59    68.91
L2LR                 56.08    64.16    61.13    69.28    66.90    67.16    69.91    71.61
Destrieux ROI set
L-SVM                50.69    50.69    61.57    67.60    66.31    69.31    69.55    68.29
L2LR                 59.11    61.77    64.53    70.64    72.64    71.63    73.32    73.61

* During each fold of cross-validation, the training data are filtered to keep only the features showing the strongest ASD vs. TD differences (by t-statistic), and the classifier is tested on these features as well.
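The filter in the note above can likewise be nested inside each training fold so the held-out fold never influences feature selection. A sketch using scikit-learn's `SelectPercentile` with `f_classif` (an ANOVA F-test, equivalent to a squared two-sample t-test when there are two groups), on synthetic data with a small informative subset:

```python
import numpy as np
from sklearn.feature_selection import SelectPercentile, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
y = np.repeat([0, 1], 30)                 # 0 = TD, 1 = ASD
X = rng.normal(size=(60, 1000))
X[:, :20] += 0.8 * y[:, None]             # only 2% of features carry signal

# The F-test filter is refit on each training fold; keeping 5% of features
# mirrors one column of Table S5.
pipe = make_pipeline(SelectPercentile(f_classif, percentile=5),
                     LogisticRegression(max_iter=1000))
scores = cross_val_score(pipe, X, y,
                         cv=StratifiedKFold(10, shuffle=True, random_state=0))
```

Running the t-test filter on the full dataset before cross-validation is a classic leakage error; nesting it per fold, as in Table S5's note, keeps the accuracy estimate honest.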

Table S6

Magnitude of L2-normalized mean feature weights from LOO cross-validation for behavioral classifiers.*

Classifier   Age     IQ      SRS social   SRS social   SRS social      SRS social   SRS autism
                             awareness    cognition    communication   motivation   mannerisms
RF           .045    .051    .165         .498         .394            .567         .494
L-SVM        .031    .006    .324         .433         .431            .488         .532
L1LR         .092    .000    .354         .562         .287            .543         .416
L2LR         .072    .022    .223         .573         .257            .573         .471
EN           .046    .005    .117         .594         .280            .598         .441
Mean         .057    .017    .237         .532         .330            .554         .471

* For each classifier, the mean magnitude of feature weights is computed across LOO cross-validation folds and then scaled by the L2-norm of the resulting vector of feature weights.
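The normalization in the note above (mean |weight| per feature across LOO folds, then scaling by the L2-norm of that mean-weight vector) is a two-liner. The fold weights here are random placeholders, not the study's:

```python
import numpy as np

# Placeholder weights: one row per LOO fold, one column per behavioral feature
# (age, IQ, and the five SRS subscales in Table S6).
rng = np.random.default_rng(4)
fold_weights = rng.normal(size=(118, 7))

mean_abs = np.abs(fold_weights).mean(axis=0)       # mean |weight| per feature
normalized = mean_abs / np.linalg.norm(mean_abs)   # unit L2-norm, as in Table S6
```

Scaling to unit norm makes the columns comparable across classifiers, since each algorithm's raw weight magnitudes live on a different scale.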

References (10 of 36 shown)

1. Atypical diffusion tensor hemispheric asymmetry in autism.
Authors: Nicholas Lange; Molly B Dubray; Jee Eun Lee; Michael P Froimowitz; Alyson Froehlich; Nagesh Adluru; Brad Wright; Caitlin Ravichandran; P Thomas Fletcher; Erin D Bigler; Andrew L Alexander; Janet E Lainhart
Journal: Autism Res    Date: 2010-12-02

2. Salience network-based classification and prediction of symptom severity in children with autism.
Authors: Lucina Q Uddin; Kaustubh Supekar; Charles J Lynch; Amirah Khouzam; Jennifer Phillips; Carl Feinstein; Srikanth Ryali; Vinod Menon
Journal: JAMA Psychiatry    Date: 2013-08

3. Distinctive neural processes during learning in autism.
Authors: Sarah E Schipul; Diane L Williams; Timothy A Keller; Nancy J Minshew; Marcel Adam Just
Journal: Cereb Cortex    Date: 2011-07-01

4. Effective preprocessing procedures virtually eliminate distance-dependent motion artifacts in resting state fMRI.
Authors: Hang Joon Jo; Stephen J Gotts; Richard C Reynolds; Peter A Bandettini; Alex Martin; Robert W Cox; Ziad S Saad
Journal: J Appl Math    Date: 2013-05-21

5. Sentence comprehension in autism: thinking in pictures with decreased functional connectivity.
Authors: Rajesh K Kana; Timothy A Keller; Vladimir L Cherkassky; Nancy J Minshew; Marcel Adam Just
Journal: Brain    Date: 2006-07-10

6. The autism diagnostic observation schedule-generic: a standard measure of social and communication deficits associated with the spectrum of autism.
Authors: C Lord; S Risi; L Lambrecht; E H Cook; B L Leventhal; P C DiLavore; A Pickles; M Rutter
Journal: J Autism Dev Disord    Date: 2000-06

7. How do we establish a biological marker for a behaviorally defined disorder? Autism as a test case.
Authors: Benjamin E Yerys; Bruce F Pennington
Journal: Autism Res    Date: 2011-06-24

8. Social cognition in humans.
Authors: Chris D Frith; Uta Frith
Journal: Curr Biol    Date: 2007-08-21

9. Investigating the predictive value of whole-brain structural MR scans in autism: a pattern classification approach.
Authors: Christine Ecker; Vanessa Rocha-Rego; Patrick Johnston; Janaina Mourao-Miranda; Andre Marquand; Eileen M Daly; Michael J Brammer; Clodagh Murphy; Declan G Murphy
Journal: Neuroimage    Date: 2009-08-14

10. The autism brain imaging data exchange: towards a large-scale evaluation of the intrinsic brain architecture in autism.
Authors: A Di Martino; C-G Yan; Q Li; E Denio; F X Castellanos; K Alaerts; J S Anderson; M Assaf; S Y Bookheimer; M Dapretto; B Deen; S Delmonte; I Dinstein; B Ertl-Wagner; D A Fair; L Gallagher; D P Kennedy; C L Keown; C Keysers; J E Lainhart; C Lord; B Luna; V Menon; N J Minshew; C S Monk; S Mueller; R-A Müller; M B Nebel; J T Nigg; K O'Hearn; K A Pelphrey; S J Peltier; J D Rudie; S Sunaert; M Thioux; J M Tyszka; L Q Uddin; J S Verhoeven; N Wenderoth; J L Wiggins; S H Mostofsky; M P Milham
Journal: Mol Psychiatry    Date: 2013-06-18
