Characterizing Functional Connectivity Differences in Aging Adults using Machine Learning on Resting State fMRI Data.

Svyatoslav Vergun, Alok S Deshpande, Timothy B Meier, Jie Song, Dana L Tudorascu, Veena A Nair, Vikas Singh, Bharat B Biswal, M Elizabeth Meyerand, Rasmus M Birn, Vivek Prabhakaran.

Abstract

The brain at rest consists of spatially distributed but functionally connected regions, called intrinsic connectivity networks (ICNs). Resting state functional magnetic resonance imaging (rs-fMRI) has emerged as a way to characterize brain networks without confounds associated with task fMRI such as task difficulty and performance. Here we applied a Support Vector Machine (SVM) linear classifier as well as a support vector machine regressor to rs-fMRI data in order to compare age-related differences in four of the major functional brain networks: the default, cingulo-opercular, fronto-parietal, and sensorimotor. A linear SVM classifier discriminated between young and old subjects with 84% accuracy (p-value < 1 × 10−7). A linear SVR age predictor performed reasonably well in continuous age prediction (R2 = 0.419, p-value < 1 × 10−8). These findings reveal that differences in intrinsic connectivity as measured with rs-fMRI exist between subjects, and that SVM methods are capable of detecting and utilizing these differences for classification and prediction.

Keywords:  aging; reorganization; resting state fMRI; support vector machine

Year:  2013        PMID: 23630491      PMCID: PMC3635030          DOI: 10.3389/fncom.2013.00038

Source DB:  PubMed          Journal:  Front Comput Neurosci        ISSN: 1662-5188            Impact factor:   2.380


Introduction

Functional networks are defined by a temporal correlation of brain regions normally involved during a task and are observed when individuals are resting without performing a specific task (Biswal et al., 1995). Research efforts in functional magnetic resonance imaging (fMRI) are shifting focus from studying specific cognitive domains like vision, language, memory, and emotion to assessing individual differences in neural connectivity across multiple whole-brain networks (Thomason et al., 2011). Accordingly, an increasing number of studies are demonstrating the reproducibility and reliability of rs-fMRI data for studying functional connectivity of the human brain (Damoiseaux et al., 2006; Shehzad et al., 2009; Van Dijk et al., 2010; Zuo et al., 2010; Thomason et al., 2011; Song et al., 2012). Simultaneously, the use of machine learning techniques for analyzing fMRI data has grown in popularity. In particular, Support Vector Machines (SVMs) have become widely used due to their ability to handle very high-dimensional data and their classification and prediction accuracy (Schölkopf and Smola, 2002; Ben-Hur and Weston, 2010; Meier et al., 2012). Various fMRI data analysis methods are currently in use, including seed-based analysis, independent component analysis (ICA), and graph theory methods; in this work we chose SVMs because, unlike the others, they can classify and predict individual scans and output the features most relevant to those decisions. A growing number of studies have shown that machine learning tools can extract exciting new information from neuroimaging data (see Haynes and Rees, 2005; Norman et al., 2006; Cohen et al., 2011 for selective reviews). With task-based fMRI data, LaConte et al. (2007) observed 80% classification accuracy in real-time brain state prediction using a linear kernel SVM on whole-brain, block-design motor data, and Poldrack et al. (2009) achieved 80% accuracy in predicting which of eight different cognitive tasks an individual performed using a multi-class SVM (mcSVM) method. Resting state fMRI data have also proven viable for classification and prediction. Craddock et al. (2009) used resting state functional connectivity MRI (rs-fcMRI) data to distinguish individuals with major depressive disorder from healthy controls with 95% accuracy, using a linear classifier with a reliability filter for feature selection. Supekar et al. (2009) classified individuals as children or young adults with 90% accuracy using a SVM classifier. Shen et al. (2010) achieved 81% accuracy in discriminating schizophrenic patients from healthy controls using a SVM classifier, and 92% accuracy using a C-means clustering classifier with locally linear embedding (LLE) feature selection. Dosenbach et al. (2010), using a SVM method, achieved 91% accuracy in classifying individuals as children or adults, and also predicted the functional maturity of each participant's brain using support vector machine regression (SVR). One advantage of resting state data over task-based data is that its acquisition is not constrained by task difficulty and performance, which opens studies to a potentially larger group of subjects who are unable to perform tasks (e.g., Alzheimer's disease patients, patients with severe stroke). Much progress has been made in describing typical and atypical brain activity at the group level with fMRI, but determining whether single fMRI scans contain enough information to classify and make predictions about individuals remains a critical challenge (Dosenbach et al., 2010). Our method builds on the classification and prediction of individual scans using multivariate pattern recognition algorithms, adding to this currently novel domain in the literature.
We describe a classification and regression method implemented on aging adult rs-fcMRI data using SVMs, extracting the relevant features, and building on the SVM/SVR studies of children to middle-aged subjects (Dosenbach et al., 2010) and aging adults (Meier et al., 2012). SVM has been applied to a wide range of datasets, but only recently to neuroimaging fMRI data, and to resting fMRI data in particular; this work expands the still relatively new literature on resting fMRI based classification and prediction. Our objective was to investigate the ability of a SVM classifier to discriminate between individuals with respect to age, and the ability of a SVR predictor to determine individuals' age, using only functional connectivity MRI data. Beyond binary SVM classification and SVR prediction, our work investigates multi-class classification and linear weights for evaluating feature importance in healthy aging adults.

Materials and Methods

Participants

Resting state data for 65 individuals (three scans each) were obtained from the ICBM dataset made freely accessible online by the 1000 Connectome Project. Each contributor’s respective ethics committee approved submission of the de-identified data. The institutional review boards of NYU Langone Medical Center and New Jersey Medical School approved the receipt and dissemination of the data (Biswal et al., 2010).

Data sets

The analyses described in this work were performed on two data sets contained in the ICBM set. The same preprocessing algorithms were applied to both sets of data. Data set 1 consisted of 52 right-handed individuals (age 19–85, mean 44.7, 23M/29F). This was the binary SVM set (both for age and gender classification) which contained a young group of 26 subjects (age 19–35, mean 24.7, 12M/14F) and an old group of 26 subjects (age 55–85, mean 64.7, 11M/15F). Data set 2 consisted of 65 right-handed individuals (ages 19–85, mean 44.9, 32M/33F). This was the mcSVM set as well as the SVR age prediction set. It contained three age groups used for mcSVM: a young group of 28 subjects (age 19–37, mean 25.5, 14M/14F), a middle-aged group of 22 subjects (age 42–60, mean 52.4, 11M/11F), and an old group of 15 subjects (age 61–85, mean 69.9, 7M/8F).

Data acquisition

Resting data were acquired with a 3.0 Tesla scanner using an echo planar imaging (EPI) pulse sequence. Three resting state scans were obtained for each participant, each consisting of 128 continuous resting state volumes (TR = 2000 ms; matrix = 64 × 64; 23 axial slices). Scans 1 and 3 had an acquisition voxel size = 4 mm × 4 mm × 5.5 mm, while scan 2 had an acquisition voxel size = 4 mm × 4 mm × 4 mm. All participants were asked to keep their eyes closed during the scan. For spatial normalization and localization, a T1-weighted anatomical image was acquired using a magnetization prepared gradient echo sequence (MP-RAGE, 160 sagittal slices, voxel size = 1 mm × 1 mm × 1 mm).

Data preprocessing

Data were preprocessed using AFNI (version AFNI_2009_12_31_1431), FSL (version 4.1.4), and the NITRC 1000 functional connectome preprocessing scripts made freely available online (version 1.1) (Neuroimaging Informatics Tools and Resources Clearinghouse (NITRC), 2011). Initial preprocessing using AFNI consisted of (1) slice time correction for interleaved acquisition using Fourier-space time series phase-shifting, (2) motion correction of time series by aligning each volume to the mean image using Fourier interpolation, (3) skull stripping, and (4) extraction of the eighth image for use in registration. Preprocessing using FSL consisted of (5) spatial smoothing using a Gaussian kernel of full-width half maximum = 6 mm, and (6) grand-mean scaling of the voxel values. The data were then temporally filtered (0.005–0.1 Hz) and detrended to remove linear and quadratic trends using AFNI. A mask of preprocessed data for each person was generated.

Nuisance signal regression

Nuisance signal [white matter, cerebrospinal fluid (CSF) and six motion parameters] was then removed from the preprocessed fMRI data. White matter and CSF masks were created using FSL by the segmentation of each individual’s structural image. These masks were then applied to each volume to remove the white matter and CSF signal. Following the removal of these nuisance signals, functional data were then transformed into Montreal Neurological Institute 152 (MNI152-brain template; voxel size = 3 mm × 3 mm × 3 mm) space using a two-step process. First a 6 degree-of-freedom affine transform was applied using FLIRT (Smith et al., 2004) to align the functional data into anatomical space. Then, the anatomical image was aligned into standard MNI space using a 12 degree-of-freedom affine transform implemented in FLIRT. Finally, the resulting transform was then applied to each subject’s functional dataset.

ROI based functional connectivity

One hundred functionally defined regions of interest (ROIs) encompassing the default mode, cingulo-opercular, fronto-parietal, and sensorimotor networks (see Figure 1) were selected in agreement with previous studies by Dosenbach et al. (2010) and Meier et al. (2012). Each ROI was defined by a sphere (radius = 5 mm) centered about a three-dimensional point with coordinates reported in MNI space.
Figure 1

Functional ROIs used in the study. Each ROI is spherical with a 5 mm radius.

Average resting state blood oxygenation level dependent (BOLD) time series for each ROI were extracted. The BOLD time series for each ROI were then correlated with the BOLD time series of every other ROI (Pearson’s correlation) for every subject and every scan. This resulted in a square (100 × 100) symmetric matrix of correlation coefficients for each scan, but only 4950 ROI-pair correlation values from the lower triangular part of the matrix were retained (redundant elements and diagonal elements were excluded). These were then z-transformed (Fisher’s z transformation) for normalization. These 4950 values of the functional connectivity matrix were subsequently used as features in the SVM and SVR methods. Figure 2 shows a series of steps in a representative pipeline of the classification method.
Figure 2

Pipeline of the classification method.

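The ROI-to-feature step described above can be sketched in a few lines. This is an illustrative NumPy version (not the authors' AFNI/FSL/MATLAB pipeline), with synthetic data standing in for one scan's 100 ROI time series:

```python
import numpy as np

def connectivity_features(roi_timeseries):
    """Turn ROI time series into a connectivity feature vector.

    roi_timeseries: array of shape (n_timepoints, n_rois), e.g. (128, 100).
    Returns the Fisher z-transformed lower-triangular Pearson correlations:
    n_rois * (n_rois - 1) / 2 values, i.e. 4950 for 100 ROIs.
    """
    r = np.corrcoef(roi_timeseries.T)      # (n_rois, n_rois) correlation matrix
    il = np.tril_indices_from(r, k=-1)     # strictly lower triangle (no diagonal)
    return np.arctanh(r[il])               # Fisher z transform for normalization

# Synthetic stand-in for one scan: 128 volumes x 100 ROIs
rng = np.random.default_rng(0)
scan = rng.standard_normal((128, 100))
features = connectivity_features(scan)
print(features.shape)                      # (4950,)
```

One such 4950-element vector per scan is what enters the SVM/SVR stage.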

Support vector machine classification and regression

The SVM is a widely used classification method due to its favorable characteristics of high accuracy, ability to deal with high-dimensional data, and versatility in modeling diverse sources of data (Schölkopf et al., 2004). We chose this method of classification due to its sensitivity, resilience to overfitting, ability to extract and interpret features, and recent history of impressive neuroimaging results (Mitchell et al., 2008; Soon et al., 2008; Johnson et al., 2009; Dosenbach et al., 2010; Schurger et al., 2010; Meier et al., 2012). A SVM is an example of a linear two-class classifier, which is based on a linear discriminant function:

f(x) = ⟨w, x⟩ + b.

The vector w is the weight vector, b is called the bias and x_i is the i-th example in the dataset. In our study we have a dataset of n examples, each of p retained features, x_i ∈ ℝ^p, where n is the number of subjects and p is the number of retained ROI-pair correlation values after t-test filtering. Each example x_i has a user-defined label y_i = +1 or −1, corresponding to the class that it belongs to. In this work the binary participant classes are young vs. old and male vs. female subjects. A brief description of the SVM optimization problem is given here; more detailed treatments can be found in Vapnik (1995) and Schölkopf and Smola (2002). For linearly separable data, a hard margin SVM classifier is a discriminant function that maximizes the geometric margin, which leads to the following constrained optimization problem:

minimize_{w, b} (1/2)‖w‖²  subject to  y_i(⟨w, x_i⟩ + b) ≥ 1 for all i.

In the soft margin SVM (Cortes and Vapnik, 1995), where misclassification and non-linearly separable data are allowed, the problem constraints can be modified to:

y_i(⟨w, x_i⟩ + b) ≥ 1 − ξ_i,

where the ξ_i ≥ 0 are slack variables that allow an example to be in the margin (0 ≤ ξ_i ≤ 1), or to be misclassified (ξ_i > 1).
The optimization problem, with an additional term that penalizes misclassification and within-margin examples, becomes:

minimize_{w, b, ξ} (1/2)‖w‖² + C Σ_i ξ_i  subject to  y_i(⟨w, x_i⟩ + b) ≥ 1 − ξ_i, ξ_i ≥ 0.

The constant C > 0 allows one to control the relative importance of maximizing the margin versus minimizing the amount of slack. This can be represented in a dual formulation in terms of variables α_i (Cortes and Vapnik, 1995):

maximize_α Σ_i α_i − (1/2) Σ_i Σ_j y_i y_j α_i α_j ⟨x_i, x_j⟩  subject to  0 ≤ α_i ≤ C, Σ_i y_i α_i = 0.

The dual formulation leads to an expansion of the weight vector in terms of input data examples:

w = Σ_i y_i α_i x_i.

The examples x_i for which α_i > 0 lie on or within the margin and are called support vectors. The discriminant function then becomes:

f(x) = Σ_i y_i α_i ⟨x_i, x⟩ + b.

The dual formulation of the optimization problem depends on the data only through dot products. This dot product can be replaced with a non-linear kernel function, k(x, x_j), enabling margin separation in the feature space of the kernel. Using a different kernel, in essence, maps the example points x_i into a new high-dimensional space (with the dimension not necessarily equal to the dimension of the original feature space). The discriminant function becomes:

f(x) = Σ_i y_i α_i k(x_i, x) + b.

Some commonly used kernels are the polynomial kernel and the Gaussian kernel. In this work we used a linear kernel and a Gaussian kernel, which is also called a radial basis function (RBF):

k(x, x′) = exp(−‖x − x′‖² / (2σ²)).

We tuned the value of C using a holdout subset of the respective dataset. Soft margin binary SVM classification was carried out using the Spider Machine Learning environment (Weston et al., 2005) as well as custom scripts run in MATLAB (R2010a; MathWorks, Natick, MA, USA). Multi-class classification was also carried out using the Spider Machine Learning environment (Weston et al., 2005), utilizing an algorithm developed by Weston and Watkins (1998) that considers all data at once and solves a single optimization problem. With some datasets, higher classification accuracies can be obtained with the use of non-linear discriminating boundaries (Ben-Hur and Weston, 2010).
Using a different kernel maps the data points into a new high-dimensional space, and in this space the SVM discriminating hyperplane is found. Consequently, in the original space, the discriminating boundary will not be linear. All SVM classification and SVR prediction in this work used a linear kernel or a non-linear RBF kernel. Drucker et al. (1997) extended the SVM method to include SVM regression (SVR) in order to make continuous real-valued predictions. SVR retains some of the main features of SVM classification, but in SVM classification a penalty is observed for misclassified data points, whereas in SVR a penalty is observed for points too far from the regression line in high-dimensional space (Dosenbach et al., 2010). Epsilon-insensitive SVR defines a tube of width ε, which is user defined, around the regression line in high-dimensional space. Any points within this tube carry no loss. In essence, SVR performs linear regression in high-dimensional space using epsilon-insensitive loss. The C parameter in SVR controls the trade-off between how strongly points beyond the epsilon-insensitive tube are penalized and the flatness of the regression line (larger values of C allow the regression line to be less flat) (Dosenbach et al., 2010). SVR predictions described in this work used epsilon-insensitive SVRs carried out in The Spider Machine Learning environment (Weston et al., 2005), as well as custom scripts run in MATLAB (R2010a; MathWorks, Natick, MA, USA). The parameters C and ε were tuned using a holdout subset of the respective dataset.
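The classification and regression machinery described above can be illustrated with scikit-learn's SVC and SVR (a modern stand-in for the Spider/MATLAB environment the study used); the data here are synthetic, with assumed shapes:

```python
import numpy as np
from sklearn.svm import SVC, SVR

rng = np.random.default_rng(1)
X = rng.standard_normal((60, 300))   # 60 scans x 300 retained connectivity features
age = rng.uniform(19, 85, size=60)   # synthetic ages
y = np.where(age < 45, -1, 1)        # binary young (-1) / old (+1) labels

# Soft margin linear SVM: the weight vector w is exposed as clf.coef_,
# the bias b as clf.intercept_
clf = SVC(kernel="linear", C=0.1).fit(X, y)

# Epsilon-insensitive SVR for continuous age prediction
reg = SVR(kernel="linear", C=1.0, epsilon=0.1).fit(X, age)

w = clf.coef_.ravel()                # one weight per feature
pred_age = reg.predict(X)
```

With `kernel="rbf"` in place of `"linear"` the same classes implement the Gaussian-kernel variants, at the cost of losing the interpretable per-feature weight vector.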

Cross validation

We used leave-one-out cross-validation (LOOCV) to estimate the SVM classification and SVR prediction accuracy, since it gives a nearly unbiased estimate of test error (Hastie et al., 2001). In LOOCV the same dataset can be used for both training and testing of the classifier. The SVM parameters (C and the number of top features) were tuned using a holdout set with LOOCV. In a round, or fold, of LOOCV, one example from the example set is left out and used as the entire testing set, while the remaining examples are used as the training set; each example is left out exactly once, so the number of folds equals the number of examples. In our work, LOOCV was performed across participants, not scans, so all three scans of a participant were removed in each fold and used only in the testing set to avoid “twinning” bias.
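The participant-grouped leave-one-out scheme (all three scans of the held-out participant leave together) can be sketched with scikit-learn's LeaveOneGroupOut; sizes and names below are illustrative, not the study's:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

rng = np.random.default_rng(2)
n_subjects, n_scans = 10, 3
X = rng.standard_normal((n_subjects * n_scans, 50))
y = np.repeat(np.r_[np.ones(5), -np.ones(5)], n_scans)  # one label per subject
groups = np.repeat(np.arange(n_subjects), n_scans)      # subject id per scan

n_folds, correct = 0, 0
for train, test in LeaveOneGroupOut().split(X, y, groups):
    # All 3 scans of one participant are held out together ("twinning" control)
    clf = SVC(kernel="linear", C=0.1).fit(X[train], y[train])
    correct += int((clf.predict(X[test]) == y[test]).sum())
    n_folds += 1
accuracy = correct / len(y)
```

A plain per-scan LOOCV would let two scans of the test participant sit in the training set, inflating the accuracy estimate; grouping by subject prevents that.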

T-test and correlation filter

During each SVM LOOCV fold, two-sample t-tests (not assuming equal variance) were run on every feature of the two classes of the training set, and the features with the highest absolute t-statistics were selected for use in the classifier (the number retained was chosen to maximize accuracy). Analogously, during each SVR LOOCV fold, the correlation between each feature and the independent variable (age) was computed, and the features with the highest absolute correlation values were selected for use in the predictor.
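The per-fold filter can be sketched with SciPy's Welch t-test; `top_k_by_ttest` is an illustrative helper, not code from the study, and in practice it must be applied inside each fold on training data only:

```python
import numpy as np
from scipy.stats import ttest_ind

def top_k_by_ttest(X_train, y_train, k):
    """Indices of the k features with the largest absolute two-sample
    t-statistic (unequal variances) between the two training classes."""
    t, _ = ttest_ind(X_train[y_train == 1], X_train[y_train == -1],
                     equal_var=False)
    return np.argsort(np.abs(t))[::-1][:k]

# Synthetic demo: plant group differences in the first 5 of 200 features
rng = np.random.default_rng(3)
X = rng.standard_normal((40, 200))
y = np.r_[np.ones(20), -np.ones(20)]
X[y == 1, :5] += 2.0
idx = top_k_by_ttest(X, y, k=10)
```

The SVR analogue simply replaces the t-statistic with the absolute correlation between each feature and age.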

SVM and SVR feature weights

One important aspect of SVM and SVR is determining which features in the model are most significant for classification and prediction. For the linear kernel SVM and SVR, the individual feature weights reveal each feature’s relative importance and contribution to the classification or prediction. In the linear kernel SVM and SVR method, each node’s (ROI’s) significance, as opposed to each feature’s significance, was directly proportional to the sum of the weights of the connections to and from that node.
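Summing the connection weights onto each ROI endpoint can be written directly; this sketch assumes the same lower-triangle feature ordering as the connectivity extraction:

```python
import numpy as np

def node_weights(feature_weights, n_rois=100):
    """Fold per-connection weights back onto the ROIs they touch: each
    node's weight is the sum of |w| over connections to and from it."""
    W = np.zeros((n_rois, n_rois))
    W[np.tril_indices(n_rois, k=-1)] = np.abs(feature_weights)
    W = W + W.T                 # symmetrize: a connection touches both endpoints
    return W.sum(axis=1)        # one weight per node (ROI)

# Sanity check: with unit weights, every one of 100 nodes touches 99 edges
nw = node_weights(np.ones(4950))
```
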

Feature and node visualization

Feature connections and nodes were visualized using BrainNet Viewer (Version 1.1).

Parameter tuning

Dosenbach et al. (2010) chose C = 1 and top features = 200 for their SVM method, and ε = 0.00001 and top features = 200 for their SVR method, since previous work on a subset of the data revealed that these values provided the highest accuracy. Our functional connectivity features used 100 ROIs instead of 160, resulting in a different feature space from the one used in the aforementioned study. To tune our SVM parameters for our feature space, we selected a randomly chosen subset (a holdout set) of the respective dataset and chose parameters that maximized classification accuracy and prediction performance on this set. A holdout set of 20 randomly chosen subjects was used to tune the SVM age and gender classification parameters. We limited ourselves to fewer than 1000 features for two reasons: previous work (Dosenbach et al., 2010) achieved highest accuracy with features on the order of 100, and this order provides a suitable number of features for characterizing the most relevant brain networks. A grid-search-like method (Hsu et al., 2010) was performed over the number of top features, ranging from 20 to 300, to output accuracy as a function of the number of top features and C (see Figure 3). The number of features and value of C that maximized accuracy were used in the total dataset SVM method.
Figure 3

A grid search plot of the hold out set linear SVM age classifier accuracy, as a function of the number of top features and C. Accuracy peaks at 80% for top features retained = 100 and C = 0.1.

A similar procedure was followed for the SVR method. A holdout set of 25 randomly chosen subjects was used to tune the SVR age prediction parameters. First, the slope (of a linear regression line fitting the predicted age) as a function of top features was computed to reveal a peak performance area. Then, slope as a function of the number of features and ε was output with a grid search method. The number of features and value of ε that maximized the slope and R2 were used in the total dataset SVR method, where R2 (in this simple linear regression model) is the squared correlation between the predicted and true age. The slope and R2 of a regression line were chosen as measures of performance since a perfect predictor would produce a regression line with slope and R2 equal to one; the closer the slope and R2 approached one, the better the predictor was considered to be.
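A grid search of this kind amounts to evaluating every (C, top features) pair on the holdout set and keeping the best. A minimal sketch with synthetic data (assumed sizes, and applying the t-test filter once on the holdout rather than per fold, as a tuning-time simplification):

```python
import numpy as np
from itertools import product
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)
X = rng.standard_normal((20, 500))          # synthetic holdout set: 20 subjects
y = np.r_[np.ones(10), -np.ones(10)]
X[y == 1, :30] += 1.0                       # plant an age-like group effect

def holdout_accuracy(C, k):
    t, _ = ttest_ind(X[y == 1], X[y == -1], equal_var=False)
    cols = np.argsort(np.abs(t))[::-1][:k]  # t-test filter: keep top-k features
    return cross_val_score(SVC(kernel="linear", C=C), X[:, cols], y, cv=5).mean()

# Evaluate the full grid, then pick the maximizing pair
grid = {(C, k): holdout_accuracy(C, k)
        for C, k in product([0.01, 0.1, 1.0], [20, 100, 300])}
best_C, best_k = max(grid, key=grid.get)
```

Plotting `grid` as a surface over (k, C) reproduces the kind of accuracy landscape shown in Figure 3.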

Results

Support vector machine

The binary SVM classifier, using a linear kernel, was able to significantly discriminate between young and old subjects with 84% accuracy (p-value < 1 × 10−7, binomial test). Chance performance of the classifier would have yielded an accuracy of 50% (the null hypothesis). Therefore, we treated each fold of the LOOCV as a Bernoulli trial with a success probability of 0.5, as specified by Pereira et al. (2009). The p-value is then calculated using the binomial distribution with n trials (n = number of subjects) and probability of success equal to 0.5 as follows: p-value = Pr(X ≥ number of correct classifications), where X is the binomially distributed random variable. The linear kernel SVM classifier outperformed the RBF kernel SVM classifier with this dataset and a comparison of the two classifiers is given in Table 1. Figure 3 shows how the linear SVM classification accuracy varied with the number of top features retained in the t-test filter as well as how the accuracy varied as a function of the C parameter. The RBF SVM accuracy was 81% with 62 top features retained and C = 1.
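The binomial significance computation can be reproduced with SciPy. The count of 44 correct out of 52 below is an assumed example consistent with ~84% accuracy, not a number reported by the study:

```python
from scipy.stats import binom

n_subjects, n_correct = 52, 44   # assumed counts, ~84% LOOCV accuracy
# Each LOOCV fold is treated as a Bernoulli trial with chance success 0.5,
# so the p-value is the upper binomial tail Pr(X >= n_correct).
p_value = binom.sf(n_correct - 1, n_subjects, 0.5)
```

`binom.sf(k - 1, n, p)` gives Pr(X ≥ k) for a Binomial(n, p) variable, which is exactly the one-sided test described in the text.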
Table 1

A comparison of the two kernel classifiers used for age classification.

Classifier | Accuracy (%) | Top features retained | C
Linear SVM | 84 | 100 | 0.1
RBF SVM | 81 | 62 | 1.0

Listed are the accuracy, number of top features retained, and the value of C.

Of the 100 total features retained per fold, 63 were present in every fold and these are called the consensus features. Table 2 lists the consensus features and their relative weights or contributions to the classifier; they are also represented in Figure 4. A summation of all of the weights of the connections from each node was performed and the node weights are listed in Table 3 and represented in Figure 5.
Table 2

List of the 63 consensus features, their node connections and weights for the linear SVM classifier.

Feature index | SVM feature number | ROI 1 | ROI 2 | Weight
1 | 632 | L_precentral_gyrus_3 | L_vent_aPFC | 0.3119
2 | 1037 | L_sup_frontal | R_sup_frontal | 0.4479
3 | 1038 | M_ACC_2 | R_sup_frontal | 0.2472
4 | 1047 | L_basal_ganglia_1 | R_sup_frontal | 0.1405
5 | 1048 | M_mFC | R_sup_frontal | 0.203
6 | 1231 | R_pre_SMA | M_ACC_1 | 0.0986
7 | 1233 | M_SMA | M_ACC_1 | 0.1508
8 | 1727 | R_vFC_2 | R_vFC_1 | 0.121
9 | 1732 | L_mid_insula_1 | R_vFC_1 | 0.2313
10 | 1795 | M_mFC | R_ant_insula | 0.0542
11 | 1950 | M_mFC | L_ant_insula | 0.1294
12 | 2110 | L_vFC_3 | L_basal_ganglia_1 | 0.1074
13 | 2183 | R_basal_ganglia_1 | M_mFC | 0.0652
14 | 2301 | L_post_cingulate_1 | R_frontal_1 | 0.0016
15 | 2311 | R_precuneus_3 | R_frontal_1 | 0.1118
16 | 2314 | R_post_cingulate | R_frontal_1 | 0.0027
17 | 2315 | L_precuneus_2 | R_frontal_1 | 0.0074
18 | 2441 | R_precuneus_1 | R_dFC_2 | 0.3302
19 | 2509 | L_precuneus_1 | R_dFC_3 | 0.0548
20 | 2511 | R_precuneus_1 | R_dFC_3 | 0.3977
21 | 2542 | M_SMA | L_dFC | 0.1668
22 | 2551 | R_precentral_gyrus_3 | L_dFC | 0.029
23 | 2605 | L_basal_ganglia_2 | L_vFC_2 | 0.2421
24 | 2606 | R_basal_ganglia_1 | L_vFC_2 | 0.1719
25 | 2618 | L_precentral_gyrus_2 | L_vFC_2 | 0.1803
26 | 2884 | L_mid_insula_2 | R_pre_SMA | 0.0787
27 | 2887 | R_mid_insula_2 | R_pre_SMA | 0.0787
28 | 2908 | L_precuneus_1 | R_pre_SMA | 0.112
29 | 2935 | M_SMA | R_vFC_2 | 0.0752
30 | 2989 | R_post_cingulate | R_vFC_2 | 0.0487
31 | 3033 | L_precuneus_1 | M_SMA | 0.1055
32 | 3094 | L_precuneus_1 | R_frontal_2 | 0.0269
33 | 3256 | L_parietal_5 | L_mid_insula_1 | 0.1804
34 | 3277 | R_precuneus_2 | L_mid_insula_1 | 0.0604
35 | 3298 | L_parietal_1 | L_precentral_gyrus_1 | 0.1927
36 | 3328 | L_precuneus_1 | L_precentral_gyrus_1 | 0.0331
37 | 3330 | R_precuneus_1 | L_precentral_gyrus_1 | 0.1669
38 | 3357 | R_precentral_gyrus_3 | L_parietal_1 | 0.1524
39 | 3367 | L_parietal_4 | L_parietal_1 | 0.1008
40 | 3368 | R_parietal_1 | L_parietal_1 | 0.0787
41 | 3376 | R_parietal_3 | L_parietal_1 | 0.021
42 | 3379 | L_parietal_7 | L_parietal_1 | 0.0593
43 | 3546 | L_precuneus_1 | R_precentral_gyrus_3 | 0.0535
44 | 3548 | R_precuneus_1 | R_precentral_gyrus_3 | 0.2019
45 | 3598 | L_precuneus_1 | L_parietal_2 | 0.0234
46 | 3835 | R_parietal_3 | R_mid_insula_2 | 0.2415
47 | 3926 | R_parietal_3 | L_mid_insula_3 | 0.2598
48 | 4021 | L_precuneus_1 | L_parietal_4 | 0.2507
49 | 4061 | L_temporal_2 | R_parietal_1 | 0.1886
50 | 4063 | L_precuneus_1 | R_parietal_1 | 0.0089
51 | 4065 | R_precuneus_1 | R_parietal_1 | 0.1549
52 | 4095 | M_post_cingulate | L_parietal_5 | 0.241
53 | 4104 | L_precuneus_1 | L_parietal_5 | 0.0656
54 | 4249 | M_post_cingulate | R_post_insula | 0.1736
55 | 4299 | L_post_cingulate_1 | R_basal_ganglia_2 | 0.3015
56 | 4311 | L_post_cingulate_2 | R_basal_ganglia_2 | 0.2509
57 | 4334 | L_post_cingulate_1 | M_post_cingulate | 0.3287
58 | 4430 | R_precuneus_1 | L_post_insula | 0.1071
59 | 4518 | L_precuneus_1 | L_post_parietal_1 | 0.1153
60 | 4602 | L_IPL_1 | L_precuneus_1 | 0.1964
61 | 4683 | L_IPL_2 | L_IPL_1 | 0.2273
62 | 4802 | L_IPL_3 | L_parietal_8 | 0.379
63 | 4812 | L_angular_gyrus_2 | L_parietal_8 | 0.0522
Figure 4

(A) Shows a bar graph representation of the relative weight of each of the 63 consensus features. (B) Shows a representation of the consensus features revealing location using BrainNet Viewer software. Each connection thickness is proportional to the feature weight.

Table 3

Linear SVM nodes and their weights.

ROI index | ROI | Weight
7 | L_vent_aPFC | 0.1559
12 | R_sup_frontal | 0.5193
14 | M_ACC_1 | 0.1247
15 | L_sup_frontal | 0.2239
16 | M_ACC_2 | 0.1236
20 | R_vFC_1 | 0.1761
21 | R_ant_insula | 0.0271
23 | L_ant_insula | 0.0647
25 | L_basal_ganglia_1 | 0.0165
26 | M_mFC | 0.0229
27 | R_frontal_1 | 0.0591
29 | R_dFC_2 | 0.1651
30 | R_dFC_3 | 0.1714
31 | L_dFC | 0.0979
32 | L_vFC_2 | 0.1169
33 | L_basal_ganglia_2 | 0.1211
34 | R_basal_ganglia_1 | 0.1185
35 | L_vFC_3 | 0.0537
36 | R_pre_SMA | 0.072
37 | R_vFC_2 | 0.0473
38 | M_SMA | 0.0232
39 | R_frontal_2 | 0.0135
42 | L_mid_insula_1 | 0.0556
43 | L_precentral_gyrus_1 | 0.1963
44 | L_parietal_1 | 0.3025
46 | L_precentral_gyrus_2 | 0.0901
47 | R_precentral_gyrus_3 | 0.1649
48 | L_parietal_2 | 0.0117
50 | L_mid_insula_2 | 0.0394
53 | R_mid_insula_2 | 0.1601
55 | L_mid_insula_3 | 0.1299
57 | L_parietal_4 | 0.1758
58 | R_parietal_1 | 0.0181
59 | L_parietal_5 | 0.0631
60 | L_precentral_gyrus_3 | 0.1559
63 | R_post_insula | 0.0868
64 | R_basal_ganglia_2 | 0.2762
65 | M_post_cingulate | 0.043
66 | R_parietal_3 | 0.2401
68 | L_post_insula | 0.0536
69 | L_parietal_7 | 0.0296
71 | L_post_parietal_1 | 0.0577
72 | L_temporal_2 | 0.0943
74 | L_precuneus_1 | 0.4059
76 | R_precuneus_1 | 0.5722
77 | L_IPL_1 | 0.2118
79 | L_post_cingulate_1 | 0.0128
80 | R_precuneus_2 | 0.0302
83 | L_parietal_8 | 0.2156
86 | L_IPL_2 | 0.1137
88 | L_IPL_3 | 0.1895
89 | R_precuneus_3 | 0.0559
91 | L_post_cingulate_2 | 0.1255
92 | R_post_cingulate | 0.023
93 | L_precuneus_2 | 0.0037
98 | L_angular_gyrus_2 | 0.0261

Omitted nodes have a weight of zero.

Figure 5

(A) Shows a bar graph representation of the relative weight or contribution of each node to the classifier. (B) Shows a representation of the weighted nodes revealing location using BrainNet Viewer software. Each node’s size is proportional to its weight.

We employed the same SVM method on gender classification as we did for age classification. A linear SVM classifier was not able to significantly discriminate between male and female subjects (55% accuracy, p-value < 0.17, binomial test; compared to 50% for random chance). Also, a multi-class linear kernel SVM classifier was applied to 65 subjects partitioned into three age groups: young, middle, and old. It was able to significantly discriminate between the three groups using a linear SVM with 28 top features retained and C = 0.1 (57% accuracy; p-value < 1 × 10−4, binomial test; compared to ∼33% for random chance).

SVR

Seeing that classification of age groups was successful, we decided to test whether age prediction of individuals is viable on a continuous scale with the use of only fcMRI data. That is, given an fMRI connectivity map, we wanted to determine the age in years of the individual on a continuous range rather than choose between two or three discrete classes. A SVR linear predictor (top features retained = 298, ε = 0.1) was applied to 65 subjects varying in age (19–85 years) and was able to predict subject age with a reasonable degree of accuracy [R2 = 0.419, p-value < 1 × 10−8 (null hypothesis of no correlation or a slope of zero)], where R2 is the squared correlation of a linear regression line applied to the (x, y) points, with x being the true age of the subject and y the predicted age (see Figure 6). A similar holdout set method was employed for the SVR predictor as was for the SVM classifiers (see Figures 7 and 8).
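The continuous prediction setup can be sketched end to end with scikit-learn's SVR standing in for the Spider implementation; the features here are synthetic, with an assumed linear age signal planted in them:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(5)
n = 40
age = rng.uniform(19, 85, n)                     # "true" ages
signal = rng.standard_normal(100)                # assumed age-related pattern
X = np.outer((age - age.mean()) / age.std(), signal) \
    + rng.standard_normal((n, 100))              # features = signal + noise

# Leave-one-out predictions of age from connectivity-like features
pred = np.empty(n)
for train, test in LeaveOneOut().split(X):
    m = SVR(kernel="linear", C=1.0, epsilon=0.1).fit(X[train], age[train])
    pred[test] = m.predict(X[test])

slope = np.polyfit(age, pred, 1)[0]              # slope of predicted vs. true age
r2 = np.corrcoef(age, pred)[0, 1] ** 2           # squared correlation, as in the text
```

Plotting `pred` against `age` with the fitted line reproduces the kind of predicted-versus-actual plot shown in Figure 6.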
Figure 6

(A) Shows a least squares regression line on the predicted and actual age points. (B) Shows the residuals for the least squares regression fit.

Figure 7

Slope as a function of ε and the number of top features retained. The slope peaks at 298 features retained and ε = 0.1.

Figure 8

R2 as a function of ε and the number of top features retained. R2 peaks at around 298 features retained and ε = 0.1, in the same neighborhood as the peak slope.

The SVR method had 185 features (out of the 298) present in every fold. These consensus features’ weights and the node weights were computed in the same way as for the SVM classifier (see Figures 9 and 10; Tables 4 and 5).
Figure 9

(A) Shows a bar graph representation of the relative weight or contribution of each of the 185 consensus features to the linear kernel SVR predictor. (B) Shows a representation of the 185 consensus features revealing location. Each connection thickness is proportional to the feature weight.

Figure 10

(A) Shows a bar graph representation of the relative weight or contribution of each node to the linear kernel SVR predictor, with ε fixed at 0.1. (B) Shows a representation of the 100 weighted nodes revealing location. Each node’s size is proportional to its weight.

Table 4

A list of the consensus features and their weights for the linear SVR age predictor.

Feature index | SVR feature number | ROI 1 | ROI 2 (connected with) | Weight
1 | 200 | 6 R_aPFC_2 | 3 M_mPFC | 3.4199
2 | 208 | 14 M_ACC_1 | 3 M_mPFC | 4.0323
3 | 302 | 12 R_sup_frontal | 4 L_aPFC_2 | 0.1837
4 | 308 | 18 L_vPFC | 4 L_aPFC_2 | 6.7691
5 | 514 | 35 L_vFC_3 | 6 R_aPFC_2 | 1.9686
6 | 515 | 36 R_pre_SMA | 6 R_aPFC_2 | 3.9004
7 | 517 | 38 M_SMA | 6 R_aPFC_2 | 1.5754
8 | 523 | 44 L_parietal_1 | 6 R_aPFC_2 | 0.4170
9 | 632 | 60 L_precentral_gyrus_3 | 7 L_vent_aPFC | 2.8031
10 | 785 | 30 R_dFC_3 | 9 R_vlPFC | 0.5353
11 | 871 | 26 M_mFC | 10 R_ACC | 7.0751
12 | 881 | 36 R_pre_SMA | 10 R_ACC | 7.2001
13 | 910 | 65 M_post_cingulate | 10 R_ACC | 1.5674
14 | 955 | 21 R_ant_insula | 11 R_dlPFC_1 | 1.6761
15 | 957 | 23 L_ant_insula | 11 R_dlPFC_1 | 0.0260
16 | 961 | 27 R_frontal_1 | 11 R_dlPFC_1 | 4.6158
17 | 1037 | 15 L_sup_frontal | 12 R_sup_frontal | 11.408
18 | 1038 | 16 M_ACC_2 | 12 R_sup_frontal | 3.0458
19 | 1044 | 22 R_dACC | 12 R_sup_frontal | 3.5010
20 | 1047 | 25 L_basal_ganglia_1 | 12 R_sup_frontal | 6.4695
21 | 1048 | 26 M_mFC | 12 R_sup_frontal | 4.8864
22 | 1139 | 30 R_dFC_3 | 13 R_vPFC | 4.3102
23 | 1213 | 18 L_vPFC | 14 M_ACC_1 | 5.4143
24 | 1218 | 23 L_ant_insula | 14 M_ACC_1 | 3.4732
25 | 1231 | 36 R_pre_SMA | 14 M_ACC_1 | 4.6873
26 | 1233 | 38 M_SMA | 14 M_ACC_1 | 2.9394
27 | 1239 | 44 L_parietal_1 | 14 M_ACC_1 | 0.6638
28 | 1260 | 65 M_post_cingulate | 14 M_ACC_1 | 3.6222
29 | 1306 | 26 M_mFC | 15 L_sup_frontal | 1.8433
30 | 1387 | 23 L_ant_insula | 16 M_ACC_2 | 1.2731
31 | 1398 | 34 R_basal_ganglia_1 | 16 M_ACC_2 | 2.0425
32 | 1560 | 31 L_dFC | 18 L_vPFC | 0.7004
33 | 1727 | 37 R_vFC_2 | 20 R_vFC_1 | 2.9906
34 | 1730 | 40 R_precentral_gyrus_1 | 20 R_vFC_1 | 0.8001
35 | 1732 | 42 L_mid_insula_1 | 20 R_vFC_1 | 3.3381
36 | 1739 | 49 R_mid_insula_1 | 20 R_vFC_1 | 2.0195
37 | 1791 | 22 R_dACC | 21 R_ant_insula | 3.8335
38 | 1795 | 26 M_mFC | 21 R_ant_insula | 3.2795
39 | 1870 | 23 L_ant_insula | 22 R_dACC | 5.1255
40 | 1880 | 33 L_basal_ganglia_2 | 22 R_dACC | 0.8944
41 | 1881 | 34 R_basal_ganglia_1 | 22 R_dACC | 1.3206
42 | 1882 | 35 L_vFC_3 | 22 R_dACC | 2.0233
43 | 1883 | 36 R_pre_SMA | 22 R_dACC | 4.7918
44 | 1949 | 25 L_basal_ganglia_1 | 23 L_ant_insula | 2.8737
45 | 1950 | 26 M_mFC | 23 L_ant_insula | 1.0806
46 | 1954 | 30 R_dFC_3 | 23 L_ant_insula | 1.6872
47 | 1960 | 36 R_pre_SMA | 23 L_ant_insula | 6.0690
48 | 2006 | 82 R_IPL_1 | 23 L_ant_insula | 2.2798
49 | 2016 | 92 R_post_cingulate | 23 L_ant_insula | 4.7532
50 | 2110 | 35 L_vFC_3 | 25 L_basal_ganglia_1 | 3.9079
51 | 2113 | 38 M_SMA | 25 L_basal_ganglia_1 | 2.0904
52 | 2176 | 27 R_frontal_1 | 26 M_mFC | 0.2893
53 | 2182 | 33 L_basal_ganglia_2 | 26 M_mFC | 3.0670
54 | 2183 | 34 R_basal_ganglia_1 | 26 M_mFC | 0.8171
55 | 2184 | 35 L_vFC_3 | 26 M_mFC | 3.9006
56 | 2190 | 41 L_thalamus_1 | 26 M_mFC | 2.0477
57 | 2217 | 68 L_post_insula | 26 M_mFC | 12.328
58 | 2252 | 30 R_dFC_3 | 27 R_frontal_1 | 4.8758
59 | 2258 | 36 R_pre_SMA | 27 R_frontal_1 | 3.5564
60 | 2262 | 40 R_precentral_gyrus_1 | 27 R_frontal_1 | 2.4206
61 | 2267 | 45 R_precentral_gyrus_2 | 27 R_frontal_1 | 4.0491
62 | 2271 | 49 R_mid_insula_1 | 27 R_frontal_1 | 1.7481
63 | 2299 | 77 L_IPL_1 | 27 R_frontal_1 | 1.6080
64 | 2301 | 79 L_post_cingulate_1 | 27 R_frontal_1 | 9.5243
65 | 2302 | 80 R_precuneus_2 | 27 R_frontal_1 | 3.2888
66 | 2304 | 82 R_IPL_1 | 27 R_frontal_1 | 2.3834
67 | 2308 | 86 L_IPL_2 | 27 R_frontal_1 | 2.5869
68 | 2311 | 89 R_precuneus_3 | 27 R_frontal_1 | 1.6131
69 | 2313 | 91 L_post_cingulate_2 | 27 R_frontal_1 | 4.0015
70 | 2314 | 92 R_post_cingulate | 27 R_frontal_1 | 3.1354
71 | 2315 | 93 L_precuneus_2 | 27 R_frontal_1 | 1.4905
72 | 2317 | 95 L_post_cingulate_3 | 27 R_frontal_1 | 1.2658
73 | 2340 | 46 L_precentral_gyrus_2 | 28 L_vFC_1 | 2.9832
74 | 2343 | 49 R_mid_insula_1 | 28 L_vFC_1 | 0.9104
75 | 2344 | 50 L_mid_insula_2 | 28 L_vFC_1 | 1.5884
76 | 2374 | 80 R_precuneus_2 | 28 L_vFC_1 | 2.1640
77 | 2399 | 34 R_basal_ganglia_1 | 29 R_dFC_2 | 1.8317
78 | 2439 | 74 L_precuneus_1 | 29 R_dFC_2 | 5.2682
79 | 2441 | 76 R_precuneus_1 | 29 R_dFC_2 | 3.9293
80 | 2472 | 37 R_vFC_2 | 30 R_dFC_3 | 1.2744
81 | 2509 | 74 L_precuneus_1 | 30 R_dFC_3 | 3.1864
82 | 2511 | 76 R_precuneus_1 | 30 R_dFC_3 | 10.158
83 | 2540 | 36 R_pre_SMA | 31 L_dFC | 0.0347
84 | 2542 | 38 M_SMA | 31 L_dFC | 4.5939
85 | 2551 | 47 R_precentral_gyrus_3 | 31 L_dFC | 4.2930
86 | 2561 | 57 L_parietal_4 | 31 L_dFC | 2.0379
87 | 2562 | 58 R_parietal_1 | 31 L_dFC | 0.1465
88 | 2570 | 66 R_parietal_3 | 31 L_dFC | 0.6321
89 | 2573 | 69 L_parietal_7 | 31 L_dFC | 3.8353
90 | 2606 | 34 R_basal_ganglia_1 | 32 L_vFC_2 | 1.6084
91 | 2617 | 45 R_precentral_gyrus_2 | 32 L_vFC_2 | 5.5595
92 | 2618 | 46 L_precentral_gyrus_2 | 32 L_vFC_2 | 6.3606
93 | 2620 | 48 L_parietal_2 | 32 L_vFC_2 | 5.3524
94 | 2806 | 36 R_pre_SMA | 35 L_vFC_3 | 5.9420
95 | 2829 | 59 L_parietal_5 | 35 L_vFC_3 | 2.5231
96 | 2876 | 42 L_mid_insula_1 | 36 R_pre_SMA | 2.0429
97 | 2884 | 50 L_mid_insula_2 | 36 R_pre_SMA | 0.8906
98 | 2887 | 53 R_mid_insula_2 | 36 R_pre_SMA | 2.8127
99 | 2889 | 55 L_mid_insula_3 | 36 R_pre_SMA | 0.6196
100 | 2908 | 74 L_precuneus_1 | 36 R_pre_SMA | 3.0879
101 | 2935 | 38 M_SMA | 37 R_vFC_2 | 1.3538
102 | 2977 | 80 R_precuneus_2 | 37 R_vFC_2 | 2.6368
103 | 2989 | 92 R_post_cingulate | 37 R_vFC_2 | 1.0198
104 | 2992 | 95 L_post_cingulate_3 | 37 R_vFC_2 | 1.2270
105 | 3001 | 42 L_mid_insula_1 | 38 M_SMA | 0.0421
106 | 3009 | 50 L_mid_insula_2 | 38 M_SMA | 2.5175
107 | 3012 | 53 R_mid_insula_2 | 38 M_SMA | 1.0882
108 | 3013 | 54 R_temporal_1 | 38 M_SMA | 1.5278
109 | 3022 | 63 R_post_insula | 38 M_SMA | 0.5381
110 | 3033 | 74 L_precuneus_1 | 38 M_SMA | 1.6784
111 | 3094 | 74 L_precuneus_1 | 39 R_frontal_2 | 1.1764
112 | 3160 | 80 R_precuneus_2 | 40 R_precentral_gyrus_1 | 1.1140
113 | 3172 | 92 R_post_cingulate | 40 R_precentral_gyrus_1 | 0.7809
114 | 3181 | 42 L_mid_insula_1 | 41 L_thalamus_1 | 7.0436
115 | 3255 | 58 R_parietal_1 | 42 L_mid_insula_1 | 3.3164
116 | 3256 | 59 L_parietal_5 | 42 L_mid_insula_1 | 3.9536
117 | 3268 | 71 L_post_parietal_1 | 42 L_mid_insula_1 | 1.1748
118 | 3274 | 77 L_IPL_1 | 42 L_mid_insula_1 | 0.6810
119 | 3276 | 79 L_post_cingulate_1 | 42 L_mid_insula_1 | 5.5190
120 | 3277 | 80 R_precuneus_2 | 42 L_mid_insula_1 | 2.2915
121 | 3289 | 92 R_post_cingulate | 42 L_mid_insula_1 | 2.2737
122 | 3298 | 44 L_parietal_1 | 43 L_precentral_gyrus_1 | 4.8264
123 | 3320 | 66 R_parietal_3 | 43 L_precentral_gyrus_1 | 2.3284
124 | 3328 | 74 L_precuneus_1 | 43 L_precentral_gyrus_1 | 4.3556
125 | 3330 | 76 R_precuneus_1 | 43 L_precentral_gyrus_1 | 3.8301
126 | 3357 | 47 R_precentral_gyrus_3 | 44 L_parietal_1 | 1.4419
127 | 3363 | 53 R_mid_insula_2 | 44 L_parietal_1 | 3.9555
128 | 3366 | 56 L_parietal_3 | 44 L_parietal_1 | 3.5900
129 | 3367 | 57 L_parietal_4 | 44 L_parietal_1 | 8.5639
130 | 3368 | 58 R_parietal_1 | 44 L_parietal_1 | 0.9669
131 | 3372 | 62 R_parietal_2 | 44 L_parietal_1 | 6.2692
132 | 3376 | 66 R_parietal_3 | 44 L_parietal_1 | 2.2942
133 | 3377 | 67 L_parietal_6 | 44 L_parietal_1 | 4.2380
134 | 3379 | 69 L_parietal_7 | 44 L_parietal_1 | 3.3560
135 | 3386 | 76 R_precuneus_1 | 44 L_parietal_1 | 2.3440
136 | 3521 | 49 R_mid_insula_1 | 47 R_precentral_gyrus_3 | 1.7159
137 | 3537 | 65 M_post_cingulate | 47 R_precentral_gyrus_3 | 1.2226
138 | 3542 | 70 R_temporal_2 | 47 R_precentral_gyrus_3 | 2.9690
139 | 3546 | 74 L_precuneus_1 | 47 R_precentral_gyrus_3 | 2.8436
140 | 3548 | 76 R_precuneus_1 | 47 R_precentral_gyrus_3 | 1.9436
141 | 3553 | 81 R_temporal_3 | 47 R_precentral_gyrus_3 | 0.1955
142 | 3598 | 74 L_precuneus_1 | 48 L_parietal_2 | 2.8843
143 | 3600 | 76 R_precuneus_1 | 48 L_parietal_2 | 5.3379
144 | 3633 | 58 R_parietal_1 | 49 R_mid_insula_1 | 3.0962
145 | 3634 | 59 L_parietal_5 | 49 R_mid_insula_1 | 1.2744
146 | 3683 | 58 R_parietal_1 | 50 L_mid_insula_2 | 0.4134
147 | 3684 | 59 L_parietal_5 | 50 L_mid_insula_2 | 6.6085
148 | 3690 | 65 M_post_cingulate | 50 L_mid_insula_2 | 0.1353
149 | 3705 | 80 R_precuneus_2 | 50 L_mid_insula_2 | 4.0167
150 | 3835 | 66 R_parietal_3 | 53 R_mid_insula_2 | 7.7145
151 | 3926 | 66 R_parietal_3 | 55 L_mid_insula_3 | 0.6183
152 | 3973 | 69 L_parietal_7 | 56 L_parietal_3 | 0.6484
153 | 4021 | 74 L_precuneus_1 | 57 L_parietal_4 | 5.2334
154 | 4061 | 72 L_temporal_2 | 58 R_parietal_1 | 2.2855
155 | 4062 | 73 L_temporal_3 | 58 R_parietal_1 | 1.2126
156 | 4063 | 74 L_precuneus_1 | 58 R_parietal_1 | 3.3874
157 | 4065 | 76 R_precuneus_1 | 58 R_parietal_1 | 0.0311
158 | 4095 | 65 M_post_cingulate | 59 L_parietal_5 | 1.0257
159 | 4104 | 74 L_precuneus_1 | 59 L_parietal_5 | 6.1957
160 | 4249 | 65 M_post_cingulate | 63 R_post_insula | 4.0141
161 | 4253 | 69 L_parietal_7 | 63 R_post_insula | 1.0645
162 | 4255 | 71 L_post_parietal_1 | 63 R_post_insula | 0.5269
163 | 4264 | 80 R_precuneus_2 | 63 R_post_insula | 3.5363
164 | 4286 | 66 R_parietal_3 | 64 R_basal_ganglia_2 | 2.2027
165 | 4299 | 79 L_post_cingulate_1 | 64 R_basal_ganglia_2 | 1.9985
166 | 4311 | 91 L_post_cingulate_2 | 64 R_basal_ganglia_2 | 2.8954
167 | 4334 | 79 L_post_cingulate_1 | 65 M_post_cingulate | 11.855
168 | 4335 | 80 R_precuneus_2 | 65 M_post_cingulate | 0.2271
169 | 4400 | 78 R_parietal_4 | 67 L_parietal_6 | 1.0522
170 | 4430 | 76 R_precuneus_1 | 68 L_post_insula | 1.6719
171 | 4441 | 87 L_angular_gyrus_1 | 68 L_post_insula | 1.6202
172 | 4516 | 72 L_temporal_2 | 71 L_post_parietal_1 | 2.2346
173 | 4518 | 74 L_precuneus_1 | 71 L_post_parietal_1 | 1.3602
174 | 4521 | 77 L_IPL_1 | 71 L_post_parietal_1 | 0.1952
175 | 4530 | 86 L_IPL_2 | 71 L_post_parietal_1 | 2.9293
176 | 4552 | 80 R_precuneus_2 | 72 L_temporal_2 | 1.2155
177 | 4602 | 77 L_IPL_1 | 74 L_precuneus_1 | 3.9390
178 | 4609 | 84 L_post_parietal_2 | 74 L_precuneus_1 | 3.9741
179 | 4651 | 77 L_IPL_1 | 76 R_precuneus_1 | 0.8857
180 | 4683 | 86 L_IPL_2 | 77 L_IPL_1 | 0.2140
181 | 4686 | 89 R_precuneus_3 | 77 L_IPL_1 | 3.7542
182 | 4759 | 99 R_precuneus_4 | 80 R_precuneus_2 | 1.0681
183 | 4802 | 88 L_IPL_3 | 83 L_parietal_8 | 8.7927
184 | 4812 | 98 L_angular_gyrus_2 | 83 L_parietal_8 | 2.3099
185 | 4814 | 100 L_IPS_2 | 83 L_parietal_8 | 1.5265
Table 5

Nodes and their weights for the linear kernel SVR predictor.

ROI index | ROI | Weight
3 | M_mPFC | 0.3062
4 | L_aPFC_2 | 3.4764
6 | R_aPFC_2 | 1.7403
7 | L_vent_aPFC | 1.4016
9 | R_vlPFC | 0.2676
10 | R_ACC | 0.8462
11 | R_dlPFC_1 | 3.1330
12 | R_sup_frontal | 14.747
13 | R_vPFC | 2.1551
14 | M_ACC_1 | 1.7177
15 | L_sup_frontal | 6.6258
16 | M_ACC_2 | 3.1807
18 | L_vPFC | 5.7415
20 | R_vFC_1 | 2.5547
21 | R_ant_insula | 4.3946
22 | R_dACC | 0.0952
23 | L_ant_insula | 10.848
25 | L_basal_ganglia_1 | 3.7628
26 | M_mFC | 2.6519
27 | R_frontal_1 | 4.1057
28 | L_vFC_1 | 1.3242
29 | R_dFC_2 | 3.6829
30 | R_dFC_3 | 0.6411
31 | L_dFC | 3.3619
32 | L_vFC_2 | 1.4715
33 | L_basal_ganglia_2 | 1.0863
34 | R_basal_ganglia_1 | 0.9506
35 | L_vFC_3 | 1.7332
36 | R_pre_SMA | 0.6284
37 | R_vFC_2 | 1.6506
38 | M_SMA | 2.2000
39 | R_frontal_2 | 0.5882
40 | R_precentral_gyrus_1 | 0.1372
41 | L_thalamus_1 | 2.4979
42 | L_mid_insula_1 | 2.1402
43 | L_precentral_gyrus_1 | 0.9863
44 | L_parietal_1 | 5.0775
45 | R_precentral_gyrus_2 | 4.8043
46 | L_precentral_gyrus_2 | 4.6719
47 | R_precentral_gyrus_3 | 1.8094
48 | L_parietal_2 | 3.9030
49 | R_mid_insula_1 | 2.0883
50 | L_mid_insula_2 | 6.3615
53 | R_mid_insula_2 | 0.0710
54 | R_temporal_1 | 0.7639
55 | L_mid_insula_3 | 0.6189
56 | L_parietal_3 | 2.1192
57 | L_parietal_4 | 5.8797
58 | R_parietal_1 | 2.3834
59 | L_parietal_5 | 9.7648
60 | L_precentral_gyrus_3 | 1.4016
62 | R_parietal_2 | 3.1346
63 | R_post_insula | 0.8258
64 | R_basal_ganglia_2 | 1.3456
65 | M_post_cingulate | 6.4323
66 | R_parietal_3 | 5.6008
67 | L_parietal_6 | 1.5929
68 | L_post_insula | 6.1384
69 | L_parietal_7 | 3.8037
70 | R_temporal_2 | 1.4845
71 | L_post_parietal_1 | 1.9759
72 | L_temporal_2 | 2.8678
73 | L_temporal_3 | 0.6063
74 | L_precuneus_1 | 5.8246
76 | R_precuneus_1 | 15.035
77 | L_IPL_1 | 4.7624
78 | R_parietal_4 | 0.5261
79 | L_post_cingulate_1 | 2.9254
80 | R_precuneus_2 | 2.2972
81 | R_temporal_3 | 0.0978
82 | R_IPL_1 | 0.0518
83 | L_parietal_8 | 4.7881
84 | L_post_parietal_2 | 1.9870
86 | L_IPL_2 | 2.6511
87 | L_angular_gyrus_1 | 0.8101
88 | L_IPL_3 | 4.3964
89 | R_precuneus_3 | 2.6837
91 | L_post_cingulate_2 | 0.5530
92 | R_post_cingulate | 0.4474
93 | L_precuneus_2 | 0.7453
95 | L_post_cingulate_3 | 1.2464
98 | L_angular_gyrus_2 | 1.1549
99 | R_precuneus_4 | 0.5340
100 | L_IPS_2 | 0.7632

Omitted nodes have a weight of zero.

To check for agreement with previous studies (see Dosenbach et al., 2010), an SVR predictor using an RBF kernel was applied to our same 65-subject data set. The RBF SVR predictor (top features retained = 15, ε = 0.1) was able to predict age comparably to, but worse than, the linear SVR predictor [RBF SVR: R2 = 0.188, p-value < 1 × 10−3 (null hypothesis of no correlation or zero slope)]. The node weights were computed in the same way as for the linear SVR case (see Figure 11), and the highest-weight nodes are listed in Table 6.
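A hedged sketch of this linear-versus-RBF comparison, again with scikit-learn's SVR on synthetic features (the 298-feature count and ε = 0.1 come from the text; the data, fold count, and default kernel parameters are illustrative assumptions).

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
n_subjects, n_features = 65, 298
ages = rng.uniform(19, 85, n_subjects)
X = rng.standard_normal((n_subjects, n_features))
X[:, :40] += 0.05 * ages[:, None]   # age-related signal in a subset of features

# Cross-validated predictions for each kernel, scored as in the paper:
# R^2 of the regression of predicted age on true age.
r2 = {}
for name, model in [("linear", SVR(kernel="linear", epsilon=0.1)),
                    ("rbf", SVR(kernel="rbf", epsilon=0.1))]:
    pred = cross_val_predict(model, X, ages, cv=5)
    r2[name] = np.corrcoef(ages, pred)[0, 1] ** 2
print(r2)
```

With a linear age signal in the features, the linear kernel tends to score higher, consistent with the linear SVR outperforming the RBF SVR reported above.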
Figure 11

Radial basis function kernel SVR node weights. Since the RBF SVR method used 15 top features total, only seven nodes were present as shown in Table 6.

Table 6

Nodes for the RBF SVR predictor.

ROI index | ROI
23 | L_ant_insula
26 | M_mFC
44 | L_parietal_1
66 | R_parietal_3
69 | L_parietal_7
74 | L_precuneus_1
77 | L_IPL_1
However, we use the linear SVR predictor for feature and node significance output, since weights extracted from the linear SVR have a direct proportionality between absolute weight and significance in variable prediction. The same cannot be said of the RBF SVR weights, which are not as readily interpreted.

Discussion

In the present study, we examined the ability of an SVM to classify individuals as either young or old, and to predict their age, solely from rs-fMRI data. Our aim was to improve the discriminatory ability and accuracy of the multivariate support vector machine method through parameter tuning and feature selection, and also to output interpretable discriminating features. Support vector machine classification (using temporal correlations between ROIs as input features) of individuals as either children or adults was found to be 91% accurate in a study by Dosenbach et al. (2010), and our 84% accurate age classifier is in agreement with those results. This shows that an SVM classifier can be successfully applied to rs-fMRI functional connectivity data with appropriate feature selection and parameter tuning. Our linear SVM classifier’s performance was comparable to that of the RBF SVM, and only slightly more accurate. One advantage of the linear SVM classifier over the RBF classifier, used by Dosenbach et al. (2010) for feature interpretation, is that the weights extracted from the linear classifier have a direct relationship between absolute weight and contribution to the classifier. The RBF classifier weights are more difficult to interpret. Although age classification was highly significant (p-value < 1 × 10−7), gender classification (p-value < 0.17) was not. This could be due to a lack of significant differences between resting male and female functional connectivity. A study by Weissman-Fogel et al. (2010) found no significant differences between genders in resting functional connectivity of the brain areas within the executive control, salience, and default mode networks. The performance of our classifier is consistent with this result and suggests that functional connectivity may not differ significantly between genders.
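The classification pipeline the paper describes (a t-test feature filter feeding a linear SVM) can be sketched as follows; scikit-learn's SVC and scipy's t-test stand in for the authors' implementation, the data are synthetic, and for brevity the filter is fit once on all subjects rather than refit inside each fold as a leakage-free analysis would require.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_per_group, n_features, k_top = 30, 4950, 100
X_young = rng.standard_normal((n_per_group, n_features))
X_old = rng.standard_normal((n_per_group, n_features))
X_old[:, :60] += 0.8               # group difference on a subset of connections
X = np.vstack([X_young, X_old])
y = np.repeat([0, 1], n_per_group)  # 0 = young, 1 = old

# Two-sample t-test filter: keep the k features with the largest |t|.
# (To avoid optimistic bias, refit this inside each cross-validation fold.)
t_stat, _ = ttest_ind(X[y == 0], X[y == 1], axis=0)
top = np.argsort(-np.abs(t_stat))[:k_top]

acc = cross_val_score(SVC(kernel="linear"), X[:, top], y, cv=5).mean()
print(f"mean CV accuracy: {acc:.2f}")
```

When the groups genuinely differ on a subset of connections, filtered linear SVM accuracy lands well above chance, as with the 84% young-versus-old result reported here.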
This also provides confirmation that the SVM classification is specific to aging and not to other characteristics of this group of individuals, such as gender. We found that the SVM method predicted subject age on a continuous scale with relatively good performance. A perfect predictor has a linear regression fit of y = x; that is, for a given age, x, the SVR prediction, y, matches that age exactly, implying a fit with R2 = 1. The closer the slope of the regression line and the R2 value approached one, the better the performance of the predictor was considered to be. The R2 value measures the proportion of variability in the response variable (predicted age) that is accounted for by the independent variable (true age), so an R2 of 0.419 (linear SVR) reveals that a substantial portion of the variability in predicted age is accounted for by subject age. From the linear regression plot (Figure 6) it appears that the ages of younger subjects are overestimated and those of older subjects are underestimated, while subjects around age 40–50 are estimated accurately. For this regression fit, y (the predicted age) ranges from around 30 (when x = 20) to around 80 (when x = 90), so the predicted age range is smaller than the actual age range; this may be due to similar connectivity maps across ages within a small range (ages 25–30, for example). This difficulty in accurately distinguishing subjects within a small age range could suggest non-significant age-related inter-subject differences in functional connectivity over small adult age ranges. The SVM method allows for detection of the most influential features and nodes driving the classifier or predictor. We utilized this approach to find the “connectivity hubs,” or nodes with the most significant features influencing age classification.
Tables 3 and 5 list the 10 most influential nodes for the linear age SVM classifier and for the linear SVR predictor, respectively. Four of the 10 most influential nodes are present in both methods: R_precuneus_1, R_sup_frontal, L_precuneus_1, and L_sup_frontal (see Figures 12 and 13). There is a similar degree of agreement between the RBF SVR nodes and the linear SVR nodes: L_precuneus_1, L_parietal_1, R_parietal_3, and L_IPL_1 appear in both methods. This agreement between classifier and predictor methods suggests that the connectivity of these nodes provides discriminatory information about age differences somewhat independently of the choice of method.
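The projection of connection (feature) weights onto nodes is not spelled out in this excerpt; one plausible reading, sketched on a toy graph below, is that each node accumulates the absolute weights of the retained connections touching it, so "hub" nodes are those involved in many heavily weighted connections.

```python
import numpy as np

# Toy graph: 5 nodes, 4 retained connections (ROI pairs) with SVM/SVR weights.
edges = [(0, 1), (0, 2), (1, 3), (2, 4)]
edge_weight = np.array([3.0, -1.5, 2.0, 0.5])

n_nodes = 5
node_weight = np.zeros(n_nodes)
for (i, j), w in zip(edges, edge_weight):
    # Each connection contributes its absolute weight to both endpoints,
    # e.g. node 0 touches the first two edges: |3.0| + |-1.5| = 4.5.
    node_weight[i] += abs(w)
    node_weight[j] += abs(w)

print(node_weight)
```

Under this rule a node's weight grows with both the number and the strength of its retained connections, matching the intuition behind the "connectivity hubs" discussed above.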
Figure 12

A comparison of the 10 top consensus features for SVM and SVR. Each connection thickness is proportional to the feature weight. Overlap of features indicates an agreement for both age classification and prediction techniques.

Figure 13

A comparison of the 10 top nodes for SVM and SVR. Each node size is proportional to the feature weight. Overlap of nodes indicates an agreement for both age classification and prediction techniques.

Of note is the difference in the distributions of node weights for the linear SVM and linear SVR methods (Figures 5 and 10). The SVM result has only a few high-valued nodes and many quite small-valued ones, indicating a more abrupt distribution, whereas the SVR node weights are distributed more uniformly, with high-, middle-, and low-valued nodes all occurring frequently. This could be attributed to the difference in the number of top features retained by the two methods: since features were projected onto their respective nodes, and the SVM retained 100 features while the SVR retained 298, the distribution of the SVR node values appears more uniform. The improvement in accuracy due to reducing the dimension of the feature space reveals, in general, that classification performance is related to the number and “quality” of the features used. Our work, using the t-test feature filter method for SVM and the correlation feature filter method for SVR, together with the method for parameter selection, shows that SVM classifiers and SVR predictors can achieve high degrees of performance. The growing number of imaging-based binary classification studies of clinical populations (autism, schizophrenia, depression, and attention-deficit hyperactivity disorder) suggests that this is a promising approach for distinguishing disease states from healthy brains on the basis of measurable differences in spontaneous activity (Shen et al., 2010; Zhang and Raichle, 2010).
In addition, several recent studies have demonstrated that the rs-fMRI measurements are reproducible and reliable in young and old populations (Shehzad et al., 2009; Thomason et al., 2011; Song et al., 2012) so a brief resting MRI scan could provide valuable information to aid in screening, diagnosis, and prognosis of patients (Saur et al., 2010). Our own work supports the results that rs-fMRI data contain enough information to make multivariate classifications and predictions of subjects. As the amount of available rs-fMRI data increases, multivariate pattern analysis methods will be able to extract more meaningful information which can be used in complement with human clinical diagnoses to improve overall efficacy.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
References (28 in total)

1.  Cognitive and default-mode resting state networks: do male and female brains "rest" differently?

Authors:  Irit Weissman-Fogel; Massieh Moayedi; Keri S Taylor; Geoff Pope; Karen D Davis
Journal:  Hum Brain Mapp       Date:  2010-11       Impact factor: 5.038

2.  Early functional magnetic resonance imaging activations predict language outcome after stroke.

Authors:  Dorothee Saur; Olaf Ronneberger; Dorothee Kümmerer; Irina Mader; Cornelius Weiller; Stefan Klöppel
Journal:  Brain       Date:  2010-03-18       Impact factor: 13.501

3.  Beyond mind-reading: multi-voxel pattern analysis of fMRI data.

Authors:  Kenneth A Norman; Sean M Polyn; Greg J Detre; James V Haxby
Journal:  Trends Cogn Sci       Date:  2006-08-08       Impact factor: 20.229

4.  Real-time fMRI using brain-state classification.

Authors:  Stephen M LaConte; Scott J Peltier; Xiaoping P Hu
Journal:  Hum Brain Mapp       Date:  2007-10       Impact factor: 5.038

Review 5.  Advances in functional and structural MR image analysis and implementation as FSL.

Authors:  Stephen M Smith; Mark Jenkinson; Mark W Woolrich; Christian F Beckmann; Timothy E J Behrens; Heidi Johansen-Berg; Peter R Bannister; Marilena De Luca; Ivana Drobnjak; David E Flitney; Rami K Niazy; James Saunders; John Vickers; Yongyue Zhang; Nicola De Stefano; J Michael Brady; Paul M Matthews
Journal:  Neuroimage       Date:  2004       Impact factor: 6.556

6.  Reproducibility distinguishes conscious from nonconscious neural representations.

Authors:  Aaron Schurger; Francisco Pereira; Anne Treisman; Jonathan D Cohen
Journal:  Science       Date:  2009-11-12       Impact factor: 47.728

7.  Toward discovery science of human brain function.

Authors:  Bharat B Biswal; Maarten Mennes; Xi-Nian Zuo; Suril Gohel; Clare Kelly; Steve M Smith; Christian F Beckmann; Jonathan S Adelstein; Randy L Buckner; Stan Colcombe; Anne-Marie Dogonowski; Monique Ernst; Damien Fair; Michelle Hampson; Matthew J Hoptman; James S Hyde; Vesa J Kiviniemi; Rolf Kötter; Shi-Jiang Li; Ching-Po Lin; Mark J Lowe; Clare Mackay; David J Madden; Kristoffer H Madsen; Daniel S Margulies; Helen S Mayberg; Katie McMahon; Christopher S Monk; Stewart H Mostofsky; Bonnie J Nagel; James J Pekar; Scott J Peltier; Steven E Petersen; Valentin Riedl; Serge A R B Rombouts; Bart Rypma; Bradley L Schlaggar; Sein Schmidt; Rachael D Seidler; Greg J Siegle; Christian Sorg; Gao-Jun Teng; Juha Veijola; Arno Villringer; Martin Walter; Lihong Wang; Xu-Chu Weng; Susan Whitfield-Gabrieli; Peter Williamson; Christian Windischberger; Yu-Feng Zang; Hong-Ying Zhang; F Xavier Castellanos; Michael P Milham
Journal:  Proc Natl Acad Sci U S A       Date:  2010-02-22       Impact factor: 11.205

8.  Intrinsic functional connectivity as a tool for human connectomics: theory, properties, and optimization.

Authors:  Koene R A Van Dijk; Trey Hedden; Archana Venkataraman; Karleyton C Evans; Sara W Lazar; Randy L Buckner
Journal:  J Neurophysiol       Date:  2009-11-04       Impact factor: 2.714

9.  Reliable intrinsic connectivity networks: test-retest evaluation using ICA and dual regression approach.

Authors:  Xi-Nian Zuo; Clare Kelly; Jonathan S Adelstein; Donald F Klein; F Xavier Castellanos; Michael P Milham
Journal:  Neuroimage       Date:  2009-11-05       Impact factor: 6.556

10.  Age-related differences in test-retest reliability in resting-state brain functional connectivity.

Authors:  Jie Song; Alok S Desphande; Timothy B Meier; Dana L Tudorascu; Svyatoslav Vergun; Veena A Nair; Bharat B Biswal; Mary E Meyerand; Rasmus M Birn; Pierre Bellec; Vivek Prabhakaran
Journal:  PLoS One       Date:  2012-12-05       Impact factor: 3.240

