Literature DB >> 21119772

A Hybrid Machine Learning Method for Fusing fMRI and Genetic Data: Combining both Improves Classification of Schizophrenia.

Honghui Yang, Jingyu Liu, Jing Sui, Godfrey Pearlson, Vince D Calhoun.

Abstract

We demonstrate a hybrid machine learning method to classify schizophrenia patients and healthy controls, using functional magnetic resonance imaging (fMRI) and single nucleotide polymorphism (SNP) data. The method consists of four stages: (1) SNPs with the most discriminating information between the healthy controls and schizophrenia patients are selected to construct a support vector machine ensemble (SNP-SVME). (2) Voxels in the fMRI map contributing to classification are selected to build another SVME (Voxel-SVME). (3) Components of fMRI activation obtained with independent component analysis (ICA) are used to construct a single SVM classifier (ICA-SVMC). (4) The above three models are combined into a single module using a majority voting approach to make a final decision (Combined SNP-fMRI). The method was evaluated by a fully validated leave-one-out method using 40 subjects (20 patients and 20 controls). The classification accuracy was: 0.74 for SNP-SVME, 0.82 for Voxel-SVME, 0.83 for ICA-SVMC, and 0.87 for Combined SNP-fMRI. Experimental results show that better classification accuracy was achieved by combining genetic and fMRI data than using either alone, indicating that genetics and brain function represent different, but partially complementary, aspects of schizophrenia etiopathology. This study suggests an effective way to reassess biological classification of individuals with schizophrenia, which is also potentially useful for identifying diagnostically important markers for the disorder.

Keywords:  feature selection; functional magnetic resonance imaging; gene; machine learning; schizophrenia; single nucleotide polymorphisms; support vector machine ensemble

Year:  2010        PMID: 21119772      PMCID: PMC2990459          DOI: 10.3389/fnhum.2010.00192

Source DB:  PubMed          Journal:  Front Hum Neurosci        ISSN: 1662-5161            Impact factor:   3.169


Introduction

Schizophrenia is a severe, chronic brain disease that disrupts normal thinking, speech, and behavior. Schizophrenia diagnosis currently relies on clinical examination and the illness course, with many subcategories reflecting different aspects of this complex and likely biologically heterogeneous mental disease. Despite the diagnostic reliability achieved by quantifiable examination of overt psychiatric symptoms, researchers have also used biological indices in attempts to classify schizophrenia patients (Murray et al., 1992; Malaspina et al., 1998; Sponheim et al., 2001, 2003). Recently, there have been increasing efforts to utilize brain functional magnetic resonance imaging (fMRI) and examine genetic variation to study potential schizophrenia biomarkers, in order to better understand the pathology of schizophrenia. While most such studies focus on identifying associations between genetics and brain function in schizophrenia, we look at this problem from a different perspective, using biological and genetic information to help classify the disorder. We attempt to improve classification accuracy and provide preliminary data, suggesting that by combining biological and genetic information, we can best reflect the underlying pathophysiology, which ultimately may aid in the diagnosis of schizophrenia and its subcategories. We also predict that by achieving better classification, intrinsic connections between genetic variation and biological function can also be identified.

In the last few years there has been a growing interest in the use of machine learning algorithms for analyzing fMRI data. Machine learning algorithms can be used to train classifiers to decode stimuli, behaviors and other variables of interest from fMRI data (Haynes and Rees, 2006; O'Toole et al., 2007; Pereira et al., 2009).
Demirci applied a projection pursuit technique to components obtained via independent component analysis (ICA) of fMRI activation maps, to classify individuals as being either schizophrenia patients or healthy controls (Demirci et al., 2008). Shinkareva et al. (2006) presented a unified feature selection and classification procedure to classify subjects into groups based on four-dimensional spatio-temporal data. Zhang et al. (2005) applied the adaptive boosting algorithm (AdaBoost) (Freund and Schapire, 1997) to classify subjects into groups (drug-addicted subjects and healthy non-drug-using controls) based on the observed 3D brain images. Ford et al. (2003) used a Fisher linear discriminant analysis on the fMRI brain activation maps to extract spatial characteristics and to classify healthy controls versus patients with schizophrenia, Alzheimer's disease, and mild traumatic brain injury. To date, limited work has been done on the use of genotypic information to help classify patients from controls, although Struyf et al. (2008) demonstrated that SVMs can distinguish bipolar disorder and schizophrenia patients from normal controls with high accuracy by combining gene expression data with demographic and clinical data. Many researchers now agree that schizophrenia may develop as a result of interplay between genetic predisposition (for example, inheriting certain susceptibility genes) and environmental exposure. Genetic factors play an important role in schizophrenia: persons who have immediate relatives with a history of schizophrenia have a significantly increased risk of developing the disorder compared with the general population. However, even monozygotic twins have only about 42% concordance for the disease (Lee et al., 2005). Environmental factors may well lead to subtle brain alterations that increase the risk of schizophrenia.
Thus combining fMRI data (which captures brain function presumably reflecting both genetic and environmental influences) with genetic information is potentially a useful way to help classify schizophrenia (Hariri and Weinberger, 2003; Pearlson and Folley, 2008; Calhoun et al., 2009; Liu et al., 2009; Potkin et al., 2009). In this paper we present a supervised machine learning method to classify schizophrenia and control individuals that incorporates fMRI and SNP data. The method to fuse information from both modalities comprises four stages. At the first stage, a support vector machine based classifier ensemble (SVME) is constructed using signature SNPs selected from a large SNP pool (SNP-SVME). At the second stage, a SVME is trained with a subset of voxels (Voxel-SVME). At the third stage, fMRI activation components obtained with ICA are used to construct a single SVM classifier (ICA-SVMC). Finally, at the fourth stage, the results obtained from the above three stages are combined into a single module using majority voting (Combined SNP-fMRI). We will first explain the data collection and preparation procedures, and describe the proposed method in detail. Then, we present the experimental results, followed by discussion and conclusion.

Data and Experiments

Subjects

We investigated fMRI and SNP data from 40 subjects: 20 schizophrenia patients (age 40.2 ± 9.8 years, three females) and 20 healthy controls (age 42.5 ± 15.5 years, eight females). All participants provided written, informed, IRB-approved consent at Hartford Hospital. Patients met criteria for DSM-IV-TR schizophrenia based on the Structured Clinical Interview for DSM-IV (SCID; First et al., 1995) and review of the case file by a clinician. Healthy subjects were screened to ensure they were free of DSM-IV Axis I or Axis II psychopathology, assessed using the SCID (Spitzer et al., 1996), and were also interviewed to determine that there was no history of psychosis in any first-degree relative. All selected subjects were Caucasian/non-Hispanic. Of the 20 chronic SZ patients selected, 16 had available, contemporaneous Positive and Negative Syndrome Scale (PANSS) scores (Kay et al., 1987). For those 16 SZ patients, the PANSS total score was 67.6 ± 30.0 (mean ± SD), the positive symptom score 15.4 ± 4.1, and the negative symptom score 14.5 ± 6.7. Seventeen SZ patients had available medication information. These patients were taking 26 types of first- and second-generation antipsychotics in variable doses, with most patients taking more than one such drug. The most commonly prescribed medicines included olanzapine, risperidone, quetiapine, haloperidol, divalproex, escitalopram, and aripiprazole.

SNP data collection and preprocessing

A saliva sample was obtained for each subject and DNA extracted. Genotyping was performed using the Illumina BeadArray™ platform and the GoldenGate™ assay (Oliphant et al., 2002; Fan et al., 2003). The PG Array of Genomas Inc. was used (the detailed composition has been published as a patent application, Ruano, 2006). The SNP array consists of 384 SNPs from 222 genes derived from six physiological systems: neurobiology, metabolism, cell proliferation, cardiovascular, inflammation, and cholesterol biochemistry. Over all systems, the following pathways were represented: insulin resistance, glucose metabolism, energy homeostasis, adiposity, apolipoproteins and receptors, fatty acid and cholesterol metabolism, lipases, receptors, cell signaling and transcriptional regulation, growth factors, drug metabolism, blood pressure, vascular signaling, endothelial dysfunction, coagulation and fibrinolysis, vascular inflammation, cytokines, and behavior (satiety). Genotyping analysis software, GenCall, was used to cluster the intensities from the genotyping microarray into three clusters: AA, AB, and BB, without assuming dominant or recessive inheritance. On the basis of the GenCall score, a number between 0 and 1 indicating how close to the center of the cluster a sample lies, we chose a threshold to select only reliable genotype results. SNPs with a GenCall score of 0.25 or higher were selected, resulting in 367 SNPs. Genotypes are inherently categorical and can be represented as discrete numbers, e.g., 1 for one homozygous genotype, 0 for the heterozygous genotype, and −1 for the other homozygous genotype. In our study, each subject has a feature vector with 367 discrete numbers.
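The genotype coding described above can be sketched as follows; this is a minimal illustration, with hypothetical genotype calls rather than actual study data:

```python
import numpy as np

# Map each genotype call to a discrete number, as described in the text:
# homozygous AA -> 1, heterozygous AB -> 0, homozygous BB -> -1.
GENOTYPE_CODE = {"AA": 1, "AB": 0, "BB": -1}

def encode_genotypes(calls):
    """Convert a list of genotype calls for one subject into a feature vector."""
    return np.array([GENOTYPE_CODE[c] for c in calls])

subject_calls = ["AA", "AB", "BB", "AB"]  # illustrative calls, not real data
print(encode_genotypes(subject_calls))    # [ 1  0 -1  0]
```

In the study each subject's vector has 367 such entries, one per retained SNP.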

fMRI data collection and preprocessing

FMRI data were collected during performance of an auditory oddball task (Kiehl and Liddle, 2003), which consists of detecting an infrequent sound within a series of frequent sounds. The same auditory stimuli were used and found to be effective in eliciting fMRI BOLD patterns differentiating healthy controls from schizophrenia subjects (Kiehl et al., 2005). Auditory stimuli were presented to each participant by a computer stimulus presentation system via earphones. Subjects were presented with three types of sounds: target (1000 Hz with probability p = 0.1), novel (non-repeating random digital noises, p = 0.1), and standard (500 Hz, p = 0.8). Subjects were expected to respond and press a button with their right index finger every time they heard a target stimulus and not to respond to standard or novel sounds. Scans were acquired at the Olin Neuropsychiatry Research Center at the Institute of Living on a Siemens Allegra 3 T dedicated head MRI scanner equipped with 40 mT/m gradients and a standard quadrature head coil. The functional scans were acquired using gradient-echo echo-planar-imaging with the following parameters (repeat time = 1.50 s, echo time = 27 ms, field of view = 24 cm, acquisition matrix = 64 × 64, flip angle = 70°, voxel size = 3.75 × 3.75 × 4 mm3, slice thickness = 4 mm, gap = 1 mm, 29 slices, ascending acquisition). Six “dummy” scans were performed at the beginning to allow for longitudinal equilibrium, after which the paradigm was automatically triggered to start by the scanner. Data were preprocessed using the software package SPM2 (http://www.fil.ion.ucl.ac.uk/spm/). Images were realigned using INRIalign – a motion correction algorithm unbiased by local signal changes (Freire and Mangin, 2001). Data were spatially normalized into the standard Montreal Neurological Institute space (Friston et al., 1995), resliced to 3 × 3 × 3 mm3, and spatially smoothed with a 10 × 10 × 10 mm3 Gaussian kernel. 
Data for each participant were analyzed by multiple regression incorporating regressors for the novel, target, and standard stimuli and their temporal derivatives, plus an intercept term. The target-related contrast images were used in this study. Finally, we used a mask based upon a one-sample t-test against zero activation to select meaningful voxels. This resulted in 7,060 voxels in each fMRI image.

Methods

The hybrid machine learning method

A two-class supervised learning problem can be written as a training set {(xi, yi)}, i = 1, …, m, with m samples (subjects in this study). Each sample xi has d features and a class label yi ∈ {+1, −1}. From a set of training samples, the machine learning algorithm establishes a classifier, which represents a hypothesis h. Given unseen samples, the classifier predicts the corresponding y value. An ensemble method constructs a set of classifiers {h1, …, hT}, chooses a set of weights {α1, …, αT}, and builds a weighted average classifier H(x) = α1h1(x) + … + αThT(x). The classification decision of the combined classifier H is +1 if H(x) > 0 and −1 otherwise. The flowchart of the proposed supervised machine learning method can be seen in Figure 1. There are four stages to fuse fMRI and genetic data and to classify schizophrenia. The first stage is to select signature SNP loci and construct a SVME for SNPs, termed SNP-SVME. Two steps are involved: (1) select a subset of candidate SNPs from the whole SNP pool by the forward sequential feature selection (FSFS) method (Liu, 2005); (2) construct a SVME by a feature selective AdaBoost (FSA) method (Howe, 2003). The second stage is to construct a SVME for fMRI images with the optimal subset of voxels to reach the best classification performance. We first average neighboring voxels to reduce computational complexity, and then construct a SVME using the FSA on the averaged voxels. The third stage is to obtain a SVM classifier using independent components extracted from fMRI activation maps by ICA. At the fourth and final stage, the three classification models obtained from the above stages are combined into one model using majority voting.
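The weighted ensemble decision rule can be illustrated with a short sketch; the votes and weights below are hypothetical, purely for illustration:

```python
import numpy as np

# Sketch of the weighted ensemble decision rule:
# H(x) = a1*h1(x) + ... + aT*hT(x); predict +1 if H(x) > 0, else -1.
def ensemble_predict(classifier_outputs, alphas):
    """classifier_outputs: array (T,) of +/-1 votes from individual classifiers;
    alphas: array (T,) of classifier weights."""
    score = np.dot(alphas, classifier_outputs)
    return 1 if score > 0 else -1

votes = np.array([1, -1, 1])          # hypothetical votes of three classifiers
weights = np.array([0.5, 0.2, 0.9])   # hypothetical classifier weights
print(ensemble_predict(votes, weights))  # 1  (0.5 - 0.2 + 0.9 = 1.2 > 0)
```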
Figure 1

Flow chart of method.


SNPs subset selection and SVME

Classifying schizophrenia based on genetic data is complicated by the small-sample-size classification problem (Fukunaga, 1990). Genetic data have high dimensionality compared to the generally small number of available subject samples. The dimensionality N is often considered large if it is in the range of hundreds; genetic data, however, can have hundreds of thousands of dimensions (genes or loci). Some genes are related to the schizophrenia classification task, but many are presumably irrelevant. Learning algorithms can be confused by irrelevant or redundant features and construct poor classifiers (Jain and Chandrasekaran, 1982). To address the small-sample-size classification problem, we propose a two-step algorithm to select informative genes from a high-dimensional space and generate a classifier ensemble through SVM. The first step is a filter that removes most irrelevant features and selects a candidate SNP subset from the whole SNP pool using FSFS. The second step incorporates SNP selection into an AdaBoost SVM ensemble algorithm to construct a SVME from the signature SNP subset.

Forward sequential feature selection (FSFS) method

The FSFS algorithm is a good choice for irrelevancy removal. It applies independent evaluation criteria without involving any learning algorithm, so it does not inherit the bias of a learning algorithm and is also computationally efficient (Liu, 2005). The FSFS algorithm starts the search from an empty SNP set. As the search proceeds, SNPs are added to the SNP subset one at a time. On each round, the best SNP for classification among the unselected ones is chosen based on a distance measure. Distance measures are also known as separability, divergence, or discrimination measures; we try to find the SNP that separates the patients and healthy controls as widely as possible. The distance measure used in this paper is the Mahalanobis distance. The SNP subset grows until it reaches the full set of original SNPs, and a rank list is computed according to how early each SNP was added. A certain number of SNPs is then selected to construct a candidate SNP subset for the second step. Both prior knowledge of the SNP dataset and experience are used to decide how many SNPs are selected. In order to keep more informative SNPs, we select approximately the top 40% of SNPs in the rank list to construct the candidate SNP subset. The candidate SNP subset is much smaller than the original SNP set, but still contains irrelevant SNPs, which need to be removed.
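The FSFS ranking loop can be sketched as follows, assuming a Mahalanobis-style distance between the class means over the current feature subset; the function names and the small ridge term added for numerical stability are illustrative, not the paper's exact implementation:

```python
import numpy as np

def mahalanobis_separation(X, y, features):
    """Distance between class means over the chosen feature subset."""
    Xa, Xb = X[y == 1][:, features], X[y == -1][:, features]
    diff = Xa.mean(axis=0) - Xb.mean(axis=0)
    # Pooled covariance, with a small ridge so the matrix is invertible.
    cov = np.cov(np.vstack([Xa, Xb]).T) + 1e-6 * np.eye(len(features))
    cov = np.atleast_2d(cov)
    return float(diff @ np.linalg.inv(cov) @ diff)

def fsfs_rank(X, y):
    """Forward sequential selection: greedily add the feature that most
    increases class separation; earlier position = more discriminative."""
    remaining, selected = list(range(X.shape[1])), []
    while remaining:
        best = max(remaining,
                   key=lambda f: mahalanobis_separation(X, y, selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected
```

With the ranked list in hand, the top ~40% of SNPs would form the candidate subset passed to the second step.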

Feature selective adaboost (FSA) method

The second step is constructing a SVME by the FSA method. AdaBoost, proposed by Freund and Schapire (1997), can be used in conjunction with other iterative learning algorithms to improve their performance. Here, we use AdaBoost with SVM to build a SVM classifier ensemble. In addition, we modify AdaBoost to add a feature selection function, giving the feature selective AdaBoost (FSA) method. The FSA algorithm aims at training classifiers for the best performance while simultaneously selecting the features with the best discriminating power. The FSA algorithm is given below.

Given a training set {(xi, yi)}, i = 1, …, m, from two classes, with d features per sample. Initialize the weights for the m samples: D1(i) = 1/m. For t = 1, …, T, do:
- Feature selection: train a classifier for each feature on the weighted samples; rank each feature based on the weighted training error rate of its classifier; select the l features with the lowest error rates and form a new training dataset.
- Train a classifier ht on the weighted samples with the selected features; let its weighted training error be εt.
- Compute αt = (1/2) ln((1 − εt)/εt), which weighs ht by its classification performance.
- Update and normalize the weight distribution: Dt+1(i) = Dt(i) exp(−αt yi ht(xi))/Zt, where Zt is a normalization factor.
Output the final classifier ensemble: H(x) = sign(α1h1(x) + … + αThT(x)).

As shown above, the FSA algorithm runs for T iterations, and the final classifier H is a weighted combination of T individual classifiers. Initially, all training-sample weights are set equally. On each round, the weights of misclassified samples are increased so that the algorithm forces classifiers to focus on those samples in the training set. Furthermore, within each iteration cycle, the FSA algorithm ranks all features by training error rate and selects the l features with the lowest rates. The number l is decided based on the leave-one-out (LOO) SVM performance with the weighted training samples used in that iteration.
Thus the FSA algorithm selects the feature subset containing the most discriminating information on each round and trains a classifier on the weighted training samples with the selected features. Accuracy and diversity of individual classifiers critically influence the classification performance of ensemble methods. The FSA increases the diversity among the classifiers by allowing a flexible feature space, which in turn enhances the overall performance of the SVME. Valentini and Dietterich (2002) analyzed the bias-variance decomposition of the error in SVM, and showed that this decomposition offers a rationale for developing ensemble methods using SVMs as base learners. In this paper, the kernel function of the SVM is the radial basis function (RBF) kernel. SVM is a statistical learning method based on the structural risk minimization principle that has been shown to be very efficient in pattern recognition applications (Vapnik, 2000). However, the classification performance of SVM heavily depends on a proper setting of parameters. The RBF-SVM has two parameters: the RBF kernel parameter σ, and C, which controls the trade-off between training error and the margin. On each round of the FSA algorithm, we compute the optimal parameters of the RBF-SVM by evaluating its accuracy and diversity on the weighted training dataset through the bias-variance decomposition of the error in SVM (Valentini and Dietterich, 2002).
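The FSA loop can be sketched as follows, using scikit-learn's RBF-SVM as the base learner; the hyperparameters here are illustrative defaults, not the bias-variance-tuned values described in the paper, and the feature ranking uses a simple per-feature SVM rather than the paper's exact procedure:

```python
import numpy as np
from sklearn.svm import SVC

def fsa_train(X, y, T=10, l=5, C=1.0, gamma='scale'):
    """Feature-selective AdaBoost sketch: y must be +/-1 labels."""
    m = X.shape[0]
    D = np.full(m, 1.0 / m)                    # sample weights, uniform at start
    models, alphas, feats = [], [], []
    for _ in range(T):
        # Rank features by weighted error of a per-feature SVM.
        errs = []
        for j in range(X.shape[1]):
            h = SVC(kernel='rbf', C=C, gamma=gamma).fit(
                X[:, [j]], y, sample_weight=D)
            errs.append(np.sum(D[h.predict(X[:, [j]]) != y]))
        chosen = np.argsort(errs)[:l]          # l lowest-error features
        h = SVC(kernel='rbf', C=C, gamma=gamma).fit(
            X[:, chosen], y, sample_weight=D)
        pred = h.predict(X[:, chosen])
        eps = max(np.sum(D[pred != y]), 1e-10)
        if eps >= 0.5:
            break                              # weak learner no better than chance
        alpha = 0.5 * np.log((1 - eps) / eps)  # AdaBoost classifier weight
        D *= np.exp(-alpha * y * pred)         # up-weight misclassified samples
        D /= D.sum()
        models.append(h); alphas.append(alpha); feats.append(chosen)
    return models, alphas, feats

def fsa_predict(models, alphas, feats, X):
    score = sum(a * h.predict(X[:, f]) for h, a, f in zip(models, alphas, feats))
    return np.where(score > 0, 1, -1)
```

In the paper, l is chosen per round via LOO performance on the weighted samples; here it is fixed for simplicity.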

Voxel selection and SVME

The goal of this stage is to select informative voxels to aid in diagnostic classification. As mentioned above, each fMRI image has 7,060 non-zero meaningful voxels. This number is very large compared to the number of samples, so it is necessary to decrease the dimensionality while retaining the group-discrimination information. First, we merge 3 × 3 × 3 blocks of non-zero neighboring voxels by averaging; the resultant images have 261 large voxels. Second, we apply the FSA algorithm described in Section “SNPs subset selection and SVME” to further select informative voxels and construct a SVME. At each FSA iteration, voxels ranked with high discriminative values are used to train a SVM classifier. The final decision is a weighted ensemble of the individual classifiers.
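The 3 × 3 × 3 averaging step can be sketched as a block reduction over a toy volume; the volume shape here is illustrative, not the study's image dimensions:

```python
import numpy as np

def block_average(vol, b=3):
    """Average non-overlapping b x b x b blocks into 'large voxels'."""
    x, y, z = (s - s % b for s in vol.shape)   # trim to a multiple of b
    v = vol[:x, :y, :z]
    return v.reshape(x // b, b, y // b, b, z // b, b).mean(axis=(1, 3, 5))

vol = np.arange(6 * 6 * 6, dtype=float).reshape(6, 6, 6)  # toy 6x6x6 volume
print(block_average(vol).shape)  # (2, 2, 2)
```

In the study, only non-zero (masked) neighboring voxels are merged, yielding 261 large voxels per image.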

ICA component extraction

In prior research, ICA has been applied to the analysis of fMRI data to discover hidden components representing brain activation and to characterize their spatial locations in healthy control subjects and patients with schizophrenia (Calhoun et al., 2004; Sui et al., 2009). The basic ICA model defines a generative model for the observed data, X = AS, with the goal of identifying hidden independent components from linearly mixed observations. In the above equation, X is an observation matrix composed of measurements from MRI images; S contains the independent components, which consist of unknown sources such as brain activation networks; and A is a linear mixing matrix relating the sources to the mixed measurements. An unmixing matrix W gives the estimated component matrix Ŝ = WX; if W equals the inverse of A, then Ŝ is equivalent to S, the source matrix. There are many ICA algorithms based on different independence criteria. The ICA algorithm we use here is the infomax algorithm, which attempts to find the unmixing matrix W by maximizing an entropy function (Bell and Sejnowski, 1995; Cardoso, 1997). We use the modified Akaike information criterion (AIC) method proposed by Li et al. to estimate the correct number of components (Akaike, 1974; Li et al., 2007). At this stage, five components are extracted from the fMRI image of each sample. These five components are used as classification features to train a linear SVM classifier.
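As an illustration of the generative model X = AS, the sketch below recovers sources from synthetic linear mixtures; it uses scikit-learn's FastICA purely as a stand-in algorithm, not the infomax algorithm used in the paper, and all data are synthetic:

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
S = rng.laplace(size=(1000, 2))          # two non-Gaussian sources
A = np.array([[1.0, 0.5], [0.3, 1.0]])   # mixing matrix
X = S @ A.T                              # observed linear mixtures

ica = FastICA(n_components=2, random_state=0)
S_hat = ica.fit_transform(X)             # estimated sources (up to scale/order)
print(S_hat.shape)  # (1000, 2)
```

ICA recovers the sources only up to permutation and scaling, which is why component matching is needed when interpreting extracted fMRI networks.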

Classification combination

The fourth and final stage combines the results from the above three stages and makes a final decision via majority voting.
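A minimal sketch of this voting rule, with each of the three models (SNP-SVME, Voxel-SVME, ICA-SVMC) casting a +1 (patient) or −1 (control) vote per subject:

```python
def majority_vote(votes):
    """votes: +/-1 decisions from an odd number of classifiers."""
    return 1 if sum(votes) > 0 else -1

print(majority_vote([1, -1, 1]))   # 1  -> classified as patient
print(majority_vote([-1, -1, 1]))  # -1 -> classified as control
```

With three voters, the decision flips only when at least two models disagree with the third, which is why the SNP model matters most when the two fMRI models split.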

Classification Experiments and Results

We next applied the hybrid machine learning method to the problem of separating patients from controls. All statistical results of our experiments are based on the LOO cross-validation method: thirty-nine subjects were used for training, while one subject was used for testing, and a total of 40 training-testing sets were implemented. The performance measures used in this paper are specificity, sensitivity, and accuracy. The test output of our method can be positive (patient) or negative (control). A true positive (TP) is a patient correctly diagnosed as a patient; a false positive (FP) is a healthy person wrongly identified as sick; a true negative (TN) is a healthy person correctly identified as healthy; and a false negative (FN) is a sick person wrongly identified as healthy. The specificity, sensitivity, and accuracy are defined as below:

sensitivity = TP/(TP + FN), specificity = TN/(TN + FP), accuracy = (TP + TN)/(TP + TN + FP + FN)
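The evaluation loop can be sketched as follows, with an illustrative RBF-SVM classifier standing in for the full hybrid model and synthetic data in place of the study's subjects:

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.svm import SVC

def loo_metrics(X, y):
    """Leave-one-out evaluation; y holds +1 (patient) / -1 (control) labels."""
    tp = tn = fp = fn = 0
    for train, test in LeaveOneOut().split(X):
        clf = SVC(kernel='rbf').fit(X[train], y[train])
        pred, true = clf.predict(X[test])[0], y[test][0]
        if true == 1:
            tp += pred == 1; fn += pred == -1
        else:
            tn += pred == -1; fp += pred == 1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    return sensitivity, specificity, accuracy
```

With 40 subjects, each pass trains on 39 and tests on the held-out one, exactly as in the text.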

Results

Taking into account the stochastic nature of the algorithm, we ran it 20 times for each training-testing pair; the average classification results for each stage and for the final combined classification model are shown in Table 1. For comparison, we also trained the SVMC with all 367 SNPs and all 7,060 voxels. The LOO accuracies were 0.40 (367 SNPs) and 0.675 (7,060 voxels). These results suggest that SNP and voxel selection is necessary.
Table 1

Performance of the classification model.

Measures of performance | Sensitivity | Specificity | Accuracy

CLASSIFICATION MODEL
SVMC with all 367 SNPs | 0.4000 | 0.4000 | 0.4000
SVMC with all 7060 Voxels | 0.6500 | 0.7000 | 0.6750

THE PROPOSED CLASSIFICATION MODEL
SNP-SVME | 0.7175 | 0.7600 | 0.7388
Voxel-SVME | 0.7875 | 0.8450 | 0.8163
ICA-SVMC | 0.8000 | 0.8500 | 0.8250
Combination | 0.8575 | 0.8875 | 0.8725
At the first stage, we examined the SNP database using the two-step method described in Section “SNPs subset selection and SVME”. After the most irrelevant SNPs were filtered out of the whole SNP dataset using FSFS, 150 SNPs were selected. These 150 SNPs were then used as input features for the FSA algorithm. The number of FSA iterations was set to 20 empirically, since performance saturated after 20 classifiers. At each iteration the algorithm selected a certain number of SNPs from the 150 and trained a SVM classifier. The number of SNPs selected in each iteration was estimated by the LOO algorithm on the weighted training dataset. SNPs carrying more discriminative information are expected to be selected more frequently, so the importance of each SNP to the classification task can be quantified as the ratio of the number of times the SNP was selected to the number of FSA iterations. Figure 2 shows the importance of individual SNPs, and the most important 15 SNPs are listed in Table 2.
Figure 2

Importance of individual SNP.

Table 2

Top 15 SNPs.

SNP | Gene
rs6136 | SELP: selectin P (granule membrane protein 140 kDa, antigen CD62)
rs737865 | COMT: catechol-O-methyltransferase
rs7072137 | GAD2: glutamic acid decarboxylase 2
rs1176744 | HTR3B: 5-hydroxytryptamine (serotonin) receptor 3B
rs821616 | DISC1: disrupted in schizophrenia 1
rs11188092 | CYP2C19: cytochrome P450, family 2, subfamily C, polypeptide 19
rs3771892 | TNFAIP6: tumor necrosis factor alpha-induced protein 6
rs1128503 | ABCB1: ATP-binding cassette, sub-family B (MDR/TAP), member 1
rs2066470 | MTHFR: 5,10-methylenetetrahydrofolate reductase (NADPH)
rs2020933 | SLC6A4: solute carrier family 6 (neurotransmitter transporter, serotonin), member 4
rs2192752 | IL1R1: interleukin 1 receptor, type I
rs2298122 | DRD1IP: dopamine receptor D1 interacting protein
rs2276307 | HTR3B: 5-hydroxytryptamine (serotonin) receptor 3B
rs3758947 | ABCC8: ATP-binding cassette, sub-family C, member 8
rs11212515 | ACAT1: acetyl-coenzyme A acetyltransferase 1
At the second stage, the FSA selected a certain number of voxels containing the most discriminating information from the 261 large voxels and trained a SVM at each iteration. The number of voxels to be selected at each iteration was estimated by the LOO algorithm with the weighted training dataset used in that iteration. The importance of each voxel to the classification task can be quantified as the ratio of the number of times the voxel was selected to the number of FSA iterations. Figure 3 shows the location of the selected voxels in the brain and their importance. The volume of each region represents the importance of its voxels: yellow indicates highly important regions, followed by orange and red. Table 3 lists the anatomical brain regions of the selected voxels.
Figure 3

The location of selected voxels.

Table 3

Detailed regions of the selected voxels.

Area | Brodmann areas | L/R volume (cc) | L/R importance: value (x,y,z)
Postcentral gyrus | 3, 5, 2, 7 | 0.7/0.6 | 1 (−24,−29,71) / 1 (18,−34,71)
Precentral gyrus | 4, 6, 44, 9 | 0.9/1.0 | 1 (−12,−29,71) / 1 (18,−29,71)
Paracentral lobule | 6, 4, 5, 31 | 0.2/0.2 | 1 (0,−34,71) / 1 (6,−29,71)
Cingulate gyrus | 31, 32, 24 | 1.8/1.6 | 0.341 (−6,−42,44) / 0.341 (12,−42,44)
Superior parietal lobule | 7 | 0.3/0.2 | 0.341 (−30,−47,44) / 0.341 (30,−53,44)
Inferior parietal lobule | 40 | 0.6/0.4 | 0.341 (−30,−42,44) / 0.341 (48,−42,44)
Precuneus | 7, 31 | 1.0/0.9 | 0.341 (0,−42,44) / 0.341 (30,−42,44)
Medial frontal gyrus | 11, 32, 10, 6, 9 | 1.1/1.1 | 0.268 (−6,49,−15) / 0.268 (6,49,−15)
Superior temporal gyrus | 38, 22, *, 41, 42 | 1.0/1.0 | 0.268 (−48,20,−14) / 0.268 (48,20,−14)
Middle frontal gyrus | 11, 10, 47, 6, 46, 9 | 3.9/2.3 | 0.268 (−42,49,−15) / 0.268 (24,37,−14)
Inferior frontal gyrus | 47, 11, *, 46, 45, 9, 13 | 3.2/1.9 | 0.268 (−36,14,−8) / 0.268 (42,20,−14)
Superior frontal gyrus | 11, 10, 9 | 1.0/0.6 | 0.268 (−18,60,−16) / 0.268 (18,60,−16)
Anterior cingulate | 32, 24 | 0.2/0.3 | 0.036 (−6,39,23) / 0.121 (12,43,−10)
Middle temporal gyrus | 22, 19, 21, 20 | 0.8/0.6 | 0.024 (−53,−32,4) / 0.024 (65,−32,4)
Caudate | (none) | 0.1/0.2 | 0.024 (−36,−32,4) / 0.024 (36,−32,4)
Transverse temporal gyrus | 42, 41 | 0.2/0.3 | 0.012 (−59,−14,9) / 0.012 (59,−14,9)
Posterior cingulate | 31, *, 30 | 0.2/0.1 | 0.012 (−30,−60,17) / 0.012 (30,−66,17)
Insula | 13 | 0.1/0.2 | 0.012 (−30,27,18) / 0.073 (30,26,1)
Cuneus | 19, 18, 30, 17, 23 | 0.8/0.7 | 0.012 (−18,−89,24) / 0.012 (12,−95,24)

Discussion

Classification results

In the method described, three kinds of classification information were extracted from genetic and fMRI data in order to classify schizophrenia and healthy control subjects using three models: SNP-SVME, Voxel-SVME, and ICA-SVMC. Among them, Voxel-SVME and ICA-SVMC both extract information from fMRI data, while only SNP-SVME extracts classification information from SNP data. FMRI data carry more weight than SNP data in the proposed method for two reasons: (a) fMRI images contain more discriminating information than SNP data, since brain function is logically closer to the expression of mental illness symptoms; as expected, the fMRI classification models performed better than the SNP classification model in our experiments; (b) although Voxel-SVME and ICA-SVMC are both constructed from fMRI data, the two models present discriminating information from different perspectives. This does not imply that the SNP classification model is unnecessary. In fact, when the two fMRI models disagree with each other, the decision of SNP-SVME is especially important because this model makes its decision based on an entirely different data source. A necessary and sufficient condition for an ensemble of classifiers to be more accurate than any of its individual members is that the classifiers are accurate (better than random guessing) and their errors are at least somewhat uncorrelated (Dietterich, 2000). The proposed method meets this requirement by constructing individual classification models from different perspectives and different data sources. The data shown in Table 1 demonstrate that the proposed four-stage method achieves better classification accuracy by combining genetic data and fMRI data than by using either alone.
The results indicate that even though abnormal brain function and genetic variation are both related to a clinical diagnosis of schizophrenia, they reflect different aspects of schizophrenia etiopathology and cannot replace each other in terms of reflecting the disease. Overall, 87% accuracy was achieved, suggesting that combining genetic and brain functional information best represents the majority of symptomatic information used currently to arrive at a clinical diagnosis. Several factors may account for the misclassified cases, including the small size of the SNP array, the rather simple and non-specific brain activation patterns elicited by the auditory stimulation paradigm, and the sub-optimal sensitivity of the model. One observation worthy of note is that two patients were consistently misclassified by all classification models. This may be due to inaccuracy in all models, or to a discrepancy between biological/genetic and clinical interview-based diagnosis. The schizophrenia patients used in this study were chronic and all taking antipsychotic medication. Aware of the potential effects of such medication on brain function, we assume that these drugs had a common, general effect on all 20 patients, since most patients were using multiple medicines (1–5 types per patient, with a total of 26 types prescribed) at various dosages. This study is a proof-of-concept with a small sample size and limited numbers of SNPs, intended to demonstrate the power of combining genetics with brain function in a classification framework. For a full validation, the proposed method will need to be applied to a much larger group of subjects, covering multiple SZ subcategories (including schizo-affective disorder) and multiple clinical treatment groups (including treatment-naïve subjects), and using more SNPs.
Future work will also focus on early differentiation of sub-groups (which in the case of prodromal subjects can take weeks to months), prediction of treatment response, and early diagnosis at the time of first presentation.

Gene selection

As shown in Table 2, the top 15 SNPs ranked by the proposed method were located in 14 genes. Among them are well-known putative schizophrenia susceptibility genes, such as COMT (Handoko et al., 2005; Shifman et al., 2006; Nicodemus et al., 2007), DISC1 (St Clair et al., 1990; Hodgkinson et al., 2004; Cannon et al., 2005; Callicott et al., 2005; Nicodemus et al., 2007; Saetre et al., 2008; Liu et al., 2009), MTHFR (Godfrey et al., 1990; Zintzaras, 2006; Gilbody et al., 2007; Jönsson et al., 2008; Roffman et al., 2008), and HTR3B (Maziade et al., 1995; Levinson et al., 1998; Gurling et al., 2001; Frank et al., 2004; Yamada et al., 2006). Others are brain-related genes, including GAD2 (De et al., 2004; Arai et al., 2009), SLC6A4 (Shi et al., 2008; Zaboli et al., 2008), and ABCB1 and ABCC (Bozina et al., 2008), which are possible candidates for schizophrenia susceptibility.

Voxel selection

As Table 3 shows, the brain regions contributing most to the classification of schizophrenia patients and healthy controls consist of the inferior, middle, and medial frontal gyri, cingulate gyrus, superior temporal gyrus, and precuneus. Since our input fMRI data were contrast images (target stimuli vs. standard stimuli) collected in the auditory oddball task, it is reasonable that voxels in these regions were selected. The results are in accordance with previous structural and functional brain findings (Barta et al., 1990; Pearlson et al., 1996; Calhoun et al., 2004; Cavanna and Trimble, 2006; Garrity et al., 2007).

Conclusion

We propose a hybrid machine learning method for fusing fMRI and genetic data to separate individuals with schizophrenia from healthy controls. Experimental results showed that better classification accuracy is achieved by combining genetic and fMRI data than by using either alone, suggesting that genetic variation and brain function represent different, partially complementary aspects of schizophrenia etiopathology. The method effectively extracts the discriminating information needed to classify schizophrenia and is potentially useful for identifying important diagnostic markers for the disorder. Given the limited sample size and relatively small SNP array, this manuscript presents preliminary results, and we are now attempting to replicate these findings in an independent sample.

Conflict of Interest Statement

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
References (53 in total)

1. Kiehl KA, Liddle PF. Reproducibility of the hemodynamic response to auditory oddball stimuli: a six-week test-retest study. Hum Brain Mapp. 2003.

2. Fan JB, Oliphant A, Shen R, Kermani BG, Garcia F, Gunderson KL, et al. Highly parallel SNP genotyping. Cold Spring Harb Symp Quant Biol. 2003.

3. Shinkareva SV, Ombao HC, Sutton BP, Mohanty A, Miller GA. Classification of functional brain images with a spatio-temporal dissimilarity map. Neuroimage. 2006.

4. Haynes JD, Rees G. Decoding mental states from brain activity in humans. Nat Rev Neurosci. 2006.

5. Cannon TD, Hennah W, van Erp TGM, Thompson PM, Lonnqvist J, Huttunen M, et al. Association of DISC1/TRAX haplotypes with schizophrenia, reduced prefrontal gray matter, and impaired short- and long-term memory. Arch Gen Psychiatry. 2005.

6. Barta PE, Pearlson GD, Powers RE, Richards SS, Tune LE. Auditory hallucinations and smaller superior temporal gyral volume in schizophrenia. Am J Psychiatry. 1990.

7. Bell AJ, Sejnowski TJ. An information-maximization approach to blind separation and blind deconvolution. Neural Comput. 1995.

8. Shifman S, Levit A, Chen ML, Chen CH, Bronstein M, Weizman A, et al. A complete genetic association scan of the 22q11 deletion region and functional evidence reveal an association between DGCR2 and schizophrenia. Hum Genet. 2006.

9. Shi J, Gershon ES, Liu C. Genetic associations with schizophrenia: meta-analyses of 12 candidate genes. Schizophr Res. 2008.

10. Sui J, Adali T, Pearlson GD, Clark VP, Calhoun VD. A method for accurate group difference detection by constraining the mixing coefficients in an ICA framework. Hum Brain Mapp. 2009.