
Collaborative representation-based classification of microarray gene expression data.

Lizhen Shen1, Hua Jiang2, Mingfang He1, Guoqing Liu2.   

Abstract

Microarray technology enables the expression of multiple genes to be measured simultaneously over a number of time points. Multiple classifier models, such as the sparse representation (SR)-based method, have been developed to classify microarray gene expression data by allocating the gene data points to different clusters. In this paper, we propose a novel collaborative representation (CR)-based classification with regularized least squares to classify gene data. First, CR codes a testing sample as a sparse linear combination of all training samples and then classifies the testing sample by evaluating which class leads to the minimum representation error. This CR-based classification approach is remarkably less complex than traditional classification methods but yields very competitive classification results. In addition, a compressive sensing approach is adopted to project the high-dimensional gene expression dataset to a lower-dimensional space that retains nearly all of the information. This nearly lossless compression helps reduce the computational load. Experiments to detect subtypes of diseases, such as leukemia and autism spectrum disorders, are performed by analyzing the gene expression. The results show that the proposed CR-based algorithm exhibits significantly higher stability and accuracy than traditional classifiers, such as the support vector machine algorithm.


Year:  2017        PMID: 29236759      PMCID: PMC5728509          DOI: 10.1371/journal.pone.0189533

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

The development of microarray technology facilitates the collection of information containing expression values of multiple genes under different experimental conditions. This technology also promotes progress in biological and biomedical research [1-3]. As a result, huge amounts of genome-wide expression data have been acquired, and the quantity of data is continuously growing. These data provide the opportunity to better understand the tissues being studied and to explore finer molecular distinctions between health states. Thus, approaches based on machine learning, which can automatically acquire qualitatively interesting patterns from gene data, have been widely adopted [4-7]. Among these machine learning pathways, support vector machine (SVM) and K-nearest neighbor (KNN) classifiers have been studied extensively [8-11]. To facilitate more flexible and comprehensive analysis, Pse-in-One and Pse-analysis have been proposed; these are powerful bioinformatics analysis tools based on a web server and a Python package, respectively [12, 13]. These tools can generate any desired pseudo components or feature vectors for protein/peptide and DNA/RNA sequences according to users' needs. In [14], various neural network technologies for cancer classification were surveyed, and the results are beneficial for identifying the most cost-effective approaches for clinicians. Zheng et al. [15] used independent component analysis to refine a subset of genes to further improve the clustering performance of nonnegative matrix factorization. However, gene data always lie in a high-dimensional space because of the enormous number of measurements (e.g., genes and probes). This high-dimensional characteristic limits the applicability of the majority of conventional classifier models and leads to their poor performance when identifying diseases from genome-wide expression data.
Moreover, parameters must be optimized to facilitate the implementation of such algorithms, depending on the structure of the data set. Sparse representation is well known as a powerful tool in various applications and has been highly developed for signal processing and machine learning [16]. Sparse representation-based classification (SRC) was introduced to identify gene expression without requiring any parameter optimization [17]. The SRC scheme has been successfully applied in the diagnosis of microarray gene expression in cancer. In [18], SRC was applied and showed better performance than state-of-the-art methods in protein fold recognition. In the SRC scheme, a microarray gene expression y can be expressed as y = Φα by sparse representation over a dictionary Φ, where α is a sparse vector. Since the l0-norm counts the number of non-zeros in α, the sparsity of α can be measured by the l0-norm, and α can be estimated under the criterion of l0-norm minimization. However, l0-norm minimization is NP-hard and therefore very time-consuming. Consequently, l1-norm minimization is adopted as the alternative, because it is the closest convex relaxation of the l0-norm. This l1-minimization is widely used in sparse coding as follows: min ‖α‖1 s.t. ‖y − Φα‖2 ≤ ε, where ε is a small constant. Although the computational complexity of l1-norm minimization is much lower than that of l0-norm minimization, its application is still limited by high complexity in real-time scenarios. Therefore, various algorithms have been presented to accelerate l1-norm minimization. Another issue is that the dimension of the feature space may be too high for the classification algorithm; numerous algorithms become invalid or infeasible when high-dimensional feature data have to be processed [19-21].
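The l1-norm sparse coding step above can be solved with simple iterative methods. The sketch below uses iterative shrinkage-thresholding (ISTA), a standard solver for the equivalent Lasso formulation, on a synthetic dictionary; the dictionary size, regularization weight `lam`, and iteration count are illustrative assumptions, not values from the paper.

```python
import numpy as np

def soft_threshold(z, t):
    """Elementwise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(Phi, y, lam=0.1, n_iter=500):
    """Approximate min_a 0.5*||y - Phi a||_2^2 + lam*||a||_1 via ISTA."""
    # Step size 1/L, where L bounds the Lipschitz constant of the gradient.
    L = np.linalg.norm(Phi, 2) ** 2
    alpha = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        grad = Phi.T @ (Phi @ alpha - y)
        alpha = soft_threshold(alpha - grad / L, lam / L)
    return alpha

# Toy example: recover a 3-sparse code from 30 random measurements.
rng = np.random.default_rng(0)
Phi = rng.standard_normal((30, 100))
Phi /= np.linalg.norm(Phi, axis=0)          # unit-norm atoms
alpha_true = np.zeros(100)
alpha_true[[5, 40, 77]] = [1.5, -2.0, 1.0]
y = Phi @ alpha_true
alpha_hat = ista(Phi, y, lam=0.05, n_iter=2000)
support = np.flatnonzero(np.abs(alpha_hat) > 0.5)
```

In practice the fast l1 solvers mentioned later in the paper (e.g., homotopy methods) replace this plain loop, but the fixed-point structure is the same.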
A successful strategy is to extract a small amount of discriminative information from the high-dimensional space. That is, high-dimensional data can be mapped to a lower-dimensional feature space to facilitate the design of a classification algorithm. Conventional feature extraction algorithms are effective but highly complex. Compressive sensing (CS) theory is a well-known solution for sparse signals in sampling theory [22]. The main result of CS, introduced by Candès and Tao [23] and Donoho [24], is that a length-N signal x that is K-sparse in some basis can be recovered exactly, with polynomial-time algorithms, from just M = O(K log(N/K)) linear measurements of the signal. Using the CS approach, high-dimensional feature data can simply be projected to a space of much lower dimension with a random sensing matrix. The low-dimensional data retain enough information to recover the original high-dimensional features. In this paper, we employ a measurement matrix that is very sparse to efficiently compress the features of the gene expression dataset. CS thus maps the feature data to a low-dimensional representation in the compressed domain. To avoid the computationally expensive l1-norm minimization of sparse representation, this paper proposes a collaborative representation (CR)-based classification (CRC) to subtype gene expression datasets. CR collaboratively uses the whole set of training samples from all classes to represent the query sample y. CRC with regularized least squares can achieve competitive classification results with considerably less complexity. Our work augments the study of sparsity-based genome-wide data pattern classification. We used benchmark cancer, autism spectrum disorder (ASD), and brain data sets in the experiments. To measure the effectiveness of the CRC algorithm, accuracy in terms of prediction or error rate in classifying a selected gene subset is evaluated.

Materials and methods

In this study, the classification problem can be depicted as follows: microarray gene data sets, which serve as training samples, are classified for the diagnosis and prognosis of various types of diseases. To apply the sparse representation model, the coding vector of y should be sparse. Moreover, coding should be performed collaboratively over the whole dataset instead of over each subset separately. By identifying the sparsest representation of y over X, the representation is naturally discriminative.

Collaborative representation-based classification

Classification of gene expression datasets has been well studied for the robust diagnosis and prognosis of various diseases with high prediction accuracy [25]. The primary goal of this approach is to identify the important pathways or genes strongly associated with the clinical outcomes of interest. Sparse representation has been applied to classify genomic data [26, 27]. The corresponding performance was tested by distinguishing two subtypes of leukemia from microarray gene expression. With genes selected from the 7129 available, SRC achieved a classification accuracy of 97.4% under the leave-one-out (LOO) method. When tested on the same datasets and compared with existing methods, such as weighted voting, SVM, the sparse logistic regression method, and the rough sets method, SRC shows potential advantages with respect to improved classification rates with fewer informative genes. SRC also helps reduce computational complexity and memory storage when processing high-dimensional data. Detecting subtypes of diseases, such as leukemia, according to different genetic makeups is important, and SRC is favorable for personalized therapy and improved treatment. According to the theory of sparse representation, a complete dictionary of atoms, denoted by Φ, can accurately represent any signal x by linearly combining the atoms in Φ. Nevertheless, if Φ is orthogonal, representing x may require many atoms from Φ, leading to complex computation. The orthogonality of Φ is therefore relaxed: more atoms are included in Φ to allow more choices for representing x with fewer active atoms. A dictionary Φ with redundant atoms is referred to as an over-complete matrix and leads to a sparser representation of the signal x. Such sparse representation has been utilized with great success in image restoration.
A total number of K classes of targets is assumed. Let Xi denote the dataset of the i-th class, X = [X1, X2, ⋯, XK], where each column of Xi is a sample belonging to class i. A microarray gene sample y is assumed, which can be coded as y = Xα + w, where w is a vector indicating the residual, α = [α1; ⋯; αi; ⋯; αK], and αi denotes the coding vector corresponding to class i. If y belongs to the i-th class and w is a zero vector, then y = Xi αi holds. Only the coefficients in αi are significant, while most entries in αj, j ≠ i, are nearly zero. That is, the vector α is sparse, and its non-zero entries encode the identity of sample y. Table 1 shows the matrix measurements of the microarray gene expression data for samples of prostate tumors and adjacent prostate tissue without tumor. Assuming that the training samples consist of these samples, the dictionary of samples follows: a sample to be identified is treated as the measurement output, and α is to be reconstructed.
Table 1

Gene expression data.

Sample  G1   G2     G3   G4   G5   G6   ⋯   Gn     Label
X1      9    -11.4  2.7  0.6  4.3  28   ⋯   37.3   Normal
X2      -11  7      0    3    6    -6   ⋯   14     Cancer
X3      -1   -1     0    -1   3    0    ⋯   26     Normal
X4      -2   0      -1   -2   6    3    ⋯   25     Cancer
X5      -9   -19    0    0    7    69   ⋯   -21    Normal
X6      0    0      0    -2   2    0    ⋯   21     Cancer
Microarray gene expression datasets, such as the breast cancer dataset, usually suffer from high dimensionality. In traditional SRC, enough training samples for each class are required for the dictionary X to be over-complete. However, the sample size of gene expression data is very small compared with its dimensionality, leading to an under-complete matrix X. Consequently, the representation error would be unacceptable even when y belongs to class i, leading the classifier to a wrong decision. One obvious solution is to use more samples of class i to represent y, but collecting a huge number of samples is expensive. Nevertheless, the differences among genes are very small, so the diverse classes of gene expression data share similarities. Thus, the testing gene y can be coded collaboratively over the dictionary of all samples X = [X1, X2, ⋯, XK] under the l1-norm sparsity constraint. When y is collaboratively represented with all classes containing all samples, y can be classified by SRC by checking class by class [28]. The sparse solution to y = Xα can be determined by solving the following optimization problem:

min ‖α‖0  s.t.  y = Xα.

However, solving this l0-optimization is an NP-hard problem. The l0-optimization and l1-optimization have been proved to be equivalent when α is sparse enough. In general, the sparse representation problem can then be expressed as follows:

min ‖α‖1  s.t.  ‖y − Xα‖2 ≤ ε.

If we use the sparsity constraint of the l2-norm instead of that of the l1-norm, the problem becomes:

min ‖α‖2  s.t.  ‖y − Xα‖2 ≤ ε.

If X is over-complete, the coding vector α̂ is computed by an algorithm such as the OMP algorithm. Then, y is perpendicularly projected onto the space spanned by X, which can be expressed as ŷ = Xα̂. In SRC, a new sample y can be classified by evaluating the class-wise reconstruction errors

ri(y) = ‖y − Xi α̂i‖2,

where α̂i designates the coding coefficient vector associated with class i. When the number of classes is too large, a stable solution of the least squares cannot be obtained.
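For the over-complete case, the text notes that the sparse code can be computed with a greedy algorithm such as OMP. Below is a minimal orthogonal matching pursuit sketch on synthetic data; the dimensions, sparsity level, and coefficient values are illustrative assumptions.

```python
import numpy as np

def omp(X, y, k):
    """Orthogonal matching pursuit: greedily select k atoms of X to represent y."""
    residual = y.copy()
    support = []
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(X.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit least squares on the selected atoms, then update the residual.
        coef, *_ = np.linalg.lstsq(X[:, support], y, rcond=None)
        residual = y - X[:, support] @ coef
    alpha = np.zeros(X.shape[1])
    alpha[support] = coef
    return alpha

# Toy example: a 3-sparse code over a 60 x 100 random dictionary.
rng = np.random.default_rng(7)
X = rng.standard_normal((60, 100))
X /= np.linalg.norm(X, axis=0)              # unit-norm atoms
alpha_true = np.zeros(100)
alpha_true[[3, 60, 99]] = [3.0, -2.5, 2.0]
y = X @ alpha_true
alpha_hat = omp(X, y, k=3)
```

Because the least-squares refit makes the residual orthogonal to the selected atoms, OMP never re-selects an atom and terminates after exactly k greedy steps.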
To overcome this limitation, CRC is proposed to collaboratively represent the query sample using X. The corresponding l2-minimization with regularized least squares is

α̂ = arg minα { ‖y − Xα‖2² + λ‖α‖2² },

whose closed-form solution is α̂ = (XᵀX + λI)−1 Xᵀ y. Letting P = (XᵀX + λI)−1 Xᵀ, this formula shows that P is independent of y. Hence, P only needs to be computed once and can be shared when classifying different samples. This reduction in the complexity of CRC is not achieved at the expense of performance. Similar to SRC, α̂ is used to classify the type of gene expression data. The representation residual ri(y) = ‖y − Xi α̂i‖2 is the main criterion for classification, where α̂i is the coding vector associated with class i. However, the l2-norm ‖α̂i‖2 also provides some information about the class of the gene expression data and can be combined with the residual for classification. Such a combination slightly improves the classification accuracy compared with using the representation residual alone.
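A minimal numerical sketch of CRC with regularized least squares follows. The projection matrix P is computed once and reused, and classification uses the regularized per-class residual described above. The toy two-class data and the λ value are illustrative assumptions, not values from the paper.

```python
import numpy as np

def crc_classify(X_train, labels, y, lam=0.01):
    """Collaborative representation classification (sketch).

    X_train: (d, n) matrix whose columns are l2-normalized training samples;
    labels:  length-n array of class ids; y: (d,) normalized query sample.
    """
    # P = (X^T X + lam I)^{-1} X^T is independent of y, so in practice it is
    # precomputed once and shared across all queries.
    n = X_train.shape[1]
    P = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n), X_train.T)
    alpha = P @ y
    best, best_r = None, np.inf
    for c in np.unique(labels):
        mask = labels == c
        # Regularized residual: representation error using class c's atoms only.
        r = np.linalg.norm(y - X_train[:, mask] @ alpha[mask]) / np.linalg.norm(alpha[mask])
        if r < best_r:
            best, best_r = c, r
    return best

# Toy data: two well-separated Gaussian classes in 50 dimensions.
rng = np.random.default_rng(1)
mu0, mu1 = np.zeros(50), np.full(50, 2.0)
X = np.column_stack([rng.normal(mu0, 1.0, (10, 50)).T,
                     rng.normal(mu1, 1.0, (10, 50)).T])
X /= np.linalg.norm(X, axis=0)              # unit-norm training columns
labels = np.array([0] * 10 + [1] * 10)
query = rng.normal(mu1, 1.0)                # query drawn from class 1
query /= np.linalg.norm(query)
pred = crc_classify(X, labels, query)
```

Note that the expensive step, forming P, involves only an n x n solve (n = number of training samples), which for microarray data is far smaller than the gene dimension d.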

Compressive sensing-based dimensionality reduction

Gene expression datasets may have very high dimensionality, which always results in high computational load and degraded performance of classification algorithms [10, 19, 21]. Therefore, feature selection or dimensionality reduction methods should eliminate redundant and irrelevant feature data to decrease the ratio of features to samples; this process also reduces the probability of overfitting. A common method of dimensionality reduction in high-dimensional classification is to choose some data from a dataset in some order and drop the rest, but this approach inevitably loses information. Many probes in traditional microarrays are inactive during detection; that is, microarray gene expression data can be sparse, and only a few significant entries of the gene expression data matrix may be of interest. This phenomenon suggests transforming a microarray along with the CS measurement process, where each measurement is a linear combination of the entries of the microarray gene expression data vector x. Every sample is assumed to contain N gene expression values, of which at most K are significant, with K ≪ N. In random projection, a sensing matrix R with unit-length rows projects data from the high-dimensional feature space to a lower-dimensional one as v = Rx, where R has M rows and M ≪ N. Each projection v is essentially identical to a compressive measurement during CS encoding. Therefore, using the CS principle, the number of data points to be classified can be remarkably lower than the number of original microarray gene data, and with fewer data the classification algorithm can also be more efficient. We refer to this reduction of microarray dimensionality as CS dimensionality reduction. Ideally, R can offer a stable embedding that approximately retains the important information in any K-sparse signal when the length-N signal x is projected to the length-M measurement v.
Therefore, the so-called restricted isometry property (RIP) is derived to approximately maintain distances between any pair of K-sparse signals sharing the same K basis. That is, for any two K-sparse vectors x1 and x2 sharing the same basis,

(1 − δ) ‖x1 − x2‖2² ≤ ‖R(x1 − x2)‖2² ≤ (1 + δ) ‖x1 − x2‖2²

for some small constant δ ∈ (0, 1). Incoherence is achieved if the sparse signal satisfies the RIP condition. For example, random matrices with entries identically and independently drawn from a standard normal distribution or Bernoulli distributions, or the Fourier matrix, satisfy the RIP condition. However, because such matrices are dense, the memory and computational loads become unacceptable when N is large. A matrix satisfying the RIP condition can also be obtained via the Johnson-Lindenstrauss lemma. In this paper, we use a very sparse random measurement matrix with entries defined as

rij = √ρ × { +1 with probability 1/(2ρ);  0 with probability 1 − 1/ρ;  −1 with probability 1/(2ρ) }.

This type of matrix with ρ = 1 or 3 satisfies the Johnson-Lindenstrauss lemma [29]. This matrix is easy to compute and requires only a uniform random generator. With this construction, the microarray readouts are linear combinations of the input gene expression data components; thus, the readouts can be expressed in the form given by Eq 8. With a reduced number of measurements, we are able not only to detect but also to classify the target gene expression datasets. The proposed CRC with CS dimensionality reduction is summarized in Algorithm 1.

Algorithm 1: CRC algorithm
Input: the gene expression sample datasets D = [D1, D2, ⋯, DK] and the test sample y.
Output: the classification result of y.
1: Reduce the dimensionality of D and y using the sensing matrix R in Eq 10; the lower-dimensional versions of D and y are written as X and y, respectively.
2: Normalize the columns of X to unit l2-norm.
3: Code y over X as α̂ = Py, where P = (XᵀX + λI)−1 Xᵀ.
4: Compute the regularized residuals ri(y) = ‖y − Xi α̂i‖2 / ‖α̂i‖2.
5: Determine the identity of y as Identity(y) = arg mini ri(y).
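The very sparse measurement matrix and the projection step of Algorithm 1 can be sketched as follows. The entry probabilities follow the ρ-parameterized construction above (shown with ρ = 3); the 1/√M row scaling, which makes projected distances approximately match original distances, and the toy dimensions are illustrative assumptions.

```python
import numpy as np

def sparse_sensing_matrix(M, N, rho=3, seed=0):
    """Very sparse random projection matrix.

    Entries are sqrt(rho) with probability 1/(2*rho), 0 with probability
    1 - 1/rho, and -sqrt(rho) with probability 1/(2*rho). Dividing by
    sqrt(M) makes the projection approximately distance-preserving.
    """
    rng = np.random.default_rng(seed)
    vals = rng.choice([np.sqrt(rho), 0.0, -np.sqrt(rho)],
                      size=(M, N),
                      p=[1 / (2 * rho), 1 - 1 / rho, 1 / (2 * rho)])
    return vals / np.sqrt(M)

# Project two 7129-dimensional vectors (the leukemia gene count) down to
# 500 dimensions, the reduced dimensionality the paper reports as optimal.
rng = np.random.default_rng(42)
x1 = rng.standard_normal(7129)
x2 = rng.standard_normal(7129)
R = sparse_sensing_matrix(500, 7129)
d_orig = np.linalg.norm(x1 - x2)
d_proj = np.linalg.norm(R @ (x1 - x2))
distortion = d_proj / d_orig   # close to 1 by the Johnson-Lindenstrauss lemma
```

With ρ = 3, about two thirds of the matrix entries are exactly zero, so the projection costs roughly a third of a dense matrix-vector product.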

Results and discussion

To evaluate the classification performance, we performed experiments on benchmark datasets and compared the proposed model with state-of-the-art models. The simulations were conducted on a system with an i7-4790 CPU at 3.60 GHz and 8 GB RAM. The algorithms were evaluated in terms of accuracy, defined as

Accuracy = (TP + TN) / (TP + TN + FP + FN),

where TP denotes true positives (positive samples classified as positive), TN true negatives (negative samples classified as negative), FP false positives (negative samples classified as positive), and FN false negatives (positive samples classified as negative). The performance of the classifiers is quantified by LOO cross-validation (LOOCV) and 10-fold cross-validation (10-fold CV). In the LOOCV scheme, each sample in the dataset was predicted by building a model from the remaining samples, and the accuracy of each model was recorded. In 10-fold CV, the dataset was randomly divided into 10 equally sized subsets; nine of these subsets were used for model construction, while the remaining subset was used for prediction.
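The evaluation protocol above can be sketched as follows, with a simple nearest-centroid classifier standing in for CRC (an assumption for illustration; any classifier with the same interface slots in).

```python
import numpy as np

def accuracy(tp, tn, fp, fn):
    """Accuracy = (TP + TN) / (TP + TN + FP + FN)."""
    return (tp + tn) / (tp + tn + fp + fn)

def loocv(X, y, classify):
    """Leave-one-out cross-validation: each sample is predicted by a model
    built from all remaining samples; returns the fraction predicted correctly."""
    n = len(y)
    correct = 0
    for i in range(n):
        mask = np.arange(n) != i          # hold out sample i
        pred = classify(X[mask], y[mask], X[i])
        correct += int(pred == y[i])
    return correct / n

def nearest_centroid(X_train, y_train, x):
    # Stand-in classifier: assign x to the class with the closest mean vector.
    classes = np.unique(y_train)
    dists = [np.linalg.norm(x - X_train[y_train == c].mean(axis=0))
             for c in classes]
    return classes[int(np.argmin(dists))]

# Two well-separated toy classes, 20 samples each in 10 dimensions.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(0, 1, (20, 10)), rng.normal(4, 1, (20, 10))])
y = np.array([0] * 20 + [1] * 20)
acc = loocv(X, y, nearest_centroid)
```

A 10-fold variant differs only in holding out one of 10 random subsets per round instead of a single sample.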

Datasets

In this study, we use gene data for leukemia and ASD to evaluate the performance of CRC. The leukemia data set was downloaded from an open resource on the Gene Pattern website of the Broad Institute. The original training data set consisted of 38 bone marrow samples (27 ALL and 11 AML), while the testing data set consisted of 35 bone marrow samples (21 ALL and 14 AML) [14]. A total of 7129 gene expression values were measured. The brain gene data were collected by a consortium consisting of the Allen Institute for Brain Science and five collaborating universities [30-32]. Using the GENCODE consortium's release [33], the expression data were assembled and aligned in the form of RNA-sequencing reads, in units of reads per kilobase of transcript per million mapped reads (RPKM); a log2(RPKM+1) transformation was then applied to these data. The dataset had 524 samples with developmental time points ranging from 8 weeks post-conception to 40 years of age, across 26 brain structures. To our knowledge, this brain dataset is currently the most comprehensive transcriptome of the developing human brain. In the training dataset, the expression values for the temporospatial time points acted as features. We also considered the diffuse large B-cell lymphoma (DLBCL) [34] and breast cancer datasets, which have high dimensionality. The DLBCL data set consisted of 58 samples from DLBCL patients and 19 samples from follicular lymphoma patients, with 7070 genes; the gene expression profiles were obtained using Affymetrix human oligonucleotide arrays. The breast cancer dataset has 14 docetaxel-resistant and 10 docetaxel-sensitive samples; the cDNA microarrays comprised 12625 genes. All datasets were downloaded from http://www.biolab.si/supp/bi-cancer/projections/index.html (Table 2) and were preprocessed by t-test at a 0.05 confidence level.
Table 2

Gene data sets used in this study.

Data set       Samples  Genes  Classes
Leukemia       72       7129   2
DLBCL          77       7070   2
Breast cancer  24       12625  2
ASD            212      8524   2

Dimensionality reduction results

Reducing the dimensionality of a dataset lowers the computational load and helps avoid overfitting. CS is used to project the high-dimensional data to a lower-dimensional space, and a much lower dimensionality M is sufficient to achieve good results for gene data classification. The average LOOCV accuracy under incremental dimensionality reduction by CS is shown in Figs 1-4. Fig 1 shows that, for leukemia, the average accuracy does not increase once the reduced dimensionality exceeds 500; that is, 500 is an optimal choice for CS dimensionality reduction of the leukemia gene data. Similarly, Figs 2 and 3 show that the optimal reduced dimensionalities for breast cancer and DLBCL are 200 and 600, respectively. By contrast, Fig 4 shows that the performance on ASD remains stable even when the dimensionality is reduced to 3. Thus, the data in the gene dataset of this disease may be highly coherent, and these coherent data may be sparse in some other basis. Compared with the original dimensionality, CS dimensionality reduction helps speed up the classification algorithm.
Fig 1

The average accuracy of CRC for leukemia dataset with reduced dimensionality.

Fig 2

The average accuracy of CRC for breast cancer dataset with reduced dimensionality.

Fig 3

The average accuracy of CRC for DLBCL dataset with reduced dimensionality.

Fig 4

The average accuracy of CRC for ASD dataset with reduced dimensionality.

We also compared the running time of CRC and of SRC with fast l1-minimization methods, with the reduced dimensionality fixed. The average running times of CRC and SRC in one LOOCV loop are listed in Table 3.
Table 3

Running time of classification on the microarray database.

Data set       SRC    CRC    Best reduced dimension
Leukemia       0.55s  0.56s  500
DLBCL          0.67s  0.68s  600
Breast Cancer  0.4s   0.4s   200
ASD            3.5s   0.52s  3
The improved speed of CRC is obvious on the ASD database. In the leukemia, DLBCL, and breast cancer datasets, however, the matrix X is overdetermined, and SRC directly uses the pseudoinverse to solve the l1-minimization problem. When more samples are used for training, SRC has to resort to other fast l1-minimization methods, such as l1_ls and homotopy. In that case, CRC is 7 times faster than SRC.

Performance comparison

Several state-of-the-art classifier models, including SRC, KNN [35], support vector machines (SVM) [36], and random forest [37], were adopted for comparison in terms of average accuracy. The procedure was repeated 100 times, and the average performance over all tested samples was calculated to better quantify the performance of each learning model. The numerical results of the LOOCV, 10-fold CV, and 5-fold CV are summarized in Tables 4-6. For comparison, we also considered the small round blue cell tumors (SRBCT) dataset, which has multiple classes. Four different SRBCTs are present in this dataset, namely, Ewing family tumors (EWS), Burkitt lymphoma (BL), neuroblastoma (NB), and rhabdomyosarcoma (RMS). The training set contained 63 samples, the test set contained 20 samples, and the cDNA microarrays consisted of 2308 genes.
Table 4

Leave-one-out cross validation (CV) classification results.

Data set       CRC    SRC    KNN    SVM    Random forest
Leukemia       96.5%  96.4%  84.7%  91.7%  90.0%
DLBCL          97.6%  95.5%  89.6%  84.4%  89.6%
Breast Cancer  79.2%  78.1%  74.4%  78.1%  68.1%
ASD            83.8%  82.1%  68.8%  71.4%  67.7%
SRBCT          98.9%  98.9%  97.8%  84.3%  92.8%
Table 6

5-fold CV classification results.

Data set       CRC    SRC    KNN    SVM    Random forest
Leukemia       87.5%  86.2%  80.2%  83.5%  86.5%
DLBCL          83.1%  80.5%  80.0%  79.5%  78.9%
Breast Cancer  70.8%  68.1%  68.4%  68.5%  67.1%
ASD            82.6%  80.9%  70.0%  70.4%  66.8%
SRBCT          73.6%  73.6%  70.6%  72.6%  73.1%
Tables 4 and 5 show that CRC outperformed the other classifiers under all validation methods, with average accuracies of 96.4% and 94.5% using LOOCV and 10-fold CV, respectively. The accuracies of SRC and CRC are quite close and at least 7% higher than those of the other three methods. Table 6 shows the performance in terms of 5-fold CV. All average accuracies are reduced compared with those of 10-fold CV because fewer samples are used for training. Note that the performance on ASD was reduced only slightly, because the ASD gene dataset contains far more samples than the other gene datasets. The proposed method yielded a slightly higher average accuracy, implying that CR remarkably contributed to the classification. Considering these results, CRC is the most robust method, because it showed the highest accuracy with the fewest genes, not only on the test set but also under the other types of validation, including 10-fold CV and LOOCV.
Table 5

10-fold CV classification results.

Data set       CRC    SRC    KNN    SVM    Random forest
Leukemia       94.5%  94.1%  85.5%  91.6%  93.5%
DLBCL          94.5%  89.5%  89.6%  84.4%  89.6%
Breast Cancer  74.6%  73.1%  74.4%  73.1%  69.1%
ASD            82.8%  81.1%  70.8%  70.4%  68.7%
SRBCT          91.6%  91.6%  89.6%  84.8%  90.3%

Conclusion

This paper presented a new classifier model for classifying microarray gene expression. The proposed classifier uniquely incorporates CS dimensionality reduction to guide data classification. This work makes two important contributions. First, an effective CRC approach was implemented. This method exhibited very high performance and provided valuable insight into different types of cancer data sets. CRC did not require parameter optimization to facilitate classification and can be used for different types of gene expression data with multiple classes without any modification. Second, a CS method was adopted to reduce the dimensionality of the data. The sensing matrix is a very sparse random matrix with entries of 0 and ±√ρ. This method avoids the various feature extraction methods that are computationally expensive. The proposed method can be introduced into clinical applications for individual patients: accurate diagnostics can be provided by measuring only a few gene expression data. We tested our algorithm on publicly available data sets of several diseases, including leukemia, breast cancer, ASD, DLBCL, and SRBCT. In conclusion, CRC achieves high classification accuracy and fast computational speed.

The code (Matlab) and data files along with instructions are provided as a zip file.

References (30 in total)

1.  Diffuse large B-cell lymphoma outcome prediction by gene-expression profiling and supervised machine learning.

Authors:  Margaret A Shipp; Ken N Ross; Pablo Tamayo; Andrew P Weng; Jeffery L Kutok; Ricardo C T Aguiar; Michelle Gaasenbeek; Michael Angelo; Michael Reich; Geraldine S Pinkus; Tane S Ray; Margaret A Koval; Kim W Last; Andrew Norton; T Andrew Lister; Jill Mesirov; Donna S Neuberg; Eric S Lander; Jon C Aster; Todd R Golub
Journal:  Nat Med       Date:  2002-01       Impact factor: 53.440

2.  A compressed sensing based approach for subtyping of leukemia from gene expression data.

Authors:  Wenlong Tang; Hongbao Cao; Junbo Duan; Yu-Ping Wang
Journal:  J Bioinform Comput Biol       Date:  2011-10       Impact factor: 1.122

3. [Review] Machine learning in bioinformatics: a brief survey and recommendations for practitioners.

Authors:  Harish Bhaskar; David C Hoyle; Sameer Singh
Journal:  Comput Biol Med       Date:  2005-10-13       Impact factor: 4.589

4.  Compressive sensing DNA microarrays.

Authors:  Wei Dai; Mona A Sheikh; Olgica Milenkovic; Richard G Baraniuk
Journal:  EURASIP J Bioinform Syst Biol       Date:  2009-01-13

5. [Review] A systematic review of molecular imaging (PET and SPECT) in autism spectrum disorder: current state and future research opportunities.

Authors:  Nicole R Zürcher; Anisha Bhanot; Christopher J McDougle; Jacob M Hooker
Journal:  Neurosci Biobehav Rev       Date:  2015-02-12       Impact factor: 8.989

6.  Multiclass cancer diagnosis using tumor gene expression signatures.

Authors:  S Ramaswamy; P Tamayo; R Rifkin; S Mukherjee; C H Yeang; M Angelo; C Ladd; M Reich; E Latulippe; J P Mesirov; T Poggio; W Gerald; M Loda; E S Lander; T R Golub
Journal:  Proc Natl Acad Sci U S A       Date:  2001-12-11       Impact factor: 11.205

7.  Optimal combination of feature selection and classification via local hyperplane based learning strategy.

Authors:  Xiaoping Cheng; Hongmin Cai; Yue Zhang; Bo Xu; Weifeng Su
Journal:  BMC Bioinformatics       Date:  2015-07-10       Impact factor: 3.169

8.  The effect of gender on the neuroanatomy of children with autism spectrum disorders: a support vector machine case-control study.

Authors:  Alessandra Retico; Alessia Giuliano; Raffaella Tancredi; Angela Cosenza; Fabio Apicella; Antonio Narzisi; Laura Biagi; Michela Tosetti; Filippo Muratori; Sara Calderoni
Journal:  Mol Autism       Date:  2016-01-19       Impact factor: 7.509

9.  An anatomically comprehensive atlas of the adult human brain transcriptome.

Authors:  Michael J Hawrylycz; Ed S Lein; Angela L Guillozet-Bongaarts; Elaine H Shen; Lydia Ng; Jeremy A Miller; Louie N van de Lagemaat; Kimberly A Smith; Amanda Ebbert; Zackery L Riley; Chris Abajian; Christian F Beckmann; Amy Bernard; Darren Bertagnolli; Andrew F Boe; Preston M Cartagena; M Mallar Chakravarty; Mike Chapin; Jimmy Chong; Rachel A Dalley; Barry David Daly; Chinh Dang; Suvro Datta; Nick Dee; Tim A Dolbeare; Vance Faber; David Feng; David R Fowler; Jeff Goldy; Benjamin W Gregor; Zeb Haradon; David R Haynor; John G Hohmann; Steve Horvath; Robert E Howard; Andreas Jeromin; Jayson M Jochim; Marty Kinnunen; Christopher Lau; Evan T Lazarz; Changkyu Lee; Tracy A Lemon; Ling Li; Yang Li; John A Morris; Caroline C Overly; Patrick D Parker; Sheana E Parry; Melissa Reding; Joshua J Royall; Jay Schulkin; Pedro Adolfo Sequeira; Clifford R Slaughterbeck; Simon C Smith; Andy J Sodt; Susan M Sunkin; Beryl E Swanson; Marquis P Vawter; Derric Williams; Paul Wohnoutka; H Ronald Zielke; Daniel H Geschwind; Patrick R Hof; Stephen M Smith; Christof Koch; Seth G N Grant; Allan R Jones
Journal:  Nature       Date:  2012-09-20       Impact factor: 49.962

10.  Large-scale machine learning for metagenomics sequence classification.

Authors:  Kévin Vervier; Pierre Mahé; Maud Tournoud; Jean-Baptiste Veyrieras; Jean-Philippe Vert
Journal:  Bioinformatics       Date:  2015-11-20       Impact factor: 6.937

