
Cancer Characteristic Gene Selection via Sample Learning Based on Deep Sparse Filtering.

Jian Liu1, Yuhu Cheng1, Xuesong Wang2, Lin Zhang1, Z Jane Wang3.   

Abstract

Identification of characteristic genes associated with specific biological processes of different cancers could provide insights into the underlying cancer genetics and cancer prognostic assessment. It is of critical importance to select such characteristic genes effectively. In this paper, a novel unsupervised characteristic gene selection method based on sample learning and sparse filtering, Sample Learning based on Deep Sparse Filtering (SLDSF), is proposed. With sample learning, the proposed SLDSF can better represent the gene expression level by the transformed sample space. Most unsupervised characteristic gene selection methods did not consider deep structures, while a multilayer structure may learn more meaningful representations than a single layer, therefore deep sparse filtering is investigated here to implement sample learning in the proposed SLDSF. Experimental studies on several microarray and RNA-Seq datasets demonstrate that the proposed SLDSF is more effective than several representative characteristic gene selection methods (e.g., RGNMF, GNMF, RPCA and PMD) for selecting cancer characteristic genes.


Year:  2018        PMID: 29844511      PMCID: PMC5974408          DOI: 10.1038/s41598-018-26666-0

Source DB:  PubMed          Journal:  Sci Rep        ISSN: 2045-2322            Impact factor:   4.379


Introduction

Cancer is related to abnormal cell growth with the potential to invade or spread to other parts of the human body. Currently there are more than 100 known types of cancer that are very detrimental to humans. According to the World Health Organization's World Cancer Report 2014, about 14.1 million new cases of cancer emerged globally (excluding non-melanoma skin cancer), causing about 8.2 million deaths and accounting for 14.6% of all human deaths[1]. In the United States, the average five-year survival rate for cancer is 66%[2]. Genetically, genes that regulate cell growth and differentiation can be altered so that a normal cell develops into a cancer cell. These genes can usually be divided into two broad categories: oncogenes, which promote cell growth and reproduction, and suppressor genes, which inhibit cell division and survival[3]. In contemporary molecular biology, it remains a challenge to accurately identify such genes relevant to key cellular processes. The advances of DNA microarray and deep sequencing technologies have made it possible for biologists to measure the expression levels of thousands of genes simultaneously[4,5], so genes can now be profiled more comprehensively and in greater detail than ever before. However, in each gene expression dataset, the number of genes is so large (thousands or even more than 10,000) that it is extremely difficult to analyze the whole set of gene expression data. Fortunately, for a given biological process, only a small set of genes may take part in the regulation of the gene expression level[6,7]. Such a small set of genes is usually referred to as characteristic genes. Identification of the characteristic genes associated with specific biological processes of different types of cancers could provide important insights into the underlying genetics and prognostic assessment of cancer.
Therefore, effective identification of such characteristic genes has been an important research topic, which is technically closely related to feature selection. Recently, deep learning, originally proposed by Hinton et al.[8] to learn a multiple hierarchical network by training[9], has drawn increasing attention. With the obtained deep non-linear network, deep learning can provide a complex function approximation. Numerous deep learning methods have been proposed for different learning tasks, such as feature learning, classification, and recognition. The most commonly used models include deep belief networks (DBNs)[8], stacked auto-encoders (SAEs)[10], and convolutional neural networks (CNNs)[11]. These models have been successfully applied to numerous fields (e.g., image processing, natural language processing, and medical data analytics) and achieved promising performance. In particular, they have been used to analyze gene expression data. For example, an SAE was successfully applied by Fakoor et al. to enhance cancer diagnosis and classification based on gene expression data[12], and Liu et al. proposed a sample expansion based 1-dimensional CNN for classifying tumor gene expression data[13]. However, training DBN, SAE and CNN models is often time-consuming and labor-intensive, since a large number of hyperparameters need to be tuned. Sparse filtering, an unsupervised feature learning algorithm, works by optimizing the sparsity of the feature distribution and has only one hyperparameter: the number of features to learn. Since the central idea of sparse filtering is to avoid explicit modeling of the data distribution, it has a simple formulation and permits effective learning. Furthermore, sparse filtering can be extended into multi-layer networks: deep sparse filtering can learn meaningful features in additional layers by greedy layer-wise stacking[14]. Therefore, in this paper, we employ deep sparse filtering to select characteristic genes.
Several deep learning methods have been explored to select cancer genes. Danaee et al. used a stacked denoising autoencoder (SDAE) to detect breast cancer and identify relevant genes[15]. In their work, the SDAE is first used to extract functional features from gene expression profiles; the performance of the extracted representation is then evaluated through supervised classification models; lastly, a set of highly interactive genes is identified by analyzing the SDAE connectivity matrices. Ibrahim et al. selected genes/miRNAs at multiple levels by using a DBN and active learning to enhance the classification accuracy[16]. The major steps of their approach are as follows: (1) use the DBN to extract high-level representations of the gene expression profiles; (2) apply a feature selection method to rank genes; (3) obtain the finally selected genes using active learning. Both the SDAE[15] and DBN[16] approaches are supervised methods that learn high-level features of the gene expression data. Feature learning maps a high-dimensional feature space of the original data into a low-dimensional space so that the data can be better represented by the transformed feature space. Since each feature in the gene expression data represents a gene, if we employ traditional feature learning methods, the original feature space will be changed and we cannot specify the exact genes in the new feature space. Therefore traditional feature learning is not applicable to characteristic gene selection. In addition, since gene expression datasets generally have high-dimensional features and small sample sizes, SDAE and DBN suffer from serious overfitting when applied to gene expression data. Moreover, SDAE and DBN perform poorly when unlabeled data are abundant but labeled data are scarce, which is exactly our case. Considering the limited labeled data in our problem, unsupervised learning is more suitable.
To address the above concerns, and in contrast to previous feature learning methods, we propose the idea of sample learning, an unsupervised method, for selecting genes with deep learning models. Sample learning transforms the sample space of gene expression data and ensures that the features (genes) can be better represented by the transformed sample space, so that we can specify the exact characteristic genes from the transformed sample space. In this paper, by combining sample learning and deep sparse filtering, a novel unsupervised characteristic gene selection method, named Sample Learning based on Deep Sparse Filtering (SLDSF), is proposed for cancer characteristic gene selection. In the proposed method, firstly, the idea of sample learning for selecting characteristic genes is presented. Then the applicability of sample learning using sparse filtering is explained. Finally, the deep sparse filtering framework is extended by using a feed-forward network. Our later tests on gene expression datasets demonstrate that cancer characteristic genes can be effectively selected using the proposed SLDSF. The remainder of the paper is structured as follows. In Section 2, the proposed SLDSF for selecting cancer characteristic genes is presented. Experimental results on several cancer gene expression datasets, comparing the proposed SLDSF with four unsupervised methods (RGNMF, GNMF, RPCA and PMD), are reported in Section 3. In Section 4, the conclusions are given.

Methods

Sparse Filtering

Sparse filtering[14], an unsupervised feature learning method, is easy to implement and has only one hyperparameter: the number of features to learn. It optimizes the sparsity of the feature distribution. The main idea of sparse filtering is to avoid explicit modeling of the data distribution; this yields a simple formulation and permits effective learning. Denote a gene expression dataset as $X$, where each row represents a feature and each column represents a sample. Denote $F$ as the feature distribution matrix over $X$, whose entry $f_i^{(j)}$ represents the activity of the $i$-th feature on the $j$-th sample. By imposing sparse constraints on $F$, a filter matrix $W$ can be obtained which satisfies the soft-absolute function $F = \sqrt{(WX)^2 + \varepsilon}$, and each column in $W$ can be viewed as a sparse filter. Sparse filtering involves three steps: normalizing $F$ by rows, then normalizing by columns, and finally summing up the absolute values of all elements. Denote $f_i$ as the $i$-th row of $F$ and $f^{(j)}$ as the $j$-th column of $F$. To be specific, each feature of $F$ is first divided by its L2-norm across all samples, $\tilde{f}_i = f_i / \|f_i\|_2$, which normalizes each feature to be equally active. Then, each sample is divided by its L2-norm across all features, $\hat{f}^{(j)} = \tilde{f}^{(j)} / \|\tilde{f}^{(j)}\|_2$, to make all samples lie on the unit L2-ball. Finally, all the normalized elements are optimized for sparseness by using the L1-norm. Therefore the objective function of sparse filtering can be expressed as follows:

$$\min \sum_{j=1}^{n} \|\hat{f}^{(j)}\|_1 = \sum_{j=1}^{n} \left\| \frac{\tilde{f}^{(j)}}{\|\tilde{f}^{(j)}\|_2} \right\|_1 \quad (1)$$

Sparse filtering is implemented by the L-BFGS method, a commonly used iterative algorithm for solving unconstrained nonlinear optimization problems[17]. Under the objective function in Eq. (1), the feature distribution exhibits population sparsity, high dispersal and lifetime sparsity, which have been investigated in[18,19].
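The three normalization steps and the L1 objective above can be sketched in a few lines of NumPy. This is a minimal illustration under our own assumptions (the soft-absolute activation, toy dimensions, and SciPy's L-BFGS-B optimizer stand in for the authors' exact implementation):

```python
import numpy as np
from scipy.optimize import minimize

def sparse_filtering_objective(w_flat, X, n_features, eps=1e-8):
    """Sparse filtering loss: soft-absolute features, row-then-column
    L2 normalization (Eq. 1's two steps), then an L1 sparsity penalty."""
    W = w_flat.reshape(n_features, X.shape[0])
    F = np.sqrt((W @ X) ** 2 + eps)                     # soft-absolute activation
    F = F / np.linalg.norm(F, axis=1, keepdims=True)    # step 1: normalize each feature (row)
    F = F / np.linalg.norm(F, axis=0, keepdims=True)    # step 2: normalize each sample (column)
    return np.abs(F).sum()                              # L1 sparseness over all entries

# toy data: 20 input dimensions, 30 samples; learn 10 sparse filters
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 30))
w0 = rng.standard_normal(10 * 20)
res = minimize(sparse_filtering_objective, w0, args=(X, 10),
               method="L-BFGS-B", options={"maxiter": 100})
print(res.fun <= sparse_filtering_objective(w0, X, 10))  # True: the loss decreased
```

Because the objective only scores the normalized feature distribution, no data-distribution model is needed, which is what keeps the method essentially hyperparameter-free apart from the number of filters.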

Population sparsity

Population sparsity means that each sample should have only a few active (non-zero) features. The term $\|\hat{f}^{(j)}\|_1$ in Eq. (1) reflects this characteristic of the features on the $j$-th sample. Because $\hat{f}^{(j)}$ is constrained to lie on the unit L2-ball, the objective function is minimized when the features are sparse.

High dispersal

High dispersal means that the distribution should have similar statistics across different features. Specifically, the considered statistic is the mean squared activation of each feature, obtained by averaging the squared values in the feature matrix across the samples. This statistic should be roughly the same for all features, suggesting that the contributions of all features should be roughly equal. In the first step of sparse filtering, each feature of $F$ is divided by its L2-norm across all samples, $\tilde{f}_i = f_i / \|f_i\|_2$, to normalize each feature to be equally active.

Lifetime sparsity

Lifetime sparsity means each feature should be active in only a few samples, which ensures that the features are discriminative enough to distinguish samples. Concretely, each row of the feature distribution matrix should contain only a few active (non-zero) elements. In the objective function of sparse filtering, lifetime sparsity is guaranteed by population sparsity and high dispersal: due to population sparsity, the feature distribution matrix contains many zero elements, and due to high dispersal these zero elements are roughly evenly distributed across all features. Accordingly, each feature has only a small number of non-zero elements and is thus lifetime sparse.
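A small numerical check illustrates the normalization machinery behind these properties: after the row normalization every feature contributes the same total squared activation (high dispersal), and after the column normalization every sample lies on the unit L2-ball, the constraint under which the L1 objective favors population sparsity. The toy matrix below is hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)
F = np.abs(rng.standard_normal((5, 8)))   # toy feature distribution matrix (5 features x 8 samples)

F_row = F / np.linalg.norm(F, axis=1, keepdims=True)           # step 1: equalize features
F_hat = F_row / np.linalg.norm(F_row, axis=0, keepdims=True)   # step 2: unit L2-ball samples

# high dispersal: every feature (row) now has the same total squared activation
print(np.allclose((F_row ** 2).sum(axis=1), 1.0))              # True
# population sparsity constraint: every sample (column) lies on the unit L2-ball
print(np.allclose(np.linalg.norm(F_hat, axis=0), 1.0))         # True
```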

Sample Learning for Characteristic Gene Selection

Traditionally, feature learning algorithms usually transform the feature space to achieve dimensionality reduction. To be more specific, a high-dimensional feature space of the original data is mapped into a low-dimensional feature space by using feature learning methods which maintain the distance information between samples. In other words, feature learning is a process of representing the samples in the low dimensional feature space which is obtained by using some mapping or rescaling methods. Feature learning can be used for classification tasks by transforming the feature space to achieve the desired results. However, direct feature learning is not applicable for characteristic gene selection. In our problem, since each feature represents a gene, if we use feature learning methods to process the gene expression data, the original feature space will be changed and we cannot identify the exact genes in the new feature space. In order to explain this problem intuitively, a common feature learning model is shown in Fig. 1(a).
Figure 1

The differences between sample learning and feature learning. (a) A feature learning model for the lung cancer dataset. (b) A sample learning model for the lung cancer dataset.

The lung cancer dataset, which contains 12600 genes on 203 samples, is taken as an example, where each row represents a gene (the names of some genes are provided in Fig. 1(a)) and each column represents a sample. After being processed by feature learning methods, the feature space of the lung cancer dataset is changed and we cannot locate the exact genes in the transformed feature space. In this paper, our goal is to find a group of characteristic genes associated with specific biological processes of different cancers, which may illuminate the underlying genetics and contribute to prognostic assessment. Obviously, without knowing the exact genes in the transformed feature space, this goal cannot be achieved. Therefore, direct feature learning is not suitable for characteristic gene selection. To address this problem, sample learning is proposed to analyze gene expression data. In contrast to feature learning, sample learning transforms the sample space. A sample learning model for the lung cancer dataset is illustrated in Fig. 1(b). After being processed by sample learning, the feature space of the lung cancer dataset remains unchanged while the sample space is transformed. In this case, the information of each gene can be better represented by the transformed sample space, and we can then select characteristic genes from the processed matrix in Fig. 1(b) through some feature selection strategy. In short, sample learning is a process in which the features are represented by a transformed sample space obtained via some mapping or rescaling algorithm.
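The distinction can be made concrete with matrix shapes. In this hypothetical sketch (random data and arbitrary projection sizes of our choosing), feature learning shrinks the 12600-gene axis and destroys gene identity, while sample learning transforms only the 203-sample axis and leaves every row attached to its named gene:

```python
import numpy as np

genes, samples = 12600, 203                # lung cancer dataset dimensions from the text
X = np.random.default_rng(2).standard_normal((genes, samples))

# feature learning: project the gene axis down to, say, 50 learned features --
# after this, a row no longer corresponds to any single gene
P = np.random.default_rng(3).standard_normal((50, genes))
feat_learned = P @ X                       # shape (50, 203): gene identity lost

# sample learning: transform the sample axis instead (here to 80 learned samples);
# each of the 12600 rows still corresponds to one named gene
Q = np.random.default_rng(4).standard_normal((samples, 80))
sample_learned = X @ Q                     # shape (12600, 80): gene axis intact

print(feat_learned.shape)    # (50, 203)
print(sample_learned.shape)  # (12600, 80)
```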

Applicability Analysis of Sample Learning Using Sparse Filtering

In the subsection above, the idea of sample learning was introduced for cancer characteristic gene selection. In particular, we adopt sparse filtering for sample learning. As mentioned above, the feature learning objective function in Eq. (1) gives the feature distribution three desirable characteristics; sample learning provides the same characteristics for the sample distribution. Suppose there is a sample distribution matrix over a gene expression dataset, where each row is a sample, each column is a gene, and the elements are the activities of samples on specific genes. A detailed explanation of how sample learning satisfies the three desirable characteristics of the sample distribution is as follows. Population sparsity requires that each gene should have only a few non-zero samples. Specifically, for each gene (one column) in the sample distribution matrix, only a small number of non-zero entries are allowed. These non-zero entries indicate that the gene is differentially expressed in the corresponding samples, reflecting the fact that a gene is rarely differentially expressed in all samples. The cancer characteristic genes can be selected according to these differentially expressed genes. Lifetime sparsity requires that each sample should be active on only a few genes, which ensures that the samples are discriminative enough to distinguish genes. In a gene expression dataset, each sample has expression levels for all genes, but only a small number of genes are differentially expressed in each sample. Since our purpose is to select differentially expressed genes, the samples are discriminative enough to distinguish genes. Here, the non-zero entries in each sample represent differentially expressed genes and the zero entries represent non-differentially expressed genes. Therefore, each sample in the sample distribution matrix should have a limited number of non-zero entries.
High dispersal requires that the distribution should have similar statistics across different samples, which suggests that the contributions of all samples should be roughly equal. This property prevents the same samples from always being active and keeps the extracted samples close to orthogonal[19]. After sample learning with high dispersal enforced, the extracted samples can more effectively represent the differential expression levels of genes, which is conducive to selecting characteristic genes.

The Framework of SLDSF

In this subsection, firstly, the Sample Learning based Sparse Filtering (SLSF) method is presented. Then SLSF is expanded into SLDSF, a deep structure for learning more meaningful representations[14]. Denote a gene expression dataset as $B$, where each row represents a sample and each column represents a gene. In order to eliminate the dimensional effect between indicators, the gene expression dataset is normalized into $X$, which is used to implement sample learning. Denote a sample distribution matrix over $X$ as $S$, whose element $s_i^{(j)}$ is the activity of the $i$-th sample on the $j$-th gene. A sparse filter matrix $Y$ which satisfies the soft-absolute function $S = \sqrt{(YX)^2 + \varepsilon}$ can be obtained, and each column in $Y$ can be regarded as a sparse filter. Denote $s_i$ as the $i$-th row of $S$ and $s^{(j)}$ as the $j$-th column of $S$. Similar to sparse filtering, sample learning based sparse filtering also has three steps: normalizing by rows with the L2-norm, $\tilde{s}_i = s_i / \|s_i\|_2$; then normalizing by columns with the L2-norm, $\hat{s}^{(j)} = \tilde{s}^{(j)} / \|\tilde{s}^{(j)}\|_2$; and finally optimizing all the normalized elements for sparseness by using the L1-norm. For the $m$ genes in the gene expression dataset $X$, the objective of the SLSF method can be written as

$$\min \sum_{j=1}^{m} \|\hat{s}^{(j)}\|_1 \quad (2)$$

SLSF can also be implemented by the L-BFGS method. The SLSF method can be regarded as the first layer of the SLDSF method. After training a single layer of samples with SLSF, one can compute the normalized samples and then use these as the input to SLDSF for learning the second layer of samples. The remaining layers can be learnt in the same manner. The framework of sample learning with SLDSF on gene expression data is described in Fig. 2.
Figure 2

The framework of sample learning with SLDSF on gene expression data.

Firstly, the gene expression dataset $B$ is preprocessed by the following formula

$$X = \frac{B - \mathrm{mean}(B)}{\mathrm{std}(B)} \cdot \mathrm{std}(E) + \mathrm{mean}(E) \quad (3)$$

where mean($B$) is the row-wise mean of the gene expression data matrix $B$, std($B$) is the row-wise standard deviation of $B$, and std($E$) and mean($E$) are the row-wise standard deviation and mean of the expected matrix $E$. Here, std($E$) and mean($E$) are simply set to be 1 and 0, respectively. Secondly, the preprocessed matrix $X$ in Eq. (3) is regarded as the input layer to implement sample learning with SLDSF. In Fig. 2, suppose we need $k$ layers in SLDSF, in addition to the input layer. We denote $X$ as the input layer which has $n$ samples to be learned, $L_k(S)$ as the output matrix of the $k$-th layer, $L_k(s^{(t)})$ as the $t$-th sample in the output matrix $L_k(S)$, $L_k(Y)$ as the sparse filter matrix of the $k$-th layer and $L_k(Y^{\Delta})$ as the optimal sparse filter matrix of the $k$-th layer. For Layer 1 in Fig. 2, SLSF is taken as Layer 1 of SLDSF. Here, we denote $L_1(S)$ as the sample distribution matrix of Layer 1; the objective function in Layer 1 can be written as $L_1(J)$, and we have

$$L_1(J) = \sum_{j=1}^{m} \| L_1(\hat{S})^{(j)} \|_1 \quad (4)$$

where $L_1(\hat{S})$ is the matrix obtained by normalizing $L_1(\tilde{S})$ via columns with the L2-norm, and $L_1(\tilde{S})$ is the matrix obtained by normalizing $L_1(S)$ via rows with the L2-norm. In order to obtain the optimal solution of Eq. (4), we use the Back Propagation (BP) method to adjust the sparse filter matrix $L_1(Y)$. The gradient of $L_1(Y)$ on the objective function $L_1(J)$ in Eq. (4) can be written as

$$L_1(\Delta Y) = \frac{\partial L_1(J)}{\partial L_1(Y)} \quad (5)$$

With the chain rule, Eq. (5) can be expanded into the following form

$$L_1(\Delta Y) = \frac{\partial L_1(J)}{\partial L_1(\hat{S})} \cdot \frac{\partial L_1(\hat{S})}{\partial L_1(\tilde{S})} \cdot \frac{\partial L_1(\tilde{S})}{\partial L_1(S)} \cdot \frac{\partial L_1(S)}{\partial L_1(Y)} \quad (6)$$

where $L_1(\Delta Y)$ is the gradient of $L_1(Y)$ on $L_1(J)$ in Eq. (4). The objective function $L_1(J)$ and $L_1(\Delta Y)$ can be optimized by using the L-BFGS method[17] to achieve the optimal sparse filter matrix $L_1(Y^{\Delta})$. The output matrix of Layer 1 is obtained by using

$$L_1(S^{\Delta}) = \sqrt{(L_1(Y^{\Delta})\,X)^2 + \varepsilon} \quad (7)$$

After training the samples of Layer 1 in SLDSF, the optimal sample distribution matrix $L_1(S^{\Delta})$ is obtained as the output of Layer 1. For Layer 2, we choose the feedforward network to train the samples.
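The row-wise rescaling of Eq. (3), with the expected statistics set to std = 1 and mean = 0, amounts to a per-row z-score. A minimal sketch (the function name and toy data are ours):

```python
import numpy as np

def preprocess(B, expected_std=1.0, expected_mean=0.0):
    """Row-wise rescaling as in Eq. (3): subtract the per-row mean, divide by
    the per-row standard deviation, then rescale to the expected statistics
    (std = 1 and mean = 0 in the paper, i.e. a plain z-score)."""
    mu = B.mean(axis=1, keepdims=True)
    sigma = B.std(axis=1, keepdims=True)
    return (B - mu) / sigma * expected_std + expected_mean

B = np.random.default_rng(5).standard_normal((6, 10)) * 3.0 + 7.0  # toy expression matrix
X = preprocess(B)
print(np.allclose(X.mean(axis=1), 0.0))  # True: each row now has mean 0
print(np.allclose(X.std(axis=1), 1.0))   # True: and standard deviation 1
```

This removes the dimensional effect between indicators: after rescaling, every gene profile is on the same scale regardless of its raw expression magnitude.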
In Layer 2, we firstly normalize $L_1(S^{\Delta})$ by rows, and then by columns, using the L2-norm. The normalized $L_1(S^{\Delta})$ is taken as the input to SLDSF for learning the second layer of samples. Following the computation process of Layer 1, we can obtain the optimal sparse filter matrix $L_2(Y^{\Delta})$ and the output sample distribution matrix $L_2(S^{\Delta})$ of Layer 2. The remaining layers can be learnt in the same manner. Finally, we obtain the final output sample distribution matrix $L_k(S^{\Delta})$ in Layer $k$. Note that, since SLDSF randomly initializes the sparse filter matrices, the results from running the SLDSF algorithm multiple times will not be exactly the same. The cancer characteristic genes are selected according to $L_k(S^{\Delta})$, and the details are presented in the following subsection. To summarize, the major steps of the proposed SLDSF algorithm are described in Table 1.
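The greedy layer-wise scheme described above can be sketched as follows. This is an illustrative toy implementation under our own assumptions (soft-absolute activation, SciPy's L-BFGS-B, small random data), not the authors' code: each layer fits one sparse filter, and its L2-normalized output feeds the next layer.

```python
import numpy as np
from scipy.optimize import minimize

def fit_layer(X, n_out, seed, eps=1e-8):
    """One sparse-filtering layer: learn filters with L-BFGS-B and return
    the (soft-absolute) output matrix. A minimal sketch only."""
    def loss(w):
        F = np.sqrt((w.reshape(n_out, X.shape[0]) @ X) ** 2 + eps)
        F = F / np.linalg.norm(F, axis=1, keepdims=True)   # row normalization
        F = F / np.linalg.norm(F, axis=0, keepdims=True)   # column normalization
        return np.abs(F).sum()                             # L1 sparseness
    w0 = np.random.default_rng(seed).standard_normal(n_out * X.shape[0])
    w = minimize(loss, w0, method="L-BFGS-B", options={"maxiter": 50}).x
    return np.sqrt((w.reshape(n_out, X.shape[0]) @ X) ** 2 + eps)

# greedy layer-wise stacking: each layer's normalized output is the next input
X = np.random.default_rng(6).standard_normal((15, 40))  # toy input matrix
layer_sizes = [12, 10, 8]                               # k = 3 layers
out = X
for i, n_out in enumerate(layer_sizes):
    out = out / np.linalg.norm(out, axis=0, keepdims=True)  # L2-normalize the input
    out = fit_layer(out, n_out, seed=i)
print(out.shape)  # (8, 40)
```

Note the random initialization of each layer's filter matrix, which is why repeated runs of SLDSF do not produce exactly the same result.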
Table 1

The SLDSF algorithm.

Input: gene expression dataset B; number of samples to be learned t; number of layers k.
Output: optimal sample distribution matrix L_k(S^Δ).
1. Initialize L_1(Y), L_2(Y), ..., L_k(Y).
2. Normalize the gene expression dataset B by Eq. (3) as the input of Layer 1.
3. for i = 1; i ≤ k; i++
4.     Obtain L_i(J) by Eq. (4).
5.     Calculate L_i(ΔY) by Eq. (6).
6.     Update L_i(Y^Δ) by the L-BFGS method until convergence.
7.     Obtain L_i(S^Δ) by Eq. (7).
8.     Normalize L_i(S^Δ) by the L2-norm as the input of Layer i + 1.
9. end for
10. Output L_k(S^Δ).

Cancer Characteristic Gene Selection by SLDSF

After being processed by SLDSF, the gene expression dataset can be better represented by the optimal sample distribution matrix $L_k(S^{\Delta})$, since $L_k(S^{\Delta})$ possesses the desirable properties of the sample distribution. Therefore, cancer characteristic genes can be selected by exploring $L_k(S^{\Delta})$ effectively. The main idea is explained as follows. According to Eq. (7), all elements in $L_k(S^{\Delta})$ are non-negative. We therefore sum the elements by columns to obtain the evaluating vector $v$, whose $j$-th entry is

$$v_j = \sum_{i} [L_k(S^{\Delta})]_{ij}$$

Generally, the more differentially expressed a gene is, the larger the corresponding element in $v$ is. Hence, we sort the entries of $v$ in descending order and take the top $h$ genes as the characteristic ones.
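The column-sum ranking can be sketched directly; the helper name and the toy matrix below are hypothetical:

```python
import numpy as np

def select_characteristic_genes(S_opt, h):
    """Rank genes by the column sums of the optimal sample distribution
    matrix (rows = learned samples, columns = genes) and return the
    indices of the top-h genes."""
    scores = S_opt.sum(axis=0)            # the evaluating vector v
    return np.argsort(scores)[::-1][:h]   # descending order, keep the top h

# toy non-negative sample distribution matrix: 4 learned samples x 6 genes
S = np.array([[0.1, 0.9, 0.0, 0.2, 0.05, 0.3],
              [0.0, 0.8, 0.1, 0.1, 0.00, 0.4],
              [0.2, 0.7, 0.0, 0.3, 0.10, 0.2],
              [0.1, 0.6, 0.2, 0.1, 0.05, 0.1]])
top = select_characteristic_genes(S, h=3)
print(top.tolist())  # [1, 5, 3]
```

Gene 1 has the largest column sum (3.0), so it ranks first; in the real pipeline the indices map back to named genes because sample learning leaves the gene axis unchanged.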

Results and Discussion

This section reports several experimental results. We first test the proposed method on three publicly available microarray datasets, i.e., the lung cancer dataset[20], the leukemia dataset[21] and the diffuse large B cell lymphoma (DLBCL) dataset[22]. We also test our method on two RNA-Seq datasets, i.e., esophageal cancer (ESCA) and squamous cell carcinoma of head and neck (HNSC). These five datasets are summarized in Table 2, and they can be found in Supplementary Datasets. To demonstrate the effectiveness of the proposed SLDSF method for selecting cancer characteristic genes, four commonly used gene selection methods, RGNMF[23], GNMF[24], RPCA[25] and PMD[26], are employed for comparison. The detailed method descriptions can be found in Supplementary S1. We also provide the code of all methods used in this paper in Supplementary Codes. The programs were implemented in MATLAB 2014a on a PC equipped with an Intel Core i5 CPU and 8 GB of memory.
Table 2

Summary of gene expression datasets.

Type       | Name        | Classes                                                                                                                          | Genes | Samples | No. of classes
Microarray | Lung Cancer | Lung adenocarcinomas; squamous cell lung carcinomas; pulmonary carcinoids; small-cell lung carcinoma cases; normal lung samples  | 12600 | 203     | 5
Microarray | Leukemia    | Acute myelogenous leukemia; acute lymphoblastic leukemia                                                                         | 5000  | 38      | 2
Microarray | DLBCL       | 'Cured' patients; 'fatal/refractory' patients                                                                                    | 7129  | 58      | 2
RNA-Seq    | ESCA        | Diseased samples; normal samples                                                                                                 | 20502 | 192     | 2
RNA-Seq    | HNSC        | Diseased samples; normal samples                                                                                                 | 20502 | 418     | 2

Gene Ontology Analysis

For fair comparisons, 100 genes were selected by each of the SLDSF, RGNMF, GNMF, RPCA and PMD methods. The 100 genes selected by SLDSF can be found in Supplementary S2. The GO (Gene Ontology) enrichment of functional annotation of the characteristic genes selected by the five methods was evaluated with ToppFun, which can be used to describe the characteristic genes in the input or query set and to help discover what functions these genes may have in common[27,28]. The tool is publicly available at http://toppgene.cchmc.org/enrichment.jsp. In this paper, GO: Biological Process is the main object of analysis.

Test on Microarray Datasets

This subsection reports experimental results on three microarray datasets: the lung cancer, leukemia and DLBCL datasets. Since SLDSF is a deep structure for sample learning, we first tested the influence of the number of layers and the number of samples; the results can be found in Supplementary S3. From Supplementary S3, the proposed SLDSF obtains the best results on all three datasets when the numbers of layers and samples are 3 and 200, respectively, so we adopt the 3-layer SLDSF with 200 samples in the later comparisons. The results of the five methods on the lung cancer, leukemia and DLBCL datasets are summarized in Tables 3, 4 and 5, respectively. In the tables, the best results among the five methods are shown in bold. For simplicity, only the P-values of the top 10 GO terms are shown.
Table 3

The P-Values of GO terms corresponding to different methods on the lung cancer dataset.

ID         | Name                                                                 | SLDSF P-value | RGNMF P-value | GNMF P-value | RPCA P-value | PMD P-value
GO:0000184 | nuclear-transcribed mRNA catabolic process, nonsense-mediated decay  | 5.05E-72      | 2.16E-16      | 3.16E-16     | None         | 5.24E-15
GO:0006614 | SRP-dependent cotranslational protein targeting to membrane          | 7.03E-72      | 2.77E-16      | 4.04E-16     | None         | 6.58E-15
GO:0006613 | cotranslational protein targeting to membrane                        | 1.69E-71      | 4.47E-16      | 6.53E-16     | None         | 1.02E-14
GO:0045047 | protein targeting to ER                                              | 9.22E-71      | 7.09E-16      | 1.04E-15     | None         | 1.56E-14
GO:0072599 | establishment of protein localization to endoplasmic reticulum       | 4.68E-70      | 9.91E-16      | 1.45E-15     | None         | 2.12E-14
GO:0070972 | protein localization to endoplasmic reticulum                        | 4.61E-67      | 5.15E-15      | 7.50E-15     | None         | 9.63E-14
GO:0019080 | viral gene expression                                                | 5.18E-64      | 3.47E-14      | 5.19E-14     | None         | 4.49E-13
GO:0044033 | multi-organism metabolic process                                     | 4.62E-63      | 6.77E-14      | 1.01E-13     | None         | 1.01E-13
GO:0019083 | viral transcription                                                  | 6.96E-63      | 3.91E-13      | 5.66E-13     | None         | 5.14E-12
GO:0006415 | translational termination                                            | 5.27E-62      | 5.94E-15      | 8.91E-15     | None         | 8.79E-14
Table 4

The P-Values of GO terms corresponding to different methods on the leukemia dataset.

ID         | Name                                  | SLDSF P-value | RGNMF P-value | GNMF P-value | RPCA P-value | PMD P-value
GO:0006955 | immune response                       | 2.69E-18      | 4.14E-12      | 2.76E-11     | 3.45E-15     | 1.83E-11
GO:0001775 | cell activation                       | 8.94E-18      | 1.40E-14      | 1.35E-13     | 5.14E-19     | 8.60E-13
GO:0045321 | leukocyte activation                  | 2.28E-16      | 5.89E-13      | 5.34E-11     | 4.72E-16     | 4.01E-11
GO:0007159 | leukocyte cell-cell adhesion          | 5.86E-16      | 3.56E-13      | 4.58E-15     | 6.05E-14     | 4.07E-11
GO:0046649 | lymphocyte activation                 | 8.59E-16      | 3.13E-12      | 2.63E-09     | 2.95E-15     | 2.43E-11
GO:0016337 | single organismal cell-cell adhesion  | 1.11E-15      | 2.86E-12      | 2.02E-09     | 4.44E-12     | 2.10E-12
GO:0034109 | homotypic cell-cell adhesion          | 2.11E-15      | 1.05E-12      | 1.34E-09     | 1.26E-14     | 1.05E-10
GO:0070486 | leukocyte aggregation                 | 2.43E-15      | 1.60E-12      | 2.40E-09     | 2.00E-14     | 1.82E-10
GO:0098602 | single organism cell adhesion         | 4.87E-15      | 1.01E-12      | 7.14E-10     | 1.42E-11     | 7.25E-13
GO:0050776 | regulation of immune response         | 9.00E-15      | 7.66E-11      | 4.01E-09     | 1.13E-12     | 5.59E-11
Table 5

The P-Values of GO terms corresponding to different methods on the DLBCL dataset.

ID | Name | SLDSF | RGNMF | GNMF | RPCA | PMD
(all entries are P-Values)
GO:0006614 | SRP-dependent cotranslational protein targeting to membrane | 1.70E-93 | 4.29E-90 | 3.66E-91 | 1.94E-35 | 2.65E-92
GO:0006613 | cotranslational protein targeting to membrane | 5.05E-93 | 1.23E-89 | 1.05E-90 | 3.03E-35 | 7.62E-92
GO:0045047 | protein targeting to ER | 4.13E-92 | 9.48E-89 | 8.10E-90 | 7.19E-35 | 5.87E-91
GO:0072599 | establishment of protein localization to endoplasmic reticulum | 3.07E-91 | 6.65E-88 | 5.69E-89 | 1.65E-34 | 4.12E-90
GO:0000184 | nuclear-transcribed mRNA catabolic process, nonsense-mediated decay | 1.30E-90 | 2.72E-87 | 2.32E-88 | 2.46E-36 | 1.68E-89
GO:0070972 | protein localization to endoplasmic reticulum | 1.46E-87 | 2.51E-84 | 2.15E-85 | 5.78E-33 | 1.56E-86
GO:0006414 | translational elongation | 1.47E-82 | 1.84E-79 | 1.26E-80 | 2.02E-30 | 1.57E-80
GO:0006415 | translational termination | 2.12E-81 | 2.51E-78 | 2.16E-79 | 2.61E-30 | 2.80E-80
GO:0019080 | viral gene expression | 4.89E-81 | 5.62E-78 | 4.33E-79 | 7.12E-31 | 2.67E-79
GO:0044033 | multi-organism metabolic process | 6.33E-80 | 6.82E-77 | 5.27E-78 | 3.02E-30 | 3.40E-79

Test on the lung dataset

Lung cancer is the second most common cause of cancer-related death in women and the most common in men. In this paper, the lung cancer dataset presented by Bhattacharjee et al.[20] was adopted in our experiments. This dataset contains 12600 genes in 203 samples: histologically defined lung adenocarcinomas (139 samples), squamous cell lung carcinomas (21 samples), pulmonary carcinoids (20 samples), small-cell lung carcinomas (6 samples) and normal lung samples (17 samples). Table 3 shows the P-Values of the top 10 closely related lung cancer GO terms corresponding to the characteristic genes selected by five methods: SLDSF, RGNMF, GNMF, RPCA and PMD. In this table, ‘None’ denotes that the method selected no genes in the GO term. SLDSF, RGNMF, GNMF and PMD can select genes in all 10 GO terms while RPCA cannot, which suggests that the genes selected by SLDSF, RGNMF, GNMF and PMD may be involved in similar biological processes. In all 10 GO terms, the SLDSF method provides much better performance than the other four methods. A Venn diagram of the genes selected by the five methods is shown in Fig. 3(a); we define a ‘unique’ characteristic gene as a gene selected by only one method. From Fig. 3(a), there are 9 genes shared by all five methods, and SLDSF selects more ‘unique’ characteristic genes (81) than the other methods. This helps explain why SLDSF obtains much better performance on the GO terms in Table 3 and indicates that the 81 ‘unique’ genes are closely associated with these GO terms. The ‘unique’ characteristic genes selected by SLDSF should be further investigated to determine whether they are associated with lung cancer.
Figure 3

Venn diagram of genes selected by five methods on (a) lung cancer dataset, (b) leukemia dataset and (c) DLBCL dataset.

We studied the ‘unique’ genes selected by SLDSF according to the existing literature. The top 5 ‘unique’ characteristic genes selected by SLDSF are analyzed and shown in bold in the following explanations. For GAPDH (35905_s_at), clinical tissue studies showed that GAPDH protein levels were significantly up-regulated in lung squamous cell carcinoma tissues[29]. MAPK1, SRC, SMAD4, EEF1A1 (1288_s_at), TRAF2 and PLCG1 might be involved in smoking-induced lung cancer by interacting with each other, indicating that they might be responsible for the development of smoking-induced lung cancer[30]. IGHV4-31 (37864_s_at) has been detected as a candidate gene in peripheral blood mononuclear cell (PBMC) and tumor tissue groups of non-small cell lung cancer[31]. CYAT1 (33273_f_at) is one of the genes most frequently ranked as responsible for clustering by the method proposed by Mondal et al.[32] on the lung dataset. Czajkowski et al. reported perfect classification accuracy with only 3 genes, 37947_at, 33499_s_at (IGHA2) and 36528_at, on the lung cancer dataset, indicating that these 3 genes are crucial for lung cancer[33].
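The Venn-diagram bookkeeping used throughout this section (genes shared by all methods, and the ‘unique’ genes selected by exactly one method) reduces to simple set operations. A minimal sketch with hypothetical toy gene lists; the method names are kept, but the gene sets are illustrative, not the paper's actual selections:

```python
def unique_genes(selections):
    """For each method, the genes selected by that method and no other.
    `selections` maps a method name to its set of selected gene IDs."""
    uniques = {}
    for method, genes in selections.items():
        others = set().union(*(g for m, g in selections.items() if m != method))
        uniques[method] = genes - others
    return uniques

def shared_by_all(selections):
    """Genes selected by every method (the centre of the Venn diagram)."""
    return set.intersection(*selections.values())

# Hypothetical toy selections for illustration only:
sel = {
    "SLDSF": {"GAPDH", "IGHV4-31", "CYAT1", "CD74"},
    "RPCA":  {"CD74", "FTL"},
    "PMD":   {"CD74", "FTL", "GAPDH"},
}
```

With this toy input, `shared_by_all(sel)` is `{"CD74"}` and the ‘unique’ SLDSF genes are `{"IGHV4-31", "CYAT1"}`, mirroring how the counts in Fig. 3 are obtained.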

Test on the leukemia dataset

The leukemia dataset has become a benchmark dataset in cancer gene selection. It consists of 11 cases of acute myelogenous leukemia and 27 cases of acute lymphoblastic leukemia[21], summarized as a 5000 × 38 matrix (5000 genes in 38 samples) for further study. The P-Values of the top 10 closely related leukemia GO terms corresponding to the characteristic genes selected by the five methods are shown in Table 4. From Table 4, the SLDSF method outperforms the RGNMF, GNMF, RPCA and PMD methods for 9 GO terms; RPCA has the lowest P-value only for GO:0001775. To further study the genes selected by these methods on the leukemia dataset, a Venn diagram is shown in Fig. 3(b), where we observe 41 genes shared by all five methods. The SLDSF method selects 7 ‘unique’ characteristic genes that are neglected by the other methods. Moreover, we checked these ‘unique’ genes against the existing literature to determine whether they are associated with leukemia. The top 5 ‘unique’ characteristic genes selected by SLDSF are analyzed and shown in bold in the following explanations. LAPTM5 (J04990_at) decreases autophagy activity and might represent a potential target for modulating autophagy activity to increase sensitivity to chemotherapy in the treatment of leukemia[34]. FOS (J04130_s_at) has a significant function in regulating cell proliferation, differentiation and transformation in leukemia, as detected and validated in[35]. The immune-related gene LYZ (U49835_s_at) was highly expressed in THP1 cells in leukemia[36]. According to[37], as a direct target of activated NOTCH1, CCND3 (M21624_at) is up-regulated in T-cell acute lymphoblastic leukemia. By mediating JUNB (X60486_at), miRNA-149 promotes cell proliferation and inhibits apoptosis in T-cell acute lymphoblastic leukemia[38].

Test on the DLBCL dataset

Diffuse large B cell lymphoma (DLBCL) is the most common lymphoid malignancy in adults. Here, we adopt the DLBCL dataset presented by Shipp et al.[22]. This dataset contains 7129 genes in 58 cancer samples; the DLBCL study patients were divided into 2 discrete categories: 32 ‘cured’ patients and 26 ‘fatal/refractory’ patients. Table 5 lists the P-Values of the top 10 closely related DLBCL GO terms corresponding to the characteristic genes selected by the five methods. From Table 5, SLDSF provides better performance than the other methods for all 10 terms. To further study the genes selected by these methods on the DLBCL dataset, a Venn diagram is shown in Fig. 3(c), from which we find 56 genes shared by all five methods. SLDSF, RGNMF and GNMF have no ‘unique’ characteristic genes, and PMD has only 2, suggesting that the results of SLDSF, RGNMF, GNMF and PMD in Table 5 are very similar. In contrast, 30 ‘unique’ characteristic genes are selected by RPCA, which may explain its worse performance in Table 5.

Test on RNA-Seq Datasets

The Cancer Genome Atlas (TCGA) project applies genomic analysis techniques, especially large-scale genome sequencing, to map genomic variation across all human cancers. In this section, we choose two RNA-Seq datasets, esophageal cancer (ESCA) and squamous cell carcinoma of head and neck (HNSC), which can be downloaded from TCGA (http://tcgadata.nci.nih.gov/tcga/). Here, we also adopt the 3-Layer SLDSF with 200 samples. Since RGNMF and GNMF cannot select genes in the GO terms on these two datasets, we only compare SLDSF, RPCA and PMD. The results of SLDSF, RPCA and PMD on the ESCA and HNSC datasets are summarized in Tables 6 and 7, respectively, with the best result among the three methods shown in bold. For simplicity, only the P-values of the top 10 GO terms for each method are shown in this paper.
Table 6

The P-Values of GO terms corresponding to different methods on the ESCA dataset.

ID | Name | SLDSF | RPCA | PMD
(all entries are P-Values)
GO:0042060 | wound healing | 7.30E-16 | 8.20E-13 | 7.56E-12
GO:0009611 | response to wounding | 1.38E-12 | 4.01E-10 | 4.01E-10
GO:0022610 | biological adhesion | 2.01E-12 | 5.40E-14 | 3.37E-13
GO:0006955 | immune response | 3.37E-12 | 9.95E-11 | 9.95E-11
GO:0007155 | cell adhesion | 9.34E-12 | 2.71E-13 | 1.63E-12
GO:0043588 | skin development | 1.06E-11 | 1.06E-11 | None
GO:0007010 | cytoskeleton organization | 8.65E-11 | 1.39E-08 | 8.65E-11
GO:0050776 | regulation of immune response | 9.56E-11 | 6.12E-10 | 3.70E-09
GO:0034109 | homotypic cell-cell adhesion | 1.92E-10 | 1.59E-08 | 1.92E-10
GO:0098609 | cell-cell adhesion | 5.20E-10 | 3.04E-09 | 3.04E-09
Table 7

The P-Values of GO terms corresponding to different methods on the HNSC dataset.

ID | Name | SLDSF | RPCA | PMD
(all entries are P-Values)
GO:0042060 | wound healing | 9.46E-16 | 5.38E-11 | 1.69E-11
GO:0031581 | hemidesmosome assembly | 6.00E-14 | 2.27E-09 | None
GO:0009611 | response to wounding | 1.80E-12 | 1.09E-08 | 2.88E-08
GO:0022610 | biological adhesion | 2.78E-12 | 5.73E-09 | 9.48E-10
GO:0034330 | cell junction organization | 4.26E-12 | 5.69E-10 | 1.25E-07
GO:0043588 | skin development | 1.24E-11 | 7.65E-18 | 7.50E-27
GO:0007010 | cytoskeleton organization | 1.88E-11 | 2.56E-07 | 6.43E-07
GO:0034329 | cell junction assembly | 3.16E-11 | 5.69E-10 | 1.19E-06
GO:0045104 | intermediate filament cytoskeleton organization | 6.83E-11 | 5.75E-11 | 8.77E-11
GO:0007155 | cell adhesion | 6.85E-11 | 2.21E-08 | 7.91E-10

Test on the ESCA dataset

The ESCA data are RNA-Seq data of esophageal cancer, including 20502 genes in 192 samples: 9 normal and 183 diseased. Table 6 shows the P-Values of the top 10 closely related ESCA GO terms corresponding to the characteristic genes selected by three methods: SLDSF, RPCA and PMD. In this table, ‘None’ denotes that the method selected no genes in the GO term. SLDSF outperforms RPCA and PMD in 5 GO terms; it ties with RPCA for the best performance in GO:0043588 and with PMD in GO:0007010 and GO:0034109, while RPCA has the lowest P-Values in GO:0022610 and GO:0007155. A Venn diagram of the genes selected by the three methods is shown in Fig. 4(a). We define a ‘unique’ characteristic gene as a gene selected by only one method and neglected by the others. From Fig. 4(a), there are 63 genes shared by all methods and SLDSF selects 8 ‘unique’ characteristic genes, which should be further investigated to determine whether they are associated with ESCA.
Figure 4

The Venn diagram of genes selected by three methods on (a) ESCA dataset and (b) HNSC dataset.

We studied the ‘unique’ genes selected by SLDSF according to the existing literature. The top 5 ‘unique’ characteristic genes selected by SLDSF are analyzed and shown in bold in the following explanations. Shen et al. performed the first GWAS (genome-wide association study) of esophageal squamous cell carcinoma in the MHC (major histocompatibility complex) region on subjects from high-risk areas in northern China and found three important independent susceptibility loci containing three biologically interesting candidate genes, HLA-DQA1, TRIM27 and DPCR1[39]. Li et al. found that DRD2/PPP1R1B (also known as DARPP-32) expression is associated with tumor progression and may help predict prognosis in patients with esophageal squamous cell carcinoma[40]. In[41], MUC17, MUC5B and MUC6 gene mutations in tumor region T4A of esophageal squamous cell carcinoma predict the perturbation of O-glycan biosynthesis and processing. The presence of activating mutations within EGFR in esophageal adenocarcinomas defines a previously unrecognized subset of gastrointestinal tumors in which EGFR signaling may play an important biological role[42]. According to an analysis of genes strongly up-regulated in both esophageal adenocarcinoma and Barrett’s esophagus, REG4 might be of particular interest as an early marker for esophageal adenocarcinoma[43].

Test on the HNSC dataset

The HNSC data are RNA-Seq data of squamous cell carcinoma of head and neck, including 20502 genes in 418 samples: 20 normal and 398 diseased. Table 7 shows the P-Values of the top 10 closely related HNSC GO terms corresponding to the characteristic genes selected by the three methods. SLDSF outperforms the other methods in 8 GO terms; PMD performs best in GO:0043588, and RPCA is slightly better than SLDSF in GO:0045104. To further study the genes selected by these methods on the HNSC dataset, a Venn diagram is shown in Fig. 4(b). There are 43 genes shared by all three methods, and SLDSF selects 13 ‘unique’ characteristic genes. We checked these ‘unique’ genes against the existing literature to determine whether they are associated with HNSC; the top 5 ‘unique’ characteristic genes selected by SLDSF are investigated here. Kinoshita et al. demonstrated that LAMB3 functions as an oncogene and strongly contributes to cancer cell migration and invasion in HNSC[44]. CD44 isoforms mediate migration, proliferation and cisplatin sensitivity in HNSC, and the expression of certain CD44 variants may be an important molecular marker for HNSC progression[45]. HSP90AA1 and CTSD are down-regulated in HNSC after the combination treatment of cilengitide and cisplatin compared with cisplatin alone[46]. CTL1 was identified as an up-regulated gene in HNSC[47].

Global Cancer Genes Selected by SLDSF

We have used SLDSF to select characteristic genes for different cancer types and subtypes. However, the results of using our method for global cancer gene selection (independent of type/subtype) have not been discussed yet. These global genes may play an important role in the development of multiple cancers. For the microarray datasets, 3 global cancer genes (CD74, FTL and HLA-DRA) are selected by SLDSF from the lung cancer, leukemia and DLBCL datasets. Their functional descriptions are as follows. The protein encoded by CD74 associates with the class II major histocompatibility complex (MHC) and is an important chaperone that regulates antigen presentation for the immune response. It also serves as a cell surface receptor for the cytokine macrophage migration inhibitory factor (MIF) which, when bound to the encoded protein, initiates survival pathways and cell proliferation. This protein also interacts with amyloid precursor protein (APP) and suppresses the production of amyloid beta (Abeta). FTL encodes the light subunit of the ferritin protein; variations in ferritin subunit composition may affect the rates of iron uptake and release in different tissues, and a major function of ferritin is the storage of iron in a soluble and nontoxic state. Defects in this light chain ferritin gene are associated with several neurodegenerative diseases and hyperferritinemia-cataract syndrome. HLA-DRA is one of the HLA class II alpha chain paralogues; class II molecules are expressed in antigen-presenting cells (APC: B lymphocytes, dendritic cells, macrophages). For the RNA-Seq datasets, 63 global cancer genes are selected by SLDSF from the ESCA and HNSC datasets, which may indicate that ESCA and HNSC share many identical characteristic genes. For simplicity, the functional descriptions of 3 global genes (ACTB, COL1A1 and KRT13) are reported as follows. ACTB encodes one of six different actin proteins.
Mutations in this gene cause Baraitser-Winter syndrome 1, which is characterized by intellectual disability with a distinctive facial appearance in human patients. COL1A1 encodes the pro-alpha1 chains of type I collagen. Mutations in this gene are associated with osteogenesis imperfecta types I-IV, Ehlers-Danlos syndrome type VIIA, Ehlers-Danlos syndrome Classical type, Caffey Disease and idiopathic osteoporosis. Reciprocal translocations between chromosomes 17 and 22, where this gene and the gene for platelet-derived growth factor beta are located, are associated with a particular type of skin tumor called dermatofibrosarcoma protuberans. The protein encoded by KRT13 is a member of the keratin gene family. Mutations in this gene and keratin 4 have been associated with the autosomal dominant disorder White Sponge Nevus. It is worth noting that FTL can be selected by SLDSF on all five datasets. It would be interesting to see how SLDSF performs for selecting genes that are already well-known and validated oncogenes and/or suppressors. SLDSF can successfully select oncogenes when tested on five gene expression datasets. For example, three oncogenes: FOS, LCK, MYB are selected in the leukemia dataset. Four oncogenes: ERBB2, LCN2, EGFR and CCND1 can be selected in the ESCA dataset. SLDSF can also select suppressors from five gene expression datasets, for instance, RPL10 in the lung cancer, EGFR and ERBB2 in ESCA, and EEF1A1 in lung cancer, DLBCL, ESCA and HNSC. Note that EGFR and ERBB2 in ESCA data are both oncogenes and suppressors.

Conclusions

Identifying cancer characteristic genes is important for understanding the underlying genetics and the prognostic assessment of cancer. In this paper, we proposed a novel unsupervised characteristic gene selection method, SLDSF, based on sample learning and deep sparse filtering. By using sample learning to transform the sample space of the gene expression data, the genes can be better represented in the transformed sample space. By using sparse filtering to implement sample learning, explicit modeling of the data distribution is avoided and sample learning can be achieved effectively with a simple formulation. Furthermore, for the gene expression data, we provide a detailed explanation of how sample learning satisfies three desirable characteristics of the sample distribution (population sparsity, high dispersal and lifetime sparsity) in sparse filtering. While traditional unsupervised characteristic gene selection methods do not take deep structures into account, the proposed SLDSF explores deep sparse filtering to implement sample learning, with the advantage that multiple layers may learn more meaningful representations than a single layer. In summary, the main contributions of this paper are as follows:
- A deep learning structure, deep sparse filtering, is proposed for selecting cancer characteristic genes for the first time in the literature.
- We propose a novel idea, sample learning, for transforming the sample space of the gene expression data to select genes with deep learning. This enables us to better understand feature representations through the transformed sample space.
We investigated the number of layers and the number of samples in the proposed SLDSF method on five real gene expression datasets: the lung cancer, leukemia, DLBCL, ESCA and HNSC datasets. The results of SLDSF were compared with four characteristic gene selection methods: RGNMF, GNMF, RPCA and PMD.
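The single-layer sparse filtering objective that SLDSF stacks can be sketched as follows, following Ngiam et al.'s original formulation (soft-absolute activations, row then column L2 normalisation, L1 penalty). The matrix shapes and variable names are illustrative assumptions, not the authors' code, and the paper applies this building block to the sample dimension rather than the feature dimension:

```python
import numpy as np

def sparse_filtering_objective(W, X, eps=1e-8):
    """Single-layer sparse filtering objective (Ngiam et al., 2011):
    soft-absolute features, L2-normalise each feature across examples,
    then each example's feature vector, and sum the absolute values."""
    F = np.sqrt((W @ X) ** 2 + eps)                    # soft absolute activations
    F = F / np.linalg.norm(F, axis=1, keepdims=True)   # normalise each feature (row)
    F = F / np.linalg.norm(F, axis=0, keepdims=True)   # normalise each example (column)
    return np.abs(F).sum()                              # L1 sparsity penalty
```

Minimising this objective over W (e.g., with L-BFGS) encourages the population sparsity, high dispersal and lifetime sparsity properties discussed above, without modeling the data distribution explicitly.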
Experimental studies on these gene expression datasets consistently suggest that SLDSF is more effective than the other four methods for selecting cancer characteristic genes; on the lung cancer dataset in particular, the proposed SLDSF significantly outperforms the other four methods. The ‘unique’ genes selected by SLDSF are shown to be closely associated with the specific cancer dataset according to the current literature. Furthermore, the global cancer genes selected by SLDSF are analyzed, and it is observed that SLDSF can find many oncogenes and/or suppressors in the five studied datasets. The main limitation of this paper lies in the biological explanation of the selected cancer characteristic genes. We use GO analysis to evaluate the effectiveness of SLDSF and justify the selected genes based on the existing literature. Although GO analysis may not be a rigorous way to validate an algorithm, it is recommended as an evaluation approach in many papers[6,23]. The selected genes should nevertheless be verified in biological experiments by biologists to find more meaningful biological explanations. In the future, we will explore the biological meanings of the selected cancer characteristic genes further. Supplementary information: Supplementary S1, Supplementary S2, Supplementary S3, Supplementary Dataset1.
References (10 of 33 shown)

1. Shipp MA, et al. Diffuse large B-cell lymphoma outcome prediction by gene-expression profiling and supervised machine learning. Nat Med, 2002.
2. Pikman Y, et al. Synergistic Drug Combinations with a CDK4/6 Inhibitor in T-cell Acute Lymphoblastic Leukemia. Clin Cancer Res, 2016.
3. Wang H, et al. Dynamic transcriptomes of human myeloid leukemia cells. Genomics, 2013.
4. Yang Z, Zhuan B, Yan Y, Jiang S, Wang T. Identification of gene markers in the development of smoking-induced lung cancer. Gene, 2015.
5. Agapito G, Milano M, Guzzi PH, Cannataro M. Extracting Cross-Ontology Weighted Association Rules from Gene Ontology Annotations. IEEE/ACM Trans Comput Biol Bioinform, 2016.
6. Liu JX, Xu Y, Zheng CH, Kong H, Lai ZH. RPCA-Based Tumor Classification Using Gene Expression Data. IEEE/ACM Trans Comput Biol Bioinform, 2015.
7. Li L, et al. DRD2/DARPP-32 expression correlates with lymph node metastasis and tumor progression in patients with esophageal squamous cell carcinoma. World J Surg, 2006.
8. Bhattacharjee A, et al. Classification of human lung carcinomas by mRNA expression profiling reveals distinct adenocarcinoma subclasses. Proc Natl Acad Sci U S A, 2001.
9. Liu JX, Wang YT, Zheng CH, Sha W, Mi JX, Xu Y. Robust PCA based method for discovering differentially expressed genes. BMC Bioinformatics, 2013.
10. Kinoshita T, et al. Tumor suppressive microRNA-218 inhibits cancer cell migration and invasion through targeting laminin-332 in head and neck squamous cell carcinoma. Oncotarget, 2012.