
Random forest based similarity learning for single cell RNA sequencing data.

Maziyar Baran Pouyan1, Dennis Kostka1,2.   

Abstract

Motivation: Genome-wide transcriptome sequencing applied to single cells (scRNA-seq) is rapidly becoming an assay of choice across many fields of biological and biomedical research. Scientific objectives often revolve around discovery or characterization of types or sub-types of cells, and therefore, obtaining accurate cell-cell similarities from scRNA-seq data is a critical step in many studies. While rapid advances are being made in the development of tools for scRNA-seq data analysis, few approaches exist that explicitly address this task. Furthermore, abundance and type of noise present in scRNA-seq datasets suggest that application of generic methods, or of methods developed for bulk RNA-seq data, is likely suboptimal.
Results: Here, we present RAFSIL, a random forest based approach to learn cell-cell similarities from scRNA-seq data. RAFSIL implements a two-step procedure, where feature construction geared towards scRNA-seq data is followed by similarity learning. It is designed to be adaptable and expandable, and RAFSIL similarities can be used for typical exploratory data analysis tasks like dimension reduction, visualization and clustering. We show that our approach compares favorably with current methods across a diverse collection of datasets, and that it can be used to detect and highlight unwanted technical variation in scRNA-seq datasets in situations where other methods fail. Overall, RAFSIL implements a flexible approach yielding a useful tool that improves the analysis of scRNA-seq data.
Availability and implementation: The RAFSIL R package is available at www.kostkalab.net/software.html.
Supplementary information: Supplementary data are available at Bioinformatics online.


Year:  2018        PMID: 29950006      PMCID: PMC6022547          DOI: 10.1093/bioinformatics/bty260

Source DB:  PubMed          Journal:  Bioinformatics        ISSN: 1367-4803            Impact factor:   6.937


1 Introduction

Sequencing transcriptomes of single cells (scRNA-seq) is becoming increasingly common, as technology evolves and costs decline. Studying gene expression genome-wide at single cell resolution overcomes intrinsic limitations of bulk RNA sequencing, where expression levels are averaged over thousands or millions of cells. scRNA-seq enables researchers to more rigorously address questions about the cellular composition of tissues, the transcriptional heterogeneity and structure of ‘cell types’, and how this may change, for instance during development or in disease (Kumar ; Patel ). Identifying group structure is therefore a crucially important step in most scRNA-seq data analyses, and it has yielded exciting discoveries of novel cell types and revealed previously un-appreciated sub-populations and heterogeneity of known types of cells (Kumar ). Identifying group structure in scRNA-seq data is, however, not without challenges. Even for bulk RNA sequencing no gold standard has emerged in the field (Conesa ), and for single cell RNA sequencing several factors further complicate the task. These include additional biological heterogeneity induced by the inherent stochasticity of gene expression in single cells, and technical noise rooted in cell processing, cell lysis and library preparation from extremely low amounts of ‘input’ messenger RNA (Adam ). The latter, for example, leads to dropout events, where no RNA is measured for a gene actually expressed in a cell. It is estimated that 50–95% of a cell’s mRNA are not measured by current technologies (Adam ; Svensson ). While the relative magnitude of such factors will depend on the specific technology used, it is fair to assume they play a role in most, if not all, scRNA-seq studies. 
Therefore, there is a need for computational approaches that take the specific nature of scRNA-seq data into account and enable researchers to accurately and reliably identify, visualize and explore group (or population) structure of single cells. To address that need we developed RAFSIL, a random forest (RF) based method for learning similarities between cells from single cell RNA sequencing experiments. Related work includes clustering methods, which implicitly or explicitly rely on a similarity concept and are commonly used to group objects. Examples of approaches developed specifically for scRNA-seq data include the combination of Pearson correlation with robust k-means clustering (Grün ), and the use of consensus clustering (Strehl and Ghosh, 2002) to obtain stable cell groupings by Kiselev. Žurauskienė and Yau (2016) combine agglomerative clustering with principal component analysis (PCA), while Lin explore the use of neural networks (NNs) (Hagan ) for clustering and dimension reduction. More closely related to our work is SIMLR (Wang b), an approach based on multiple kernel learning (Lanckriet ) that directly learns similarities between single cells. However, SIMLR is built around a clustering paradigm, and the user is asked to provide the algorithm with a specific cluster number to guide similarity learning. In contrast, RAFSIL similarities are based on random forests (RFs) (Breiman, 2001), and our approach requires no prior information about group structure. We show that RAFSIL learns similarities that faithfully represent group structure in scRNA-seq data; when used for dimension reduction and clustering, they provide an accurate visualization of datasets and enable exploratory analyses for cell type identification and discovery. Importantly, RAFSIL compares favorably with the current state-of-the-art, showing high accuracy and robustness, and we demonstrate how it enables the identification of technical variation that remains hidden with other approaches.

2 Methods

We assume normalized gene expression data on log-scale for n cells and p genes is available, organized into a p × n expression matrix X = (x_1, …, x_n), where column x_i contains the expression of the p genes in cell i.

2.1 Gene filtering

We consider three types of gene filters for the scRNA-seq data matrix:

All genes (ALL): All genes that have non-zero expression in at least one cell in the dataset are considered. This is the most inclusive set of genes.

Frequency filtering (FRQ): Here, we consider only genes that are expressed in a certain fraction of cells; specifically, we choose 6%, as reported by Kiselev, for our analyses.

Highly expressed genes (HiE): The subset of frequency-filtered genes is further narrowed down to genes with ‘high’ expression across cells. In each cell, expressed genes are sorted in decreasing order of expression and the top 10% are marked as highly expressed. To focus on genes that are frequently highly expressed across cells, we discard the half of the genes that are highly expressed in the fewest cells. This approach yields a set of genes that are highly expressed across cells, but still allows for variability in gene expression.

In the following, we describe our approach for random forest based similarity learning (RAFSIL) from scRNA-seq data. We developed two methods, RAFSIL1 and RAFSIL2, which are both two-step procedures. They share a feature-construction step and then apply different types of RF based similarity learning.
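The three filters above can be sketched as follows. This is an illustrative Python stand-in (RAFSIL itself is an R package); the function name and argument names are ours, not the package's:

```python
import numpy as np

def filter_genes(X, mode="FRQ", min_frac=0.06, top_frac=0.10):
    """Return a boolean mask over genes (rows of the p x n matrix X).

    ALL: genes expressed in at least one cell.
    FRQ: genes expressed in at least min_frac of cells.
    HiE: FRQ genes that are frequently among a cell's top expressed genes.
    """
    p, n = X.shape
    expressed = X > 0
    if mode == "ALL":
        return expressed.any(axis=1)
    frq = expressed.mean(axis=1) >= min_frac
    if mode == "FRQ":
        return frq
    # per cell, mark the top 10% of its expressed genes as 'highly expressed'
    high = np.zeros((p, n), dtype=bool)
    for j in range(n):
        idx = np.where(expressed[:, j])[0]
        k = max(1, int(np.ceil(top_frac * idx.size)))
        high[idx[np.argsort(X[idx, j])[::-1][:k]], j] = True
    # among FRQ genes, discard the half highly expressed in the fewest cells
    cand = np.where(frq)[0]
    order = cand[np.argsort(high.sum(axis=1)[cand])]
    keep = np.zeros(p, dtype=bool)
    keep[order[cand.size // 2 :]] = True
    return keep
```

By construction the three gene sets are nested: HiE ⊆ FRQ ⊆ ALL.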

2.2 RAFSIL: feature construction

2.2.1 RAFSIL gene filtering and clustering

For the RAFSIL methods, we apply the frequency filter described above, and then derive gene clusters as follows: first, PCA is applied to the gene-filtered expression matrix (treating genes as observations and cells as features), and we keep the most informative principal components as selected by the ‘elbow method’ (Thorndike, 1953). Next, we apply k-means clustering (kmeans++; Arthur and Vassilvitskii, 2007; Mouselimis, 2017) to this reduced representation of the genes, where we determine the number of clusters k by finding the elbow point of the sum of squared errors as a function of increasing cluster numbers. This yields a partition of the frequency-selected genes into k disjoint clusters.
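A minimal Python sketch of this step, under two stated assumptions: the elbow is found as the point farthest from the chord joining the first and last values (the paper cites Thorndike, 1953, but does not pin down one formula), and sklearn's kmeans++ default stands in for the cited R implementations:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def elbow_index(values):
    """Elbow of a decreasing curve: the point farthest from the straight
    line joining the first and last values (one common elbow heuristic)."""
    y = np.asarray(values, dtype=float)
    x = np.arange(y.size, dtype=float)
    d = np.array([x[-1] - x[0], y[-1] - y[0]])
    d = d / np.linalg.norm(d)
    v = np.stack([x - x[0], y - y[0]], axis=1)
    return int(np.argmax(np.abs(v[:, 0] * d[1] - v[:, 1] * d[0])))

def cluster_genes(X, max_k=10, seed=0):
    """X: p x n matrix (genes x cells). Returns one cluster label per gene."""
    pca = PCA(n_components=min(X.shape) - 1, random_state=seed).fit(X)
    m = elbow_index(pca.explained_variance_) + 1  # keep m informative PCs
    Z = pca.transform(X)[:, :m]
    # elbow over the k-means sum of squared errors picks the cluster number
    sse = [KMeans(n_clusters=k, n_init=10, random_state=seed).fit(Z).inertia_
           for k in range(1, max_k + 1)]
    k = elbow_index(sse) + 1
    return KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(Z)
```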

2.2.2 RAFSIL Spearman feature space construction

Gene clustering decomposes the column space of the expression matrix into orthogonal sub-spaces, and we characterize each cell based on its similarities with all other cells in each sub-space. Specifically, for each gene cluster we calculate an n × n cell–cell similarity matrix using Spearman rank correlation, restricted to the genes in the respective cluster derived beforehand. Spearman rank correlation is used rather than Pearson correlation because of its robustness to outliers (Gentleman ). For each similarity matrix C we then perform PCA, and again keep m informative principal components identified by the elbow method. This yields k matrices F^(1), …, F^(k), where F^(i) is based on the genes in cluster i and embeds each cell by its principal components derived from local similarities (i.e., similarities calculated using only genes in a gene cluster). We then construct a final feature matrix F = [F^(1), …, F^(k)] by juxtaposing the matrices from the individual gene clusters. The number of columns of F (i.e. the number of features) is data-dependent, and each cell j is now described by a feature vector f_j (the j-th row of F). In the following, we use these features for RF based similarity learning.
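The feature construction could look roughly like this in Python; for simplicity this sketch uses a fixed number of PCs per cluster in place of the elbow-selected m, and the function names are illustrative:

```python
import numpy as np
from scipy.stats import rankdata
from sklearn.decomposition import PCA

def spearman_similarity(X):
    """n x n Spearman rank correlation between the columns (cells) of X."""
    R = np.apply_along_axis(rankdata, 0, X)  # rank genes within each cell
    return np.corrcoef(R, rowvar=False)      # Pearson on ranks = Spearman

def build_features(X, gene_labels, n_pcs=3):
    """Embed each cell by PCs of the local (per gene cluster) Spearman
    similarity matrix, then juxtapose the embeddings (cells x features)."""
    blocks = []
    for c in np.unique(gene_labels):
        C = spearman_similarity(X[gene_labels == c, :])
        m = min(n_pcs, C.shape[0] - 1)
        blocks.append(PCA(n_components=m).fit_transform(C))
    return np.hstack(blocks)
```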

2.3 RAFSIL: RAndom Forest based SImilarity Learning

RFs are an established classification method based on ensembles of decision trees (Breiman, 2001). However, they can also be used in an unsupervised setting to infer similarities between objects (Breiman and Cutler, 2003; Shi and Horvath, 2006). Here, we present two variations of this general strategy.

2.3.1 RAFSIL1

Here, we describe an approach for RF based similarity learning (Breiman and Cutler, 2003; Shi and Horvath, 2006) that has been applied to various types of biomedical data (Seligson ; Ramirez ) and is implemented in the randomForest package for the R programming language (Liaw and Wiener, 2017). In Pouyan and Nourani (2017), the RAFSIL1 approach (without the feature construction step) was applied to Cytometry by Time of Flight (CyTOF) data, where protein expression of several marker genes (typically less than 50) is assessed. Next, we briefly summarize RF based similarity learning: To cast the unsupervised similarity learning problem into a problem suitable for RFs, a ‘synthetic’ dataset is generated, for instance by randomly shuffling the values of each feature independently; then, an RF classifier is trained to distinguish the shuffled data from the un-shuffled data (the feature matrix F in our notation). Let f_i denote the i-th row of F. If we assume the RF classifier contains N trees and define N_ij as the number of trees that classify cells f_i and f_j via the same leaf, then the RF based n × n similarity matrix S is defined via S_ij = N_ij / N. A corresponding dissimilarity matrix D can then be obtained via D_ij = √(1 − S_ij). In the following, we use the terms similarity and dissimilarity interchangeably, referring to S and D, respectively. Repeating this procedure B times allows us to aggregate individual similarity matrices into a final matrix S̄, with corresponding D̄. We used B = 50 for our experiments.
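A compact Python sketch of one repetition of this procedure (i.e. B = 1), with sklearn's RandomForestClassifier standing in for the R randomForest package and the √(1 − S) dissimilarity following Shi and Horvath:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_similarity(F, n_trees=500, seed=0):
    """Unsupervised RF similarity for an n x q feature matrix F
    (cells x features), via the real-vs-synthetic scheme."""
    rng = np.random.RandomState(seed)
    # synthetic data: permute each feature independently, which keeps the
    # marginal distributions but destroys correlations between features
    S = np.column_stack([rng.permutation(F[:, j]) for j in range(F.shape[1])])
    X = np.vstack([F, S])
    y = np.r_[np.ones(len(F)), np.zeros(len(S))]
    rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed).fit(X, y)
    leaves = rf.apply(F)  # leaf index of every real cell in every tree
    # similarity = fraction of trees in which two cells land in the same leaf
    sim = (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
    return sim, np.sqrt(1.0 - sim)  # similarity and dissimilarity
```

Averaging B such similarity matrices (with different shuffles) then gives the aggregated matrix described in the text.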

2.3.2 RAFSIL2

We now describe how we use the RF classifier to construct (dis)similarity matrices without the need for synthetically generated datasets. The general idea, as in the above method, is to exploit feature dependence. However, we proceed as follows: After selecting a single feature j (the j-th column of the feature matrix F) we quantize its values to derive class labels for all cells. We use partitioning around medoids as implemented by the pamk function provided by the R package fpc (Hennig, 2018), which also estimates the optimal number of clusters. Then, we remove the j-th column from F and use the RF classifier to learn the obtained class labels from this reduced dataset. The resulting RF then yields a similarity between cells as described above. Repeating this procedure for each of the q features yields q RF classifiers with corresponding similarity matrices S^(1), …, S^(q), and averaging as described for RAFSIL1 above results in a final pair of similarity and dissimilarity matrices S̄ and D̄, respectively. As before, we use the randomForest package for R (Liaw and Wiener, 2017) with its default forest size of 500 trees.
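Sketched in Python, with k-means quantization and a fixed number of classes per feature standing in for pamk (which estimates that number per feature); the function name is illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestClassifier

def rafsil2_similarity(F, n_classes=2, n_trees=100, seed=0):
    """For each feature: quantize its values into class labels, train an RF
    on the remaining features, and average leaf co-occurrence similarities."""
    n, q = F.shape
    sims = np.zeros((n, n))
    for j in range(q):
        km = KMeans(n_clusters=n_classes, n_init=10, random_state=seed)
        labels = km.fit_predict(F[:, [j]])
        Fj = np.delete(F, j, axis=1)  # drop the quantized feature itself
        rf = RandomForestClassifier(n_estimators=n_trees, random_state=seed)
        rf.fit(Fj, labels)
        leaves = rf.apply(Fj)
        sims += (leaves[:, None, :] == leaves[None, :, :]).mean(axis=2)
    sim = sims / q
    return sim, np.sqrt(1.0 - sim)
```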

2.4 Performance evaluation

To evaluate our approach, we apply RAFSIL1/2 to ten scRNA-seq datasets that have pre-annotated cell populations, and we compare results with current state of the art approaches. We distinguish three different scenarios, namely similarity learning, dimension reduction and clustering. All of these play critical roles in exploring, visualizing and interpreting scRNA-seq data, but they have different objectives and we evaluate them accordingly.

2.4.1 Similarity learning

For similarity learning, we compare our method with SIMLR (Wang b), the only scRNA-seq method that advertises similarity learning. In addition, we explored common similarity/dissimilarity measures: Euclidean distance, and Pearson and Spearman correlation, applied to the full (ALL), frequency-filtered (FRQ) and highly-expressed (HiE) sets of genes (see Section 2.1 for details on the gene sets). Following Wang b), the metric we choose to evaluate similarity learning is the nearest neighbor error (NNE) (van der Maaten ). The NNE is calculated using a nearest neighbor classifier based on the target similarity to be evaluated: for a given set of labeled cells, an unlabeled cell is assigned the same label as its most similar labeled neighbor. Predictions for each cell are obtained via 10-fold cross-validation (CV), and the NNE then reports the fraction of mis-classified cells. Because the 10-fold CV procedure randomly splits the data into 10 folds (9 for training, 1 for validation), we report averages over 20 runs. The NNE is a direct reflection of how well the learned dissimilarity measure captures the pre-annotated class labels. For SIMLR we used the SIMLR R package (Wang ), provided it with all genes (ALL) and evaluated the similarity matrix returned by the SIMLR function with default options. For SIMLR we needed to provide the option normalize = TRUE for the Treutlein dataset, otherwise the program would abort; we indicate this by putting the respective values in parentheses in the relevant result tables.
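The NNE computation itself is simple; a Python sketch assuming a precomputed dissimilarity matrix (one CV run, so averaging over repeated runs is left to the caller):

```python
import numpy as np

def nearest_neighbor_error(D, labels, n_folds=10, seed=0):
    """Fraction of cells mis-classified by a 1-nearest-neighbor classifier
    on a precomputed n x n dissimilarity matrix D, via k-fold CV."""
    labels = np.asarray(labels)
    n = labels.size
    folds = np.random.RandomState(seed).permutation(n) % n_folds
    errors = 0
    for f in range(n_folds):
        test = np.where(folds == f)[0]
        train = np.where(folds != f)[0]
        # label of the most similar (least dissimilar) training cell
        nn = train[np.argmin(D[np.ix_(test, train)], axis=1)]
        errors += int(np.sum(labels[nn] != labels[test]))
    return errors / n
```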

2.4.2 Dimension reduction

To evaluate the results of dimension reduction, we use the same NNE metric as for evaluating similarity (see above), but in this case applied to the reduced-dimensional projection. That is, we first perform similarity learning. Then we use the resulting similarity matrix as input for a dimension reduction algorithm, which sees each cell as a vector of its similarities. Finally, we calculate the NNE based on Euclidean distance in the reduced-dimensional space. For all methods, we project down to two dimensions, and we compared the following approaches for dimensionality reduction: t-distributed stochastic neighbor embedding (tSNE; van der Maaten and Hinton, 2008), PCA and probabilistic PCA (pPCA; Tipping and Bishop, 1999). We also skip the similarity learning step and directly apply dimension reduction to cells characterized by their highly expressed genes (Data-HiE in Table 3). For pPCA we used the implementation provided by the pcaMethods R package (Stacklies ; Kiselev ) and for tSNE the Rtsne R package (Krijthe, 2015). We used tSNE with default values for all datasets except Treutlein, where we set the perplexity to 20.
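For the similarity-as-features projection, a Python sketch using sklearn's tSNE (the paper actually uses the Rtsne R package; the function name here is illustrative):

```python
import numpy as np
from sklearn.manifold import TSNE

def project_similarities(S, seed=0):
    """2-D embedding of cells, each represented by its row of the n x n
    similarity matrix S (i.e. its similarities to all cells)."""
    tsne = TSNE(n_components=2, perplexity=min(30, len(S) - 1),
                init="pca", random_state=seed)
    return tsne.fit_transform(S)
```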
Table 3.

Nearest neighbor error values for dimension reduction (in percent, lower is better)

Method | Patel | Buettner | Engel | Kolod | Goolam | Usoskin | Treutlein | Leng | Pollen | Lin | Average
RAFSIL1-tSNE | 1.9 | 3.8 | 0.5 | 0.0 | 4.0 | 1.0 | 7.5 | 4.1 | 2.7 | 5.5 | 3.1
RAFSIL1-PCA | 8.1 | 4.4 | 11.3 | 0.0 | 9.7 | 21.5 | 12.5 | 26.5 | 12.6 | 24.9 | 13.2
RAFSIL1-pPCA | 7.7 | 4.4 | 11.3 | 0.0 | 9.7 | 22.5 | 15.0 | 25.4 | 12.3 | 24.4 | 13.3
RAFSIL2-tSNE | 1.9 | 2.7 | 0.0 | 0.0 | 4.8 | 0.6 | 6.2 | 4.6 | 4.0 | 9.2 | 3.4
RAFSIL2-PCA | 10.2 | 6.6 | 5.9 | 0.0 | 4.8 | 5.6 | 12.5 | 25.9 | 16.3 | 33.1 | 12.1
RAFSIL2-pPCA | 9.8 | 7.1 | 4.9 | 0.0 | 4.0 | 5.3 | 11.2 | 26.3 | 14.3 | 30.6 | 11.4
SIMLR-tSNE | 3.7 | 3.3 | 4.4 | 0.0 | 4.8 | 5.5 | (26.2)a | 19.8 | 3.0 | 15.7 | 8.6
SIMLR-PCA | 6.7 | 2.2 | 27.1 | 0.1 | 11.3 | 6.4 | (43.8) | 36.3 | 22.9 | 51.0 | 20.8
SIMLR-pPCA | 7.4 | 2.2 | 27.6 | 0.1 | 9.7 | 5.9 | (45) | 37.0 | 22.3 | 53.2 | 21.0
Data-HiE-tSNE | 7.4 | 12.1 | 14.3 | 0.3 | 1.6 | 3.7 | 15.0 | 37.0 | 3.3 | 10.7 | 10.5
Data-HiE-PCA | 40.7 | 25.8 | 13.3 | 1.4 | 4.8 | 34.1 | 31.2 | 56.1 | 16.3 | 40.5 | 26.4
Data-HiE-pPCA | 40.5 | 28.6 | 14.3 | 1.4 | 7.3 | 33.3 | 32.5 | 57.4 | 17.3 | 41.5 | 27.4
Euclidean-HiE-tSNE | 4.4 | 4.4 | 3.9 | 0.4 | 6.5 | 8.2 | 23.8 | 39.1 | 5.3 | 21.1 | 11.7
Euclidean-HiE-PCA | 36.5 | 7.7 | 35.0 | 7.0 | 25.8 | 58.7 | 32.5 | 52.8 | 19.9 | 39.1 | 31.5
Euclidean-HiE-pPCA | 36.0 | 8.8 | 39.4 | 6.8 | 28.2 | 57.7 | 32.5 | 53.0 | 20.3 | 38.8 | 32.2
Pearson-HiE-tSNE | 2.8 | 9.3 | 3.0 | 0.0 | 1.6 | 2.1 | 17.5 | 24.1 | 2.3 | 20.1 | 8.3
Pearson-HiE-PCA | 25.1 | 23.1 | 16.3 | 0.1 | 2.4 | 27.8 | 12.5 | 49.1 | 10.6 | 27.9 | 19.5
Pearson-HiE-pPCA | 24.0 | 23.1 | 17.2 | 0.3 | 2.4 | 27.5 | 15.0 | 47.6 | 11.3 | 28.1 | 19.7
Spearman-HiE-tSNE | 3.3 | 11.0 | 1.0 | 0.0 | 0.8 | 3.2 | 5.0 | 15.4 | 3.0 | 18.4 | 6.1
Spearman-HiE-PCA | 37.2 | 26.9 | 9.4 | 0.3 | 0.8 | 33.4 | 5.0 | 61.7 | 13.0 | 32.3 | 22.0
Spearman-HiE-pPCA | 36.3 | 27.5 | 12.8 | 0.3 | 3.2 | 32.5 | 6.2 | 59.1 | 12.6 | 30.6 | 22.1

tSNE, t-distributed stochastic neighbor embedding; PCA, principal component analysis; pPCA, probabilistic PCA.

The best-performing method in each column is in boldface.

Parentheses indicate that SIMLR was run with different parameters for this dataset.

2.4.3 Clustering

We also evaluate the performance of RAFSIL1/2 in the context of clustering; that is, we ask how well group structure inferred from RAFSIL1/2 similarities agrees with pre-annotated cell populations. This allows us to expand the set of methods we compare RAFSIL with: in addition to the approaches used for similarity learning and dimension reduction, we can now include algorithms that have no explicit similarity learning step. Specifically, we add SC3 (Kiselev ), pcaReduce (Žurauskienė and Yau, 2016, 2015) and SINCERA (Guo, 2017; Guo ) to our comparisons. These methods, and SIMLR, are geared towards scRNA-seq clustering, and we provide each method with the number of pre-annotated populations for each dataset and the expression profiles comprising the complete set of expressed genes (ALL). For RAFSIL1/2 and Spearman correlation we implemented two clustering strategies. First, using similarities as a vector embedding for each cell, we run k-means clustering (KM) to infer group labels. Second, we perform hierarchical clustering with average linkage (HC) using learned dissimilarities (1 − ρ for Spearman correlation). For k-means clustering we use kmeans++ as provided by the R package pracma (Borchers, 2017), while for hierarchical clustering we use the base functionality provided within R through the stats package (R Core Team, 2017). As for the other methods, we set the number of clusters to the known number of different cell labels (Kiselev ). To evaluate clustering results, we calculate two performance metrics: the adjusted Rand index (ARI) and normalized mutual information (NMI). Both are popular metrics for evaluating clustering results against a known labeling in single cell data (Hubert and Arabie, 1985; Kiselev ; Vinh ; Wang b).
The ARI is defined as follows: assume we cluster n cells into k clusters. Let ĉ = (ĉ_1, …, ĉ_n) denote the inferred cluster labels, and c = (c_1, …, c_n) the pre-annotated labeling. Then

ARI = [ Σ_{l,s} C(n_ls, 2) − t1·t2 / C(n, 2) ] / [ (t1 + t2)/2 − t1·t2 / C(n, 2) ],  with t1 = Σ_l C(a_l, 2) and t2 = Σ_s C(b_s, 2),

where l and s enumerate the k clusters, C(·, 2) denotes the binomial coefficient, n_ls = Σ_i 1(ĉ_i = l)·1(c_i = s), a_l = Σ_s n_ls and b_s = Σ_l n_ls, with the indicator function 1(x = y) that is one for x = y and zero otherwise. The ARI is one if the inferred labels correspond perfectly to the known labels, and it decreases with increasing disagreement. For the NMI, let p_l = a_l/n, q_s = b_s/n and r_ls = n_ls/n. Then H(ĉ) = −Σ_l p_l log p_l and H(c) = −Σ_s q_s log q_s are the respective entropies of the two clusterings, and I(ĉ, c) = Σ_{l,s} r_ls log( r_ls / (p_l·q_s) ) is their mutual information. The NMI is then defined as NMI = I(ĉ, c) / √( H(ĉ)·H(c) ). Like the ARI, the NMI is one for perfectly overlapping clusterings, and it decreases with increasing disagreement. It is bounded by zero from below. For ARI and NMI we report median values over 20 clustering runs in our clustering evaluation. We also evaluate clustering results after dimension reduction. To do so, we build on the results from evaluating dimension reduction with the NNE (see Section 2.4.2). For each similarity learning approach we assess the corresponding dimension reduction method with the smallest NNE and then perform standard k-means and hierarchical clustering in reduced dimensions. Results are then evaluated as described above. However, here we use Pearson correlation rather than Spearman correlation as the representative generic similarity measure, because it performs slightly better (see Table 3).
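Both metrics are available off the shelf; a short Python check against sklearn, assuming the geometric-mean normalization of the NMI (i.e. I / √(H1·H2)); the example labelings are made up:

```python
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

truth = [0, 0, 0, 1, 1, 1, 2, 2, 2]  # pre-annotated labels (made up)
pred = [0, 0, 1, 1, 1, 1, 2, 2, 2]   # inferred clustering (made up)

ari = adjusted_rand_score(truth, pred)  # here ari = 4.5/7 ≈ 0.643
# 'geometric' normalizes the mutual information by sqrt(H1 * H2)
nmi = normalized_mutual_info_score(truth, pred, average_method="geometric")

assert adjusted_rand_score(truth, truth) == 1.0  # perfect agreement
```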

2.5 Data used and software availability

Datasets used in the majority of our analyses are summarized in Table 1. The Patel, Pollen, Goolam and Treutlein datasets were downloaded from https://hemberg-lab.github.io/scRNA.seq.datasets/; the Usoskin, Buettner and Kolod datasets were downloaded from https://github.com/BatzoglouLabSU/SIMLR. The Engel and Lin datasets can be found in the supporting material of Lin and were downloaded from http://128.2.210.230:8080/; the label ‘Lin’ in our result tables refers to the combination of three primary datasets described in Section 2 there. Finally, the Leng dataset was obtained from https://bioinfo.uth.edu/scrnaseqdb/.
Table 1.

List of datasets analyzed and their attributes

Dataset | Number of cells | Number of genes | Number of populations | Sparsity (in %) | Units | References
Patel | 430 | 5948 | 5 | 0 | TPM | Patel et al. (2014)
Buettner | 182 | 9573 | 3 | 37 | FPKM | Buettner et al. (2015)
Engel | 203 | 21690 | 4 | 80 | TPM | Engel et al. (2016)
Kolod | 704 | 13473 | 3 | 10 | CPM | Kolodziejczyk et al. (2015)
Goolam | 124 | 41480 | 5 | 69 | CPM | Goolam et al. (2016)
Usoskin | 622 | 17772 | 4 | 78 | RPM | Usoskin et al. (2015)
Treutlein | 80 | 23271 | 5 | 90 | FPKM | Treutlein et al. (2014)
Leng | 460 | 19084 | 4 | 47 | TPM | Leng and Kendziorski (2015)
Pollen | 301 | 9966 | 11 | 67 | TPM | Pollen et al. (2014)
Lin | 402 | 9437 | 16 | 43 | TPM | Lin et al. (2017)
For our analysis underlying Figure 1, the Usoskin and Kolod datasets were re-downloaded to obtain normalized expression values without batch corrections. For Usoskin, data were downloaded from the ‘External resource Table 1’, available at http://linnarssonlab.org/drg/; for Kolod, data were downloaded from https://www.ebi.ac.uk/teichmann-srv/espresso/.
Fig. 1.

RAFSIL2 discovers unwanted variation. This figure shows tSNE plots for two datasets: data from Usoskin in the first row, and from Kolodziejczyk in the second row. Cells are colored according to biologically meaningful annotations in panels one and three, and according to technical covariates in panels two and four. In both datasets biological annotations are different cell types. Technical covariates are different picking sessions (first row) and different sequencing chips (second row). In the first row, we see that sub-structure in biologically meaningful groupings can be explained through technical variables for both methods. In the second row, this still holds true for RAFSIL2, but SIMLR does not highlight the unwanted technical variation present in the data (for more details see Section 3.2.2).

The RAFSIL R package is available at www.kostkalab.net/software.html.

3 Results

3.1 A random forest based approach for single cell similarity learning

Here we present RAFSIL, an RF based approach for learning similarities from single cell RNA-sequencing data. RF based similarity learning (Shi and Horvath, 2006) is a way to apply RFs (Breiman, 2001) to unsupervised learning and derive similarities between objects (Breiman and Cutler, 2003; Shi and Horvath, 2006). In particular, RF based similarity learning is robust to outliers and has built-in feature selection, which is appealing for analyzing high-dimensional and noisy data like single cell RNA sequencing profiles. We also note that this approach is fundamentally different from ensemble approaches working with multiple clusterings of a dataset, see (Yan , Section 3). To apply RF based similarity learning to single cell RNA sequencing (scRNA-seq) data, we implemented an approach we call RAFSIL. It is a two-step procedure: in the first step we pre-process scRNA-seq expression data (feature construction step), and in the second step we perform RF-based similarity learning (similarity learning step). The feature construction step is a heuristic approach designed to deal with the noise and sparsity typically present in scRNA-seq data (Yuan ). Briefly, we first find an orthogonal sub-space decomposition of the input space of cells, and then we describe each cell by its ‘local’ similarities to other cells in each sub-space separately, which we then aggregate into a final feature set. Details on the feature construction step are in Section 2.2. For the RF-based similarity learning step we explore two different approaches: RAFSIL1 and RAFSIL2. RAFSIL1 is a straightforward application of the methodology of Shi and Horvath (2006) to learn similarities between single cells described by the features recovered in our feature construction step. The general idea is to use RFs to discriminate between the real and a synthetic dataset, where the latter is derived from the real data by applying perturbations that destroy feature correlations.
Similarity between cells is then quantified by co-classification of pairs of cells via the same leaf across trees in the RF. For RAFSIL2, we apply RFs to unsupervised learning in a different way. For each feature, we quantize its values to derive class labels for cells, and then use the other features to predict these labels with an RF. Similarity is then quantified in the same way as described before. Details about RAFSIL1 and RAFSIL2 are in Sections 2.3.1 and 2.3.2. In the following, we show that RAFSIL1/2 compare favorably with current approaches across a variety of scenarios. We also show how the method enables identification of unwanted technical variation in scRNA-seq datasets.

3.2 Similarities learned by random forests accurately characterize single cell RNA sequencing data

We applied RAFSIL1 and RAFSIL2 to a diverse collection of single cell RNA sequencing datasets (Table 1) and compared their performance with state-of-the-art approaches. In our analyses, we distinguish three scenarios: similarity learning, dimension reduction and clustering. For similarity learning, we evaluate how well inferred pairwise similarities characterize pre-annotated cell populations (i.e. class labels for cells). For dimension reduction, we use the inferred similarities as features and project each cell into two dimensions. We then evaluate how well the resulting Euclidean distances between projected cells characterize pre-annotated cell populations. Finally, we also evaluate how accurately inferred similarities allow clustering algorithms to reproduce available class labels; we apply clustering algorithms in two settings: the originally inferred similarities, and similarities in reduced-dimensional projections inferred by dimension reduction approaches.

3.2.1 Similarity learning

We applied our RAFSIL algorithms to ten datasets (see Table 1) where labels for cell populations have been pre-annotated. We assess the learned similarities in terms of the NNE, which is the mis-classification rate of a nearest neighbor classifier (see Section 2.4.1 for details). We compare RAFSIL1/2 to SIMLR (Wang b), which performs similarity learning specifically for scRNA-seq data, and to (dis)similarities as assessed by Euclidean distance and by Spearman and Pearson correlation. For the latter three we assess three gene selection strategies: ALL, FRQ and HiE; see Section 2.4 for a more detailed description. Results are summarized in Table 2. We see that RAFSIL1/2 and SIMLR learn similarities that accurately characterize annotated cell populations (i.e. they have low NNE). We also find that RAFSIL and SIMLR substantially outperform Euclidean distance and the two correlation-based similarities, and that RAFSIL2 shows the best overall performance. For the Euclidean distance and the correlation-based approaches we also observe that focusing on highly expressed genes improves performance for all of them.
Table 2.

Nearest neighbor error values for similarity learning (in percent, lower is better)

Method | Patel | Buettner | Engel | Kolod | Goolam | Usoskin | Treutlein | Leng | Pollen | Lin | Average
RAFSIL1 | 1.6 | 3.8 | 1.0 | 0.0 | 2.4 | 2.6 | 10.0 | 5.0 | 3.7 | 4.7 | 3.5
RAFSIL2 | 1.4 | 3.8 | 0.0 | 0.0 | 3.2 | 0.8 | 6.2 | 4.1 | 4.3 | 5.2 | 2.9
SIMLR | 2.4 | 1.6 | 3.4 | 0.0 | 4 | 3.1 | (25)a | 14.8 | 3 | 6.2 | 6.0
Pearson-ALL | 1.9 | 57.7 | 38.9 | 9.7 | 3.2 | 10.5 | 20.0 | 49.6 | 12.3 | 14.4 | 21.8
Pearson-FRQ | 2.1 | 58.2 | 42.4 | 10.4 | 2.4 | 7.2 | 12.5 | 42.8 | 10.3 | 14.7 | 20.3
Pearson-HiE | 3.5 | 33.5 | 15.3 | 9.8 | 1.6 | 4.7 | 11.2 | 48.5 | 6.3 | 10.4 | 14.5
Spearman-ALL | 2.8 | 57.7 | 12.8 | 0.9 | 0.8 | 15.1 | 28.8 | 58.7 | 2.0 | 13.7 | 19.3
Spearman-FRQ | 1.9 | 57.7 | 10.3 | 0.9 | 0.8 | 10.1 | 8.8 | 44.6 | 1.7 | 13.2 | 15.0
Spearman-HiE | 14.4 | 43.4 | 9.9 | 1.8 | 2.4 | 7.4 | 10.0 | 29.1 | 5.3 | 8.5 | 13.2
Euclidean-ALL | 30.0 | 51.6 | 48.3 | 24.7 | 2.4 | 14.5 | 21.2 | 44.6 | 6.0 | 22.4 | 26.6
Euclidean-FRQ | 2.1 | 57.7 | 39.9 | 10.5 | 2.4 | 7.4 | 12.5 | 45.9 | 9.3 | 13.7 | 20.1
Euclidean-HiE | 4.0 | 33.5 | 13.8 | 8.8 | 1.6 | 3.7 | 12.5 | 47.4 | 7.0 | 10.7 | 14.3

ALL, all expressed genes; FRQ, frequency-filtered genes; HiE, highly-expressed genes.

The best-performing method in each column is in boldface.

Parentheses indicate that SIMLR was run with different parameters for this dataset.


3.2.2 Dimension reduction

We performed dimension reduction on the learned similarities obtained from RAFSIL1/2, and compared results with the same methods used in the previous section: SIMLR and Euclidean distance, as well as Spearman and Pearson correlation. We again use the NNE as a quality metric (on Euclidean distances in the reduced-dimensional space, for all methods), and results are summarized in Table 3. As a baseline approach we also included dimension reduction applied directly to the expression data (Data in Table 3); this is different from the other methods, where we apply dimension reduction to cells described by their similarities with other cells (see Section 2.4.2). We observe that dimension reductions obtained using tSNE (van der Maaten and Hinton, 2008) perform better (on average) than those obtained with PCA or pPCA. Interestingly, we find that (dis)similarities in the reduced-dimensional space almost always perform better than the original (dis)similarities (see Table 3). The main exception is RAFSIL2, which performs better using the original similarities. We again see that approaches designed for scRNA-seq typically outperform more generic methods, and RAFSIL1 and RAFSIL2 have lower NNE compared with SIMLR. We note that Spearman correlation on highly-expressed genes, followed by tSNE, has good average performance, comparable with RAFSIL1/2 and SIMLR. We also visualize results from similarity learning and dimension reduction in Supplementary Figure S1.
We find clear differences in the inferred similarities between methods for some datasets (especially for Leng and Usoskin, but also for Buettner), and this is reflected in the respective two-dimensional projections. Overall, RAFSIL and SIMLR are able to more clearly separate cell populations compared with Euclidean distance and Spearman correlation. Also, we note that the good performance of RAFSIL2 (in terms of NNE, see Table 3) is clearly reflected, probably most pronounced for the Leng dataset. Overall, this shows that RAFSIL2 can improve the visualization (and therefore discovery) of group/population structure in scRNA-seq data. In practice, dimension reduction is typically used for exploratory data analysis, for instance to find group structure in the data that might correspond to novel (sub)populations of cells. However, it can also be a valuable tool for data quality control, for instance when color coding additional information about cells (covariates) in a two-dimensional projection of the data. Figure 1 demonstrates this approach. The first row depicts tSNE plots for the Usoskin dataset, with RAFSIL2 projections in the first two panels and SIMLR projections in panels three and four. Color coding each cell with biological labels (four principal neuronal types), we see a clear separation with both approaches (panels one and three), but with substantial structure inside each neuronal cell type. Panels two and four reveal that this structure is likely a technical artifact. In these panels, we color the cells according to a technical variable (different cell picking sessions). For both approaches, RAFSIL and SIMLR, we clearly see that the perceived sub-structure in different neuronal types can largely be explained by the picking session. For clarity, we have annotated one cell type (tyrosine hydroxylase containing neurons) in panels one and three with the colors of the technical annotation in the adjacent plot that correlate with prominent sub-clusters.
The second row in Figure 1 is set up in the same way, this time using the Kolodziejczyk data. Here, the biological color coding corresponds to different culturing conditions of mouse embryonic stem cells, while the technical variable denotes different sequencing chips. In the RAFSIL representation (panels one and two), we again see sub-structure in the biological annotation that perfectly corresponds to the technical annotation (different sequencing chips). For this dataset SIMLR also recapitulates the biological group structure (panel three), but does not pick up the presence of confounding technical variation (panel four). In summary, Figure 1 shows that RAFSIL can detect unwanted technical variation in scRNA-seq data, including cases where other methods do not. We note that in both publications the authors corrected for batch effects; we used the uncorrected data for these analyses. In practice, this type of approach is mainly useful to assess whether corrections for known technical factors were successful, or to rule out that discovered group structure corresponds to known covariates. We also note that the choice of dimension reduction technique plays a role in these analyses; for instance, when using PCA instead of tSNE the technical structure becomes considerably less apparent (data not shown). However, this is not unexpected, given the good performance of tSNE as a dimension reduction method (see Table 3).

3.2.3 Clustering

Next, we explored the performance of RAFSIL1/2 in terms of cell clustering, which is commonly used to discover population/group structure in scRNA-seq data and constitutes an essential step for most analyses in this field. To do so, we used the dissimilarities learned by RAFSIL1/2 in two ways: (i) to perform hierarchical clustering of cells (HC) and (ii) as input for k-means clustering (KM), taking each cell as the vector of its similarities with all cells in the dataset. We use the ARI and NMI as quality measures (see Section 2.4.3 for details), and results are summarized in Table 4. As before, we compared RAFSIL1/2 with SIMLR and Spearman correlation, and added the direct application of HC and KM to the expression data (Data in Table 4). Because there are more methods for clustering scRNA-seq data than for similarity learning, we included additional comparisons with SC3, SINCERA and pcaReduce, which do not implement similarity learning but perform clustering directly.
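The two uses of a learned dissimilarity matrix described above can be sketched as follows. This is an illustrative Python sketch on a toy matrix, not the RAFSIL R implementation; the specific linkage choice ("average") is an assumption for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score, normalized_mutual_info_score

rng = np.random.default_rng(1)

# Toy stand-in for a learned cell-cell dissimilarity matrix (three groups).
truth = np.repeat([0, 1, 2], 40)
dis = 1.0 - (truth[:, None] == truth[None, :]) + 0.1 * rng.random((120, 120))
dis = (dis + dis.T) / 2
np.fill_diagonal(dis, 0.0)

k = 3
# (i) Hierarchical clustering directly on the dissimilarities.
hc = fcluster(linkage(squareform(dis, checks=False), method="average"),
              t=k, criterion="maxclust")
# (ii) k-means, treating each cell as the vector of its (dis)similarities
#      with all cells in the dataset.
km = KMeans(n_clusters=k, n_init=10, random_state=1).fit_predict(dis)

for name, lab in [("HC", hc), ("KM", km)]:
    print(name,
          f"ARI={100 * adjusted_rand_score(truth, lab):.1f}",
          f"NMI={100 * normalized_mutual_info_score(truth, lab):.1f}")
```

Both ARI and NMI are invariant to label permutations, so cluster IDs from `fcluster` (1-based) and `KMeans` (0-based) can be compared to the ground truth directly.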
Table 4.

ARI and NMI values for clustering methods across ten datasets (in percent, higher is better). Each cell gives ARI/NMI.

Method             Patel       Buettner    Engel       Kolod         Goolam      Usoskin     Treutlein        Leng        Pollen      Lin         Average
RAFSIL1-KM         89.6/88.4   93.5/90.5   27.7/47.0   100.0/100.0   54.4/73.5   76.9/77.8   34.8/59.2        49.7/63.2   84.4/92.0   51.9/73.6   66.3/76.5
RAFSIL1-HC         95.8/94.3   90.4/87.1   34.6/46.3   100.0/100.0   91.4/90.6   75.0/73.0   54.3/68.9        43.4/58.4   85.1/93.6   53.1/76.7   72.3/78.9
RAFSIL2-KM         88.5/87.5   81.6/76.6   75.8/76.8   100.0/100.0   54.4/73.5   64.7/75.4   55.3/72.4        39.1/50.1   82.6/91.8   49.2/72.5   69.1/77.7
RAFSIL2-HC         97.0/95.5   84.3/80.6   93.4/92.6   100.0/100.0   91.4/90.6   92.6/90.0   88.9/82.4        36.7/53.0   91.6/95.5   54.7/81.2   83.1/86.1
SIMLR              80.9/84.9   88.8/88.8   10.6/25.7   100.0/100.0   47.1/65.5   66.0/72.8   (23.8)/(45.6)a   24.0/34.4   84.4/92.2   42.2/74.2   56.8/68.4
SC3                98.9/98.4   88.7/86.1   46.0/64.2   100.0/100.0   54.4/73.5   84.5/81.6   54.3/63.1        32.8/55.5   95.8/95.3   58.8/82.1   71.4/80.0
pcaReduce          47.8/60.3   39.8/45.9   17.4/18.2   96.1/94.2     45.9/62.2   54.7/60.4   37.6/38.6        21.7/25.5   89.1/93.1   51.3/74.4   50.1/57.3
SINCERA            91.3/89.8   50.7/47.6   23.0/31.1   99.6/99.2     39.3/58.0   52.4/61.7   27.8/50.5        8.7/12.3    85.5/93.4   45.5/69.4   52.4/61.3
Spearman-HiE-KM    35.0/46.2   25.4/33.3   67.7/63.6   45.7/51.2     64.7/80.3   28.4/35.4   62.2/74.7        5.6/10.0    80.4/89.2   46.4/71.7   46.1/55.6
Spearman-HiE-HC    20.2/44.8   0.1/2.1     47.0/53.0   0.1/0.6       59.1/76.1   0.3/1.3     64.1/71.2        0.3/2.7     9.5/38.3    25.8/68.8   22.7/35.9
Data-HiE-KM        78.1/75.6   38.5/42.2   15.1/17.9   63.1/75.3     42.3/48.0   28.9/37.0   18.9/33.4        3.4/13.9    71.2/84.9   51.8/76.5   41.1/50.5
Data-HiE-HC        20.4/36.9   4.5/17.1    10.4/11.8   0.2/0.8       33.5/41.3   5.0/9.4     32.8/37.7        -0.6/0.8    7.9/35.9    8.9/42.4    12.3/23.4

KM, k-means; HC, hierarchical clustering; HiE, highly expressed genes.

The best-performing method in each column is in boldface.

a Parentheses indicate that SIMLR was run with different parameters for this dataset.
We see that domain-specific methods for scRNA-seq clustering perform well, and that RAFSIL2 (using hierarchical clustering) has the best average performance, with SC3 and RAFSIL1-KM performing better on some datasets (Buettner, Patel and Leng). Interestingly, k-means clustering appears to perform better when applied directly to the data or in the context of Spearman correlation, while hierarchical clustering works better for RF-derived distances. Motivated by our previous result of decreased NNE for reduced-dimension embeddings obtained with tSNE, we applied clustering after dimension reduction for the methods we studied before (clustering-only approaches do not allow for dimension reduction). Results are summarized in Table 5; see Section 2.4.3 for details on the methods. As before, we observe overall better clustering performance when using data with reduced dimensionality, again with the exception of RAFSIL2, which performs better in high dimensions. Also, comparing clustering results with similarity learning results, we find agreement: on the original dissimilarity matrices, RAFSIL2 had the smallest NNE and also the best clustering performance; in reduced dimensions, RAFSIL1 has the smallest NNE and also shows the best clustering performance. We finally note that RAFSIL2 performing worse than RAFSIL1 in this scenario is driven by its poor performance on the Kolod dataset. This relates to our previous discussion of Figure 1: batch effect removal may not have been successful for this dataset, and RAFSIL2's clustering performance reflects the situation depicted in the first two panels of the second row, where cell groupings induced by a technical covariate (different sequencing chips) dominate biological variation.
Table 5.

ARI and NMI values for clustering methods across ten datasets after dimension reduction (in percent, higher is better). Each cell gives ARI/NMI.

Method              Patel       Buettner    Engel       Kolod         Goolam      Usoskin     Treutlein        Leng        Pollen      Lin         Average
RAFSIL1-tSNE-KM     96.8/95.7   93.5/90.5   44.7/55.9   100.0/100.0   54.4/73.5   61.7/71.5   33.8/58.2        46.4/61.1   89.0/94.9   48.6/74.1   66.9/77.5
RAFSIL1-tSNE-HC     93.4/91.8   93.5/90.5   26.6/46.3   100.0/100.0   54.4/73.5   64.6/75.8   54.8/70.9        46.6/62.4   89.2/94.9   50.1/76.3   67.7/78.2
RAFSIL2-tSNE-KM     97.5/96.3   87.3/83.1   26.6/46.3   34.9/41.7     54.4/73.5   65.5/77.1   55.0/72.4        48.7/60.0   88.0/93.3   42.1/71.2   60.0/71.5
RAFSIL2-tSNE-HC     97.5/96.3   87.5/85.0   24.8/45.1   30.9/38.9     54.4/73.5   65.9/78.5   55.8/72.5        30.9/46.2   87.5/93.3   48.8/73.6   58.4/70.3
SIMLR-tSNE-KM       90.8/89.6   88.8/88.8   10.6/25.7   100.0/100.0   47.1/65.5   66.0/73.4   (27.3)/(30.0)a   47.1/65.5   82.4/90.5   41.3/71.8   60.1/70.1
SIMLR-tSNE-HC       80.9/84.9   88.8/88.8   10.6/25.7   100.0/100.0   47.1/65.5   66.0/73.4   (40.7)/(41.7)a   47.7/65.5   72.5/88.4   42.1/74.2   59.6/70.7
Data-tSNE-KM        71.5/72.2   33.4/33.0   18.0/18.9   92.6/90.0     35.8/52.6   84.9/80.2   31.6/55.1        16.7/26.4   82.3/88.8   54.4/78.5   52.1/59.6
Data-tSNE-HC        66.4/67.3   25.6/29.7   24.1/29.3   59.2/63.4     45.9/62.7   80.4/74.5   40.5/52.7        1.8/9.5     94.3/93.4   55.0/77.5   49.3/56.0
Pearson-tSNE-KM     88.5/86.2   29.2/33.6   28.9/35.4   100.0/100.0   58.2/74.0   64.9/66.6   40.2/62.0        6.9/10.9    78.6/91.1   48.1/73.0   54.4/63.3
Pearson-tSNE-HC     87.5/85.3   27.3/35.6   33.5/51.1   100.0/100.0   48.5/71.2   63.6/66.0   53.3/65.2        14.7/17.2   84.3/92.8   42.9/72.7   55.5/65.7

KM, k-means; HC, hierarchical clustering.

The best-performing method in each column is in boldface.

a Parentheses indicate that SIMLR was run with different parameters for this dataset.

To assess the robustness of clustering solutions, we randomly excluded 10% of cells from each dataset and re-ran each clustering approach 20 times. Figure 2 summarizes the results. We see substantial variability in the ARI for most datasets and most methods across re-sampling runs; in terms of performance as measured by ARI averaged across datasets, RAFSIL2 (with hierarchical clustering) performs best, with SC3 coming in second. This is consistent with our previous results obtained with the full data (see Table 4). Next, we quantified variability by calculating the interquartile range (IQR) of the ARI across re-sampling runs for each method on each dataset, and then averaging across datasets (aIQR). SC3 exhibits the most stable clustering solutions (5% aIQR); RAFSIL2-HC is slightly worse with 7% aIQR, but slightly better than SIMLR, which has 8% aIQR. pcaReduce performs worst in terms of stability, with an aIQR of 14%. Overall, we find that RAFSIL produces relatively stable clustering solutions with good ARI.
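The re-sampling protocol above can be sketched as follows. This is an illustrative Python sketch on a toy dissimilarity matrix; the clustering step stands in for any of the compared methods, and the linkage choice is an assumption for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(2)

# Toy dissimilarity matrix with two noisy groups of cells.
truth = np.repeat([0, 1], 60)
dis = 1.0 - (truth[:, None] == truth[None, :]) + 0.6 * rng.random((120, 120))
dis = (dis + dis.T) / 2
np.fill_diagonal(dis, 0.0)

aris = []
for _ in range(20):                      # 20 re-sampling runs
    keep = rng.permutation(120)[:108]    # randomly retain 90% of the cells
    sub = dis[np.ix_(keep, keep)]
    lab = fcluster(linkage(squareform(sub, checks=False), method="average"),
                   t=2, criterion="maxclust")
    aris.append(adjusted_rand_score(truth[keep], lab))

# Spread of the ARI across runs: the interquartile range (IQR).
q1, q3 = np.percentile(aris, [25, 75])
print(f"median ARI={100 * np.median(aris):.1f}%, IQR={100 * (q3 - q1):.1f}%")
```

Averaging the per-dataset IQR values across datasets gives the aIQR statistic reported in the text.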
Fig. 2.

RAFSIL2 yields accurate and robust clustering solutions. Panels are box plots of the ARI for ten datasets, across 20 instances of randomly sampling 90% of the available cells. The panel labeled 'Average' represents the mean performance across all ten datasets. We see that RAFSIL2 followed by hierarchical clustering has the best performance, followed by SC3 and then the other RAFSIL-type methods. In terms of robustness SC3 performs best, while pcaReduce shows the highest variability (see Section 3.2.3 for a more detailed discussion). KM, k-means; HC, hierarchical clustering; HiE, highly expressed genes.

Here, we ask whether RAFSIL can estimate the number of populations present in a scRNA-seq dataset. Briefly, we apply RAFSIL1/2 followed by hierarchical clustering (RAFSIL1/2-HC) and retrieve the corresponding series of cell partitions with increasing cluster numbers. To these we apply the Calinski–Harabasz criterion (Calinski and Harabasz, 1974), where each cell is described by its corresponding row in the scaled feature matrix (see Section 2.2). We compare RAFSIL with SC3 and SINCERA in Supplementary Table S1. We find that RAFSIL1/2 perform well (RAFSIL2-HC is amongst the most accurate methods for the most datasets), but overall there is little difference between the approaches. In addition to the analyses described above, we also compared our method to the neural network based approach of Lin et al. The authors provide the data they used to assess their method, so we calculated performance metrics for RAFSIL1/2 and SC3 (without any gene filtering, to be consistent with the authors) and compared them to Table 2 of their publication. Results are shown in Supplementary Table S2, where everything except the RAFSIL1/2 and SC3 lines has been taken from their publication. We see that the RAFSIL approaches (especially RAFSIL2) are competitive with the neural network based approach, even though we do not make use of a supervised training phase.
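The cluster-number estimation described above, scoring a series of hierarchical partitions with the Calinski–Harabasz criterion, can be sketched as follows. This is an illustrative Python sketch on synthetic features standing in for the scaled feature matrix; the Ward linkage and the candidate range of k are assumptions for the example.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from sklearn.metrics import calinski_harabasz_score

rng = np.random.default_rng(3)

# Toy "scaled feature matrix": 150 cells from three well-separated
# populations in 20 dimensions.
centers = 4 * rng.normal(size=(3, 20))
feats = np.vstack([c + rng.normal(size=(50, 20)) for c in centers])

# Series of partitions with increasing cluster numbers from hierarchical
# clustering, each scored with the Calinski-Harabasz criterion on the
# feature representation of the cells.
Z = linkage(feats, method="ward")
scores = {k: calinski_harabasz_score(feats,
                                     fcluster(Z, t=k, criterion="maxclust"))
          for k in range(2, 9)}
best_k = max(scores, key=scores.get)
print("estimated number of populations:", best_k)
```

The criterion rewards partitions with high between-cluster and low within-cluster dispersion, so for clearly separated populations it peaks at the true cluster number.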
We also studied the clustering performance of RAFSIL1/2 performing only the feature construction step, and only the similarity learning step, respectively. Results are summarized in Supplementary Table S3. We see that RAFSIL1/2 outperform these ‘reduced’ approaches, highlighting the value of both of these steps in our approach. Nevertheless, feature construction alone followed by k-means clustering also performs well overall.

4 Discussion and conclusion

We have presented RAFSIL, a two-step approach for learning similarities between single cells based on whole transcriptome sequencing data. Accurately inferring such similarities is an important step in single cell RNA sequencing studies, because they form the basis for identification, visualization and interpretation of group structure. Reliable and accurate inference of group structure, in turn, is necessary for discovery of new (sub)types of cells, for improved characterization and understanding of existing cell types, for decoding the cellular composition of healthy (and abnormal) tissue types, and more. We analyzed a diverse collection of datasets and showed that RAFSIL performs well in similarity learning, on average outperforming SIMLR (to our knowledge the only other similarity learning approach geared specifically towards the scRNA-seq domain) as well as several generic approaches. In addition, the SIMLR algorithm requires a known (or pre-determined) number of clusters to calculate similarities, but reasonable estimates are not always available in practice; RAFSIL has no such requirement. We also showed that RAFSIL similarities improve dimension reduction and data visualization, and that they can be used to discover unwanted technical variation in single cell RNA sequencing datasets. Finally, comparing clustering solutions obtained with RAFSIL similarities against state-of-the-art methods, we showed that RAFSIL2 followed by hierarchical clustering is highly competitive, outperforming all other methods on average, and also individually on most datasets we studied. RAFSIL implements a two-step procedure, first feature construction and then similarity learning using random forests (RFs); it is flexible and easy to modify, expand and optimize. Our current feature construction step is a heuristic that reflects what we found to work well with the scRNA-seq data we studied, but it is meant to be adapted as technology (and methodology) develops.
For instance, including prior information about groups of genes (for example based on functional annotation databases) may improve performance. Likewise, we presented two strategies for applying RFs to unsupervised similarity learning (RAFSIL1 and RAFSIL2), but different, perhaps more principled, approaches can be imagined. Currently, the running time of the RAFSIL algorithms is comparable to that of methods like SC3 and SIMLR, and datasets on the order of a thousand cells can be analyzed without any problems. However, a truly large-scale implementation for datasets with hundreds of thousands of cells (or more) would be desirable and is one of our future research directions. Some limitations of our study include that, while we compared RAFSIL extensively, our work is not exhaustive and results are restricted to the data we analyzed. However, we cover a variety of scRNA-seq technologies and computational approaches, and exhaustive comparisons considering all combinations of reasonable choices for gene filtering, dimension reduction and clustering across many datasets quickly become infeasible. Along the same lines, we report that dimension reduction improves similarity learning and clustering, but only study projection into two-dimensional spaces (k = 2). While exploring larger choices of k might in principle be worthwhile for some methods, the fact that tSNE performed clearly best in our analysis argues against it: tSNE is known to perform well for projection into two or three dimensions, but runs into problems for higher k (van der Maaten and Hinton, 2008). Furthermore, we (and others) compare methods based on performance metrics like averages over adjusted Rand indices (aARI) or average NMI. However, our re-sampling experiment assessing the robustness of clustering solutions (by repeatedly leaving out a random 10% of cells in a given dataset) yields interquartile ranges of the aARI between 5% and 14% (depending on the clustering method used).
This implies that small performance differences are typically not robust to changes in a small number of cells in a dataset. While these values might be affected by the relatively small number of re-sampling runs (20), we believe this highlights the need for this type of analysis in performance comparisons for single cell RNA-seq methodology in general. To summarize, we presented RAFSIL, a random forest based approach for similarity learning from single cell RNA sequencing data. We showed that it performs well on a variety of datasets, and we believe it will be a useful tool for bioinformatics researchers working in this domain.
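To make the random forest similarity idea concrete, the following is a minimal, generic sketch of Breiman-style unsupervised RF similarity, one standard way to apply RFs to this task. It is illustrative only, on toy data, and is not the exact RAFSIL1/2 algorithm: the real data are contrasted with "synthetic" data obtained by permuting each feature independently, a classifier is trained to tell them apart, and the similarity of two cells is the fraction of trees in which they land in the same leaf.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

# Toy feature matrix: 100 cells x 30 features, two groups.
truth = np.repeat([0, 1], 50)
X = rng.normal(size=(100, 30)) + 3 * truth[:, None]

# Synthetic contrast data: permute each feature column independently,
# destroying the joint structure while keeping the marginals.
synth = np.column_stack([rng.permutation(col) for col in X.T])
Xy = np.vstack([X, synth])
y = np.r_[np.ones(len(X)), np.zeros(len(synth))]

# Train a forest to separate real from synthetic cells, then read off
# leaf co-occurrence of the real cells as their similarity.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(Xy, y)
leaves = rf.apply(X)  # (n_cells, n_trees) leaf indices per tree
sim = np.mean(leaves[:, None, :] == leaves[None, :, :], axis=2)

print("similarity matrix shape:", sim.shape)
```

Cells from the same underlying group tend to traverse the trees together, so their pairwise similarity exceeds that of cells from different groups; 1 - sim then serves as a dissimilarity for clustering or embedding.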
References (25 in total; first 10 shown)

1. Usoskin D, Furlan A, Islam S, et al. Unbiased classification of sensory neuron types by large-scale single-cell RNA sequencing. Nat Neurosci, 2014.
2. Pouyan MB, Nourani M. Clustering Single-Cell Expression Data Using Random Forest Graphs. IEEE J Biomed Health Inform, 2016.
3. Kumar P, Tan Y, Cahan P. Understanding development and stem cells using single cell-based analyses of gene expression. Development, 2017. (Review)
4. Patel AP, Tirosh I, Trombetta JJ, et al. Single-cell RNA-seq highlights intratumoral heterogeneity in primary glioblastoma. Science, 2014.
5. Seligson DB, Horvath S, Shi T, et al. Global histone modification patterns predict risk of prostate cancer recurrence. Nature, 2005.
6. Svensson V, Natarajan KN, Ly LH, et al. Power analysis of single-cell RNA-sequencing experiments. Nat Methods, 2017.
7. Ramirez KS, Knight CG, de Hollander M, et al. Detecting macroecological patterns in bacterial communities across independent studies of global soils. Nat Microbiol, 2017.
8. Leng N, Chu LF, Barry C, et al. Oscope identifies oscillatory genes in unsynchronized single-cell RNA-seq experiments. Nat Methods, 2015.
9. Pollen AA, Nowakowski TJ, Shuga J, et al. Low-coverage single-cell mRNA sequencing reveals cellular heterogeneity and activated signaling pathways in developing cerebral cortex. Nat Biotechnol, 2014.
10. Goolam M, Scialdone A, Graham SJL, et al. Heterogeneity in Oct4 and Sox2 Targets Biases Cell Fate in 4-Cell Mouse Embryos. Cell, 2016.