
An Overview of Algorithms and Associated Applications for Single Cell RNA-Seq Data Imputation.

Zarrin Basharat, Sania Majeed, Humaira Saleem, Ishtiaq Ahmad Khan, Azra Yasmin.

Abstract

Single cell RNA-Seq technology enables the assessment of RNA expression in individual cells, making it popular in experimental biology for characterizing novel cell types as well as inferring heterogeneity. Experimental data conventionally contain zero counts, or dropout events, for many single cell transcripts. Such missing data hamper accurate analysis with standard workflows designed for bulk RNA-Seq datasets. Imputation is performed on single cell datasets to infer the missing values. This was traditionally done with ad-hoc code, but customized pipelines, workflows and specialized software later appeared for the purpose, making benchmarking and clustering easier to perform in an organized manner. In this review, we assemble a catalog of available single cell RNA-Seq imputation algorithms/workflows and associated software for the scientific community performing single-cell RNA-Seq data analysis. Continued development of imputation methods, especially those using deep learning approaches, will be necessary to eliminate associated pitfalls and to address the challenges posed by future large-scale and heterogeneous datasets.
© 2021 Bentham Science Publishers.


Keywords:  RNA-Seq; Single cell; algorithms; analysis; heterogeneity; imputation

Year:  2021        PMID: 35283664      PMCID: PMC8844944          DOI: 10.2174/1389202921999200716104916

Source DB:  PubMed          Journal:  Curr Genomics        ISSN: 1389-2029            Impact factor:   2.689


BACKGROUND

Single cell RNA sequencing (RNA-Seq) is a cutting-edge technique, introduced in 2009, that can dissect the cellular heterogeneity of a plethora of cells [1]. Single cell RNA-Seq plays a phenomenal role in the identification of specific markers of the same cell type, fluctuating states of phenotypically identical cells, intra-population heterogeneity at microscopic resolution, transcript dynamics and cell-to-cell variability of the transcriptome [2, 3]. It has facilitated the construction of an extensive atlas of phenotypically similar human cells [4] and paved the way for researchers to initiate the "Human Cell Atlas" project [5, 6], which aims to map and quantify all cell types in the body and would be useful for diagnosis and disease treatment. Above all, single-cell study supports unbiasedness in diverse research areas: treatment of many diseases by unmasking the presence of rare sub-populations of cells (e.g. cancer stem cells), underlying mechanisms in common diseases (e.g. kidney diseases) [3], reconstruction of genetic lineage trajectories, embryonic development [7], evolution and genomic diversity of bacterial ecosystems [8], etc.
All this is not without problems, and single-cell sequencing has to deal with several challenges, such as dropout events and a high level of noise, because the small amount of RNA from a single cell requires amplification, which is susceptible to damage, contamination and distortion [9]. Minimal expression may be read as zero by the computer, and the resulting loss of information impedes proper downstream analysis. To deal with this problem, computer programs based on logical and coherent algorithms are required for replacing missing or negligible values with substitute values, derived using certain formulas (either based on prior information or trained on the dataset under study). This derivation of missing values and associated information is called imputation and is a critical component of single-cell data analysis.
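As a toy illustration of what imputation means in this setting, the sketch below (our own minimal example, not any published algorithm) fills the zeros of each cell with the average of that gene across the most similar cells:

```python
import numpy as np

# Toy counts: 4 cells x 3 genes.
counts = np.array([
    [5.0, 0.0, 2.0],   # zero in gene 1 is a suspected dropout
    [6.0, 3.0, 2.0],
    [5.0, 4.0, 1.0],
    [0.0, 0.0, 9.0],   # a different cell type; its zeros may be biological
], dtype=float)

def impute_by_neighbors(x, k=2):
    """Replace each zero with the mean of that gene in the k most
    similar cells (Euclidean distance on the raw counts)."""
    out = x.copy()
    for i in range(x.shape[0]):
        d = np.linalg.norm(x - x[i], axis=1)
        neighbors = np.argsort(d)[1:k + 1]   # skip the cell itself
        zeros = np.where(x[i] == 0)[0]
        out[i, zeros] = x[neighbors][:, zeros].mean(axis=0)
    return out

imputed = impute_by_neighbors(counts)
```

Note that this naive scheme also fills the zeros of the last cell, which may well be biological zeros: this is exactly why distinguishing technical from biological zeros, discussed throughout this review, matters.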
An algorithm is a defined set of clear instructions implementable on a computer. Usually, it addresses a problem and provides a solution through computation. With an avalanche of data from sequencing platforms, algorithms and programs that address the challenges of machine-derived biological data and solve problems in computational biology have been in the limelight. Single cell technology produces large amounts of data, but the issue of missing data obstructs accurate transcriptomic studies. Algorithms have been designed to address this shortfall and impute missing or dropout values. We aim to provide an overview of such algorithms, which could be useful for scientists working with single cell RNA-Seq.

LITERATURE SEARCH AND CONSPECTUS

We searched for 'single cell' and 'imputation' in PubMed (dated 22 April, 2020). Inclusion and exclusion criteria are mentioned in (Fig. ). Imputation methods for inferring proteins from RNA-Seq and phylogenetically coupled genotype analyses were also eliminated from the study. Only those with imputation analysis specific to RNA-Seq data were retained and categorized into three major types according to Lähnemann et al. [10]: (1) model-based, (2) data smoothing and (3) data reconstruction (low-rank matrix-based or deep learning) methods. Algorithms integrating several approaches and falling under more than one category (such as Seurat, which falls under random forest-based machine learning as well as low-rank matrix-based methods) were listed only once. Programs such as EnImpute [11], which combine the output of several software packages in an ensemble, were not listed. For the chosen approaches, the full text was downloaded for each algorithm/tool, and programming language, operating system (Windows/Unix) and working link information was obtained. Method and implementation of each workflow were taken into account and the acquired information was summarized. To the best of the authors' knowledge, this is the first comprehensive review of single cell RNA-Seq imputation software.

ALGORITHMS EMPLOYING MODEL-BASED APPROACH

The first group of algorithms enforces a model-based approach to represent the data sparsity and hence perform imputation. Such probabilistic models may or may not be able to differentiate between technical and biological zeros; where they can separate the two, gene expression is usually imputed only for the technical ones. Nine such algorithms were identified, listed in (Table 1). The first model-based method specific to single cell RNA-Seq data was presented in a JMLR workshop in 2016 under the name BISCUIT (Bayesian Inference for Single-cell ClUstering and ImpuTing) [12]. It implements a Dirichlet process mixture model to iteratively normalize, cluster and impute expression, and was the first fully Bayesian model for grouping, normalizing and imputing single-cell expression data. Biological and technical variation is resolved without spike-ins, a Gaussian distribution is used for the gene-cell distribution, and imputation is inferred using Gibbs sampling. Subsequently, SAVER (Single-cell Analysis Via Expression Recovery) [13] was reported, which pools information across genes to infer transcript counts, using adaptive shrinkage under a Poisson-Gamma (negative binomial) model. SAVER-X (SAVER via harnessing eXternal data) [14] is an extension of the program that couples the Bayesian approach to an autoencoder, making learned analyses from UMI counts possible. Gene-gene relationship information is transferred across heterogeneous data (varying conditions, species, etc.) to impute a new dataset. It provides uncertainty coefficients, but the associated computational intensity makes it less useful for large datasets; it is now implemented as a web app as well. scImpute [2] is a scalable method that performs imputation only on dropout entries, by estimating the dropout probability of a specified gene in similar cells. This is done by fitting a Gamma-Gaussian mixture model on cell clusters.
It can analyze heterogeneous datasets and is robust, but does not provide uncertainty quantification and may oversmooth the data. The Granatum package [15] implements this algorithm. scRecover [16] uses zero-inflated negative binomial (ZINB) regression for maximum-likelihood-based expression imputation; it combines its values with those of scImpute and other algorithms (SAVER, MAGIC) for the final imputation. scUnif [17] is a supervised learning method employing a Bayesian approach with an expectation-maximization algorithm, coupled with Gibbs sampling. It analyzes single cell as well as bulk data; in bulk data, dropout inference and deconvolution are concurrent. VIPER [18] accomplishes iterative imputation using a sparse set of neighboring cells, with a non-negative sparse regression model used to estimate expression. It is computationally efficient but does not provide uncertainty coefficients for imputed values. scGAIN [19] applies adversarial learning to construct a generative network model for imputation. The generator and discriminator networks are trained on batches of 128 cells per round, followed by mask matrix formation. The method identifies the right entries for imputation, and generated data points with characteristics analogous to the existing data help infer the data distribution; imputed zero values are determined from mean expression. bayNorm [20] applies a novel Bayesian approach for normalizing data and deducing expression features. Informed data structures consolidate accuracy and sensitivity in differential expression analysis, and it can be used for UMI- and non-UMI-based data. A likelihood function coupled with a binomial model of mRNA transcript capture is utilized after scaling, enabling it to capture the mean-variance and mean-dropout relationships. Generated transcript distributions (2D, using a point estimate from the posterior, or 3D, using the posterior distribution) resemble single-molecule fluorescence in situ hybridization (FISH) measurements. It exhibits high scalability coupled with computational efficiency and is also useful for heterogeneous data.
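The shrinkage idea behind SAVER-style models can be sketched in a few lines. This is not SAVER's actual model (which builds each gene's prior from predictions based on other genes); it is a bare empirical-Bayes Poisson-Gamma posterior mean, with the prior fit by method of moments across cells for one hypothetical gene:

```python
import numpy as np

def poisson_gamma_shrink(y):
    """Empirical-Bayes recovery for Poisson counts under a Gamma(a, b)
    prior fit by method of moments across cells (single gene).
    Posterior mean = (a + y) / (b + 1)."""
    m, v = y.mean(), y.var()
    v = max(v, m + 1e-8)            # keep the prior overdispersed
    b = m / (v - m + 1e-8)          # method-of-moments rate
    a = m * b                       # method-of-moments shape
    return (a + y) / (b + 1.0)

# Observed counts for one gene across six cells; the zeros are
# shrunk upward toward the gene mean, large counts are shrunk downward.
y = np.array([0, 2, 3, 0, 5, 4], dtype=float)
recovered = poisson_gamma_shrink(y)
```

The key behavior shared with the published model: observed zeros receive a positive posterior estimate rather than staying at zero, while extreme counts are moderated toward the prior mean.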

ALGORITHMS EMPLOYING DATA SMOOTHING APPROACH

Data smoothing is a technique to remove noise and extract important patterns from a dataset. Different models employ random or exponential smoothing, or variants of these approaches. For single cell data imputation, smoothing is achieved through the identification of the nearest neighbors of a cell. The second class of algorithms for single cell data imputation employs this approach; eight algorithms and, where present, associated software were identified for imputation of single cell RNA-Seq data using smoothing, listed in (Table 2). In the KNN-smoothing algorithm [21], transcript counts of nearest-neighbor cells are aggregated, and imputation is conducted via stepwise smoothing of variance-stabilized expression profiles. It is scalable and applicable to heterogeneous datasets. For large datasets with a higher number of similar cells, a modified version of the approach called KNN-smoothing 2 is implemented, in which slightly smoothed data from nearby cells are projected onto the first principal components, enabling the differentiation of heterogeneous data. DrImpute [22] estimates dropout events using a hot-deck matrix construction method. To process large datasets swiftly, it does not compute large cell-cell distance matrices but instead uses a sampling-based algorithm. The CellBench software (available at: https://github.com/shians/cellbench) [23] implements KNN-smoothing and DrImpute, with output delivered in tabular form. MAGIC (Markov Affinity-based Graph Imputation of Cells) [24] can impute complex and non-linear relationships among neighboring cells while retaining clusters and data structure, augmenting group interactions of cells and genes (2D as well as 3D interactions). This method is computationally efficient; however, it does not provide uncertainty measurement, and the projection of data onto a low-dimensional space causes it to lose variability across cells.
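The pooling step at the heart of KNN-smoothing can be sketched as follows (a simplified single-round version with toy data; the published algorithm grows k over several rounds):

```python
import numpy as np

def knn_smooth(x, k=2):
    """Pool each cell's raw counts with those of its k nearest
    neighbors, found on median-normalized, sqrt-transformed profiles
    (a simplified sketch of the kNN-smoothing idea)."""
    s = x.sum(axis=1, keepdims=True)
    norm = x / s * np.median(s)               # library-size normalize
    z = np.sqrt(norm)                         # variance stabilization
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        d = np.linalg.norm(z - z[i], axis=1)
        nn = np.argsort(d)[:k + 1]            # includes the cell itself
        out[i] = x[nn].sum(axis=0)            # pool raw counts
    return out

# Toy counts: three similar cells plus one distinct cell.
counts = np.array([
    [10.0, 0.0, 5.0],
    [12.0, 2.0, 4.0],
    [11.0, 3.0, 6.0],
    [0.0, 0.0, 30.0],
], dtype=float)
smoothed = knn_smooth(counts, k=1)
```

Unlike zero-filling schemes, the counts of all genes are aggregated, so detected genes are smoothed too; dropouts are rescued as a side effect of pooling.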
Moussa and Măndoiu [25] later introduced an iterative algorithm, LSImpute, building on previous algorithms. Instead of keeping a fixed number of nearest cells for imputation, the number is altered based on a least-similarity threshold. Clusters of cells are formed based on median and mean values of neighboring cells (n = 1-10 cells per round); clusters are then collapsed into corresponding centroids, which are added to the previously unaccounted cells. The procedure is repeated for each iteration using the Cosine similarity metric of Hornik et al. [26] or the Jaccard metric (available at http://cnv1.engr.uconn.edu:3838/LSImpute/), with a set high or low threshold (0.65-0.95); similar results have been obtained for both metrics. This demonstrates that median imputation takes a conservative approach and provides improved performance by minimizing dropout effects, decreasing data sparsity, and reducing spurious expression and overimputation. 2DImpute [27] is another workflow, which detects co-expression signatures by means of the unsupervised 'attractor metagene' algorithm [28], i.e., it does not require prior knowledge of the number of cell subpopulations, nor does it make the distributional assumptions of statistical methods for inferring expression. Spurious or dropout-suspected events are distinguished from true biological zeros using a Jaccard distance matrix, and imputation is done by leveraging gene-gene and cell-cell (inter- or intra-cell) correlations. scNPF [29] takes into account cell-cell and gene-gene interactions through a network-based propagation and fusion approach. Prior knowledge is combined with network topology (through random walk simulation); the initial expression signal is smoothed and diffused through the network, yielding a denser propagated matrix and better expression values.
Two modes of network propagation based on Random Walk with Restart (RWR) are utilized: the priori mode (using public molecular networks as a base and retaining the top 10% of interactions) and the context mode (utilizing the WGCNA package) [30]. Context mode relies solely on the available RNA-Seq data, with no a priori interaction network employed. Multiple networks are then fused to obtain a useful expression network based on shared and complementary network knowledge. netImpute [31] utilizes the RWR method to fine-tune the gene expression of a specified cell, using gene-gene, protein-protein and cell-cell interaction networks for imputing expression. Although this method has similar roots to other smoothing algorithms, its network selection and diffusion methods differ, leading to variation in performance. Application of a log transformation (with an added pseudo-count to avoid infinite values) minimizes the impact of very large values in the data. Another recently developed method, G2S3 [32], infers sparse signals and builds a gene graph network for imputation. Expression levels are smoothed using non-linear correlation and the graph is optimized; a random-walk transition matrix is then generated, and gene expression is imputed as the weighted average of expression levels over the gene network in the graph.
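The random walk with restart underlying scNPF, netImpute and G2S3 can be sketched on a toy gene network (illustrative only; the real methods build the graph from public interaction databases or from the data itself, and the edge weights here are our own toy choice):

```python
import numpy as np

def rwr(W, p0, restart=0.5, n_iter=50):
    """Random walk with restart on a weighted graph: iterate
    p <- (1 - r) * W_norm @ p + r * p0 toward a steady state."""
    Wn = W / W.sum(axis=0, keepdims=True)     # column-normalize
    p = p0.copy()
    for _ in range(n_iter):
        p = (1 - restart) * Wn @ p + restart * p0
    return p

# Toy 4-gene adjacency: genes 0-2 form a module, gene 3 hangs off gene 2.
W = np.array([
    [0, 1, 1, 0],
    [1, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=float)

expr = np.array([5.0, 0.0, 4.0, 0.0])         # one cell's observed expression
diffused = rwr(W, expr)
```

Signal diffuses from expressed genes to their network neighbors, so the zero of gene 1 (tightly connected to two expressed genes) is lifted more than that of the peripheral gene 3 — the "smoothing over a graph" behavior described above.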

ALGORITHMS EMPLOYING DATA RECONSTRUCTION APPROACH

The third algorithmic approach first identifies a latent-space representation of the cells, capturing linear associations (low-rank matrix-based methods) or non-linear relationships (deep learning methods). The expression matrix is then reconstructed from the low-rank or predicted latent space, so that the previously zero entries take on informative values. Seven low-rank matrix-based algorithms were identified (Table 3).

Low-Ranked Matrix-based Methods

Among these, Adaptive-thresholded Low-Rank Approximation (ALRA), by Linderman and Kluger [33], is a scalable process for recovery of single cell RNA-Seq expression. Selective imputation of technical zeros exploits the non-negativity and correlation structure of the expression matrix, which is approximated via singular value decomposition followed by thresholding. PBLR [34] employs incomplete or non-negative matrix factorization (NMF) to create a consensus matrix. Cell-cell distances are calculated using Spearman, Pearson and Cosine metrics; the matrices are transformed into affinity matrices, with 20 rounds of NMF applied to each matrix. The imputed matrices are merged into a consolidated one, which is then fed as input to hierarchical clustering. Optimization is done via the Alternating Direction Method of Multipliers (ADMM) algorithm [35, 36], and submatrices/sub-populations are inferred. mcImpute [37] is a matrix-completion-focused workflow that imputes dropouts from single cell expression values through iterative thresholding. Raw reads are normalized by library size and filtered for expression, a pseudo-count of one is added, and the log2-transformed expression matrix is fed to a nuclear-norm minimization algorithm. Expression is recovered through convex optimization, with no distributional assumptions; synthetic or planted dropouts in the expression matrix can be recovered through this approach, and it can handle heterogeneous data. scRMD [38] utilizes matrix decomposition for imputation. Minimal assumptions (i.e., low-rankness and sparsity), guided by random matrix theory, are made, and scRMD can resolve dropouts in expression matrices with more than 80% zero values. scHinter [39] is tailored for imputation on limited-sample-size data. A ranked ensemble distance technique (with consensus distance from Euclidean, Manhattan, Cosine, Pearson and Spearman metrics) and the synthetic minority oversampling technique (SMOTE) for random or hierarchical interpolation are utilized.
Iteration or multi-layer random interpolation improves the accuracy of results. CMF-Impute [40] uses collaborative matrix factorization for imputation. Distance (Euclidean, Chebyshev) and correlation (Pearson) matrices are used for finding cell-cell and gene-gene similarity; two feature matrices are obtained from matrix decomposition algorithms and their consistency is quantified. netNMF-sc [41] uses network as well as transcript count information to construct low-dimensional cell and gene matrices. A network-regularized NMF is combined with a graph Laplacian to treat the excess zeros in transcript count matrices with dropout rates above 60%. A value is imputed for every entry rather than only the null entries, and the method can incorporate information from any gene-gene interaction network instead of inferring parameters from a trained protein-protein interaction network. A low-dimensional transcript count matrix is obtained that can be used for grouping discrete cells or imputing gene clusters with zero and non-zero values. It has been observed that combining representative networks boosts the performance of the imputation algorithm.
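The low-rank intuition these methods share — true expression lies near a low-dimensional subspace spanned by cell-type programs — can be sketched with a plain truncated SVD (a simplification of ALRA, omitting its per-gene adaptive thresholding; the planted dropout positions are our own toy choice):

```python
import numpy as np

def low_rank_impute(x, rank=2):
    """Truncated-SVD reconstruction of the expression matrix; negative
    reconstructed values are clipped to zero. A simplified sketch of
    the ALRA idea, without its per-gene adaptive threshold."""
    u, s, vt = np.linalg.svd(x, full_matrices=False)
    approx = (u[:, :rank] * s[:rank]) @ vt[:rank]
    return np.clip(approx, 0.0, None)

# Two synthetic cell types (5 cells each) giving a rank-2 structure.
truth = np.vstack([np.tile([8.0, 1.0, 6.0, 1.0], (5, 1)),
                   np.tile([1.0, 7.0, 1.0, 9.0], (5, 1))])
observed = truth.copy()
for i, j in [(0, 1), (3, 0), (5, 2), (7, 3)]:   # planted dropouts
    observed[i, j] = 0.0

restored = low_rank_impute(observed, rank=2)
```

Because each cell is reconstructed from the shared rank-2 subspace, the planted zeros are pulled back toward their cell type's expression profile rather than staying at zero.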

Deep Learning Methods

In the case of deep-learning algorithms (e.g., those employing variational autoencoders), the imputed data (i.e., the reconstructed expression matrix) along with the predicted latent space can be used for further analyses, although it is typical to use only the imputed data for downstream processing. Nine algorithms employing deep learning methodology were identified from the literature (Table 4). Among these, AutoImpute [42] applies a state-of-the-art deep learning technique, imputing expression from the sparse gene expression matrix. A latent factor model based on overcomplete autoencoders (a type of neural network) is employed. The autoencoder comprises an encoder (which takes the input, with a sigmoid activation function) and a decoder (which outputs expression), with values regularized to avoid overfitting. A reduced loss and insensitivity to the peripheral gene expression distribution are characteristic of this method. The network is trained by gradient descent with minimal cost; iterations are carried out until convergent imputation values are obtained. scVI [43] is a scalable method that uses a probabilistic model to impute dropout expression. scVI pools information across similar cells and genes via stochastic optimization coupled with deep neural networks; the distribution underlying the observed expression is approximated and expression imputation is inferred. Although the initial objective of scVI was not imputation, gene filtering (~top 700 variable genes) also facilitated accurate imputation. It is computationally efficient but better suited to homogeneous datasets. DCA (deep count autoencoder) [44] is another workflow that uses neural networks to denoise single cell RNA-Seq data. DCA accounts for data sparsity, count distribution and overdispersion using a ZINB model. Non-linear gene-gene interactions are deduced, and the process scales linearly with the number of cells, with or without zero inflation.
DCA can handle heterogeneous datasets, but a limitation is that it is computationally intensive. DeepImpute [45] is a scalable method that uses sub-neural networks with correlated genes as the input layer. Specific target genes are not used as direct input, to reduce overfitting. A dense layer of 256 neurons forms the first hidden layer, followed by a dropout layer with a 20% dropout rate; the output layer is composed of the to-be-imputed target genes, handled in subsets (default N = 512). This method is computationally efficient. Deconvolution using saliency maps [46] is a method that uses autoencoder neural networks to model single cell RNA-Seq expression, detecting the expression signal with perturbed or zeroed-out input. Layers with dimensions 128, 64 and 128 were used for training the autoencoders, with two layers specified for encoding and two for decoding. Xavier initialization of the weights is followed by a Poisson negative log-likelihood loss function for training the neural network; the captured information is deconvoluted through saliency maps. SAUCIE (Sparse Autoencoder for Unsupervised Clustering, Imputation, and Embedding) [47] is a scalable, multi-layer technique that extracts structure from single-cell RNA-Seq data. An autoencoder neural network for unsupervised learning is employed; the latent layer assigns digital codes, clusters the input, and processes near-binary activation values using dimensionality reduction. Denoised data are regularized, and the outer layer yields encoded cluster identifications. scScope [48] is a scalable deep learning method with a self-correcting capability. It obtains imputations for the zero-valued entries of single cell RNA-Seq data, with iterations performed using multi-layered neural networks. Phenograph (https://github.com/jacoblevine/PhenoGraph) is used for subpopulation discovery. deepMc (deep Matrix completion) [49] is grounded in deep matrix factorization and deep dictionary learning methods; it makes no distributional assumption for gene expression. A gene detected with more than 3 reads (in at least 3 cells) is considered expressed. Inferred matrices are normalized, and the 1000 genes with the highest dispersion (coefficient of variation) are reserved for imputation, followed by log2 transformation of the expression data. LATE/TRANSLATE [50] is a parametric deep learning method for imputation. Arbitrary starting values of the parameters are used for training the autoencoder (the LATE algorithm), while an extension of the method, TRANSLATE (TRANSfer learning with LATE), utilizes a reference gene expression data set for training the autoencoder. Input is a sequencing read count matrix (cell IDs as row names; gene IDs as column names) in .csv, .tsv or .h5 format; output is in .hd5 format, with the same layout as the input. These algorithms are extremely scalable, processing more than a million cells in a few hours on a graphics processing unit.
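DeepImpute's central design choice — predict each target gene from other, correlated genes rather than from itself — can be illustrated with a linear stand-in (ordinary least squares in place of a sub-network; the two-gene setup, latent program and 20% dropout rate are our own toy assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_cells = 200

# Genes 0 and 1 are driven by the same latent program (hence correlated).
latent = rng.poisson(5.0, n_cells).astype(float)
g0 = latent * 2.0 + rng.normal(0.0, 0.1, n_cells)
g1 = latent * 3.0 + rng.normal(0.0, 0.1, n_cells)

dropout = rng.random(n_cells) < 0.2          # 20% technical zeros in gene 1
g1_obs = np.where(dropout, 0.0, g1)

# Fit "predict gene 1 from gene 0" on cells where gene 1 was detected,
# then fill the dropouts with the prediction (the gene itself is never
# an input to its own predictor, mirroring DeepImpute's design).
X = np.c_[g0[~dropout], np.ones((~dropout).sum())]
coef, *_ = np.linalg.lstsq(X, g1_obs[~dropout], rcond=None)
predicted = np.c_[g0, np.ones(n_cells)] @ coef
imputed_g1 = np.where(dropout, predicted, g1_obs)
```

Because the predictor never sees the target gene's own (partially zeroed) values, it cannot simply memorize them, which is the overfitting-reduction argument made for the sub-network architecture.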

PERSPECTIVE

For the majority of transcripts, single cell RNA-Seq data often contain a large fraction of zero counts due to dropout events. The term "dropout" is often used to denote observed zero values in single cell RNA-Seq data and typically conflates two different types of zeros: false and true. A false zero is due to methodological noise, i.e., the gene is expressed but undetectable by the sequencing technology because of insufficient depth and low capture rate. True dropouts are due to lack of gene expression [10]. The frequency of zero counts depends on the sequencing protocol used and on the depth of sequencing. For example, microfluidic single cell RNA-Seq technologies, like inDrops, Drop-Seq, and the 10x Genomics Chromium platform, have dropout rates around 90%, as they sequence thousands of cells with low coverage (1K-200K reads/cell). Cell-capture technologies, like Fluidigm C1, have 20-40% dropout rates, as they sequence hundreds of cells with high coverage (1-2 million reads/cell) [41]. These zero counts or dropout events increase the complexity of single cell RNA-Seq data and hinder accurate quantitative analysis. In single cell RNA-Seq studies, it is therefore crucial to impute the zero values in order to facilitate exact quantification of the transcriptome at the single-cell level [18]. Since the first single cell imputation method was presented in 2016, several methods/workflows have been developed for the purpose. In the text above, we provide a short overview of different approaches for the imputation of single cell RNA-Seq data, categorized into three categories. The first category includes imputation methods that use probabilistic models to directly represent sparsity. Biological and technical zeros may not always be distinguished, and usually only the technical ones are accounted for in the imputation function. Such methods produce fewer false positives, although this depends on data homogeneity or heterogeneity.
The second category includes methods that smooth or adjust zero and non-zero values by averaging or diffusing expression values. This approach is useful for reducing noise, but many false positives may be generated. It is interesting to note that first-category methods may outperform second-category algorithms on datasets having genes with small effect sizes [51]. The third category entails data reconstruction, either through a low-rank matrix-based method or a deep-learning neural-network-based approach; low-rank matrix-based methods capture linear relationships, while deep learning methods process non-linear ones. A denser information matrix is obtained for downstream processing. Although sparsity and scalability have been resolved by numerous methods, and benchmarking has revealed which algorithms suit heterogeneous and homogeneous datasets, discrete expression inference has been the hallmark of all these algorithms; trajectory-based interpretation of imputation is suggested for the future. Most methods are computationally efficient, scalable and applicable to heterogeneous datasets. The circularity issue has been addressed in several algorithms by using random input instead of specified data values, and overimputation and overfitting have also been addressed by several methods, with better results. Users can implement a statistical method of choice, depending on their requirements. We also suggest that statistical tests applied to imputed data be treated with care; filtering by effect size, as well as testing with at least one algorithm from each category, should be done to eliminate errors and reduce false positives. Benchmarking of all these methods on small and large datasets of homogeneous and heterogeneous nature should also be attempted to enable better comparison.
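The false-versus-true zero distinction discussed above can be illustrated with a small simulation (entirely synthetic; a simple Poisson zero-probability check of our own, not a published method): genes whose observed zero fraction far exceeds the Poisson expectation are candidates for technical dropout.

```python
import numpy as np

rng = np.random.default_rng(2)
n_cells = 500
mu = np.array([0.05, 2.0, 5.0])          # per-gene true mean expression
counts = rng.poisson(mu, (n_cells, 3)).astype(float)

# Inject ~40% technical dropouts into gene 2 only.
drop = rng.random(n_cells) < 0.4
counts[drop, 2] = 0.0

expected_zero = np.exp(-mu)              # Poisson P(count == 0)
observed_zero = (counts == 0).mean(axis=0)
excess = observed_zero - expected_zero   # large excess flags technical zeros
```

Gene 0's many zeros are fully explained by its low mean (true zeros), while gene 2's zero fraction vastly exceeds its Poisson expectation, flagging technical dropout — the kind of evidence model-based methods formalize.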
Table 1

Features of methods employing model-based approach.

Serial No. | Software/Method | OS | Interface | Programming Language | Link
1 | BISCUIT | Windows, Linux | Command line | R | https://github.com/sandhya212/BISCUIT_SingleCell_IMM_ICML_2016
2 | SAVER | Windows, Linux | Command line | R | https://github.com/mohuangx/SAVER
3 | SAVER-X | Windows, Linux, web app | Command line, web application | R | https://github.com/jingshuw/SAVERX, https://singlecell.wharton.upenn.edu/saver-x/
4 | scImpute | Windows, Linux, GRANATUM web server | Command line, web application | R (Shiny for web server) | https://github.com/Vivianstats/scImpute, http://garmiregroup.org/granatum/app
5 | scRecover | Windows, Linux | Command line | R | https://miaozhun.github.io/scRecover/
6 | scUnif | Windows, Linux | Command line | Python, R | https://github.com/lingxuez/URSM
7 | VIPER | Windows, Linux | Command line | R | https://github.com/ChenMengjie/VIPER
8 | scGAIN | Windows, Linux | Command line | Python | https://github.com/mgunady/scGAIN
9 | bayNorm | Windows, Linux | Command line | R | https://github.com/WT215/bayNorm
Table 2

Features of methods employing data smoothing approach.

Serial No. | Algorithm/Method | Interface | OS | Programming Language | Link
1 | DrImpute | Command line, CellBench | Windows/Linux | R | https://github.com/ikwak2/DrImpute
2 | MAGIC (Markov Affinity-based Graph Imputation of Cells) | Command line | Windows/Linux | Python, Matlab, R | https://github.com/KrishnaswamyLab/MAGIC
3 | KNN-smoothing | Command line, CellBench | Windows/Linux | Python, Matlab, R | https://github.com/yanailab/knn-smoothing
4 | LSImpute | Web application | Web application | JavaScript, Shiny | http://cnv1.engr.uconn.edu:3838/LSImpute
5 | 2DImpute | Command line | Windows/Linux | R | https://github.com/zky0708/2Dimpute
6 | scNPF | Command line | Windows/Linux | R | https://github.com/BMILAB/scNPF
7 | netImpute | Command line | Linux | Python | http://www.cs.utsa.edu/~software/netImpute/
8 | G2S3 | Command line | Windows, Linux | Matlab, R | https://github.com/ZWang-Lab/G2S3
Table 3

Features of methods employing low-ranked matrix-based approach.

Serial No. | Algorithm/Method | OS | Interface | Programming Language | Link
1 | ALRA | Windows, Linux, Seurat web server | Command line; implemented in Seurat web application | R | https://github.com/nasqar/SeuratWizard/, http://nasqar.abudhabi.nyu.edu/SeuratWizard
2 | mcImpute | Windows, Linux | Command line | Matlab | https://github.com/aanchalMongia/McImpute_scRNAseq
3 | PBLR | Windows, Linux | Command line | Matlab | http://page.amss.ac.cn/shihua.zhang/software.html
4 | scRMD | Windows, Linux | Command line | R | https://github.com/XiDsLab/scRMD
5 | scHinter | Windows, Linux | Command line | Matlab | https://github.com/BMILAB/scHinter
6 | CMF-Impute | Windows, Linux | Command line | Matlab | https://github.com/xujunlin123/CMFImpute
7 | netNMF-sc | Windows, Linux | Command line | Python | https://github.com/raphael-group/netNMF-sc
Table 4

Features of methods employing deep learning approach.

Serial No. | Algorithm/Method | OS | Programming Language | Link
1 | AutoImpute | Linux | Python, R | https://github.com/divyanshu-talwar/AutoImpute
2 | scVI | Linux | Python | https://github.com/YosefLab/scVI
3 | DCA | Linux | Python | https://github.com/theislab/dca
4 | DeepImpute | Linux | Python | https://github.com/lanagarmire/DeepImpute
5 | SAUCIE | Linux | Python | https://github.com/KrishnaswamyLab/SAUCIE/
6 | scScope | Linux | Python | https://github.com/AltschulerWu-Lab/scScope
7 | deepMc | Linux, Windows | Matlab | https://drive.google.com/drive/folders/1TMD8sjPXlpe5V-3EAi38aFHQoy1gXd6h
8 | Deconvolution through saliency maps | Linux | Python | https://gitlab.com/cphgeno/expression_saliency
9 | LATE/TRANSLATE | Linux | Python | https://github.com/audreyqyfu/LATE

1.  EnImpute: imputing dropout events in single-cell RNA-sequencing data via ensemble learning.

Authors:  Xiao-Fei Zhang; Le Ou-Yang; Shuo Yang; Xing-Ming Zhao; Xiaohua Hu; Hong Yan
Journal:  Bioinformatics       Date:  2019-11-01       Impact factor: 6.937

2.  The Human Cell Atlas: from vision to reality.

Authors:  Orit Rozenblatt-Rosen; Michael J T Stubbington; Aviv Regev; Sarah A Teichmann
Journal:  Nature       Date:  2017-10-18       Impact factor: 49.962

3.  scRMD: Imputation for single cell RNA-seq data via robust matrix decomposition.

Authors:  Chong Chen; Changjing Wu; Linjie Wu; Xiaochen Wang; Minghua Deng; Ruibin Xi
Journal:  Bioinformatics       Date:  2020-03-02       Impact factor: 6.937

4.  Dirichlet Process Mixture Model for Correcting Technical Variation in Single-Cell Gene Expression Data.

Authors:  Sandhya Prabhakaran; Elham Azizi; Ambrose Carr; Dana Pe'er
Journal:  JMLR Workshop Conf Proc       Date:  2016

5.  A UNIFIED STATISTICAL FRAMEWORK FOR SINGLE CELL AND BULK RNA SEQUENCING DATA.

Authors:  Lingxue Zhu; Jing Lei; Bernie Devlin; Kathryn Roeder
Journal:  Ann Appl Stat       Date:  2018-03-09       Impact factor: 2.083

6.  Recovering Gene Interactions from Single-Cell Data Using Data Diffusion.

Authors:  David van Dijk; Roshan Sharma; Juozas Nainys; Kristina Yim; Pooja Kathail; Ambrose J Carr; Cassandra Burdziak; Kevin R Moon; Christine L Chaffer; Diwakar Pattabiraman; Brian Bierie; Linas Mazutis; Guy Wolf; Smita Krishnaswamy; Dana Pe'er
Journal:  Cell       Date:  2018-06-28       Impact factor: 41.582

7.  VIPER: variability-preserving imputation for accurate gene expression recovery in single-cell RNA sequencing studies.

Authors:  Mengjie Chen; Xiang Zhou
Journal:  Genome Biol       Date:  2018-11-12       Impact factor: 13.583

8.  WGCNA: an R package for weighted correlation network analysis.

Authors:  Peter Langfelder; Steve Horvath
Journal:  BMC Bioinformatics       Date:  2008-12-29       Impact factor: 3.169

9.  DeepImpute: an accurate, fast, and scalable deep neural network method to impute single-cell RNA-seq data.

Authors:  Cédric Arisdakessian; Olivier Poirion; Breck Yunits; Xun Zhu; Lana X Garmire
Journal:  Genome Biol       Date:  2019-10-18       Impact factor: 13.583

10.  bayNorm: Bayesian gene expression recovery, imputation and normalization for single-cell RNA-sequencing data.

Authors:  Wenhao Tang; François Bertaux; Philipp Thomas; Claire Stefanelli; Malika Saint; Samuel Marguerat; Vahid Shahrezaei
Journal:  Bioinformatics       Date:  2020-02-15       Impact factor: 6.937

