Literature DB >> 24565287

Sparse representation approaches for the classification of high-dimensional biological data.

Yifeng Li, Alioune Ngom.   

Abstract

BACKGROUND: High-throughput genomic and proteomic data have important applications in medicine, including the prevention, diagnosis, treatment, and prognosis of diseases, and in molecular biology, for example pathway identification. Many such applications can be formulated as classification and dimension-reduction problems in machine learning. Accurately classifying such data is computationally challenging due to, among other issues, high dimensionality, noise, and redundancy. The principle of sparse representation has been applied to the analysis of high-dimensional biological data within clustering, classification, and dimension-reduction frameworks. However, the existing sparse representation methods are inefficient, their kernel extensions are not well addressed, and sparse representation techniques have not yet been comprehensively studied in bioinformatics.
RESULTS: In this paper, a Bayesian treatment of sparse representation is presented. Various sparse coding and dictionary learning models are discussed. We propose fast parallel active-set optimization algorithms for each model. Kernel versions are devised based on their dimension-free property. These models are applied to the classification of high-dimensional biological data.
CONCLUSIONS: In our experiments, we compared our models with other methods with respect to both accuracy and computing time. The results show that our models achieve satisfactory accuracy and are very efficient.

Year:  2013        PMID: 24565287      PMCID: PMC3854665          DOI: 10.1186/1752-0509-7-S4-S6

Source DB:  PubMed          Journal:  BMC Syst Biol        ISSN: 1752-0509


Introduction

Studies in biology and medicine have been revolutionized by the invention of many high-throughput sensing techniques, which allow molecular phenomena to be probed at high resolution. By virtue of such techniques, we are able to conduct systematic genome-wide analyses. In the last decade, many important results have been obtained by analyzing high-throughput data such as microarray gene expression profiles, gene copy number profiles, proteomic mass spectrometry data, and next-generation sequencing data. On the one hand, biologists are enjoying the richness of their data; on the other hand, bioinformaticians are being challenged by the issues raised by high-dimensional data. Many of these analyses can be formulated as machine learning tasks. First of all, we have to face the curse of dimensionality, which means that many machine learning models can overfit and therefore generalize poorly. Second, if the learning of a model is sensitive to the dimensionality, the learning procedure can be extremely slow. Third, many of the data are very noisy, so the robustness of a model is essential. Fourth, high-throughput data exhibit large variability and redundancy, which makes mining useful knowledge difficult. Moreover, the observed data usually do not tell us the key points of the story; we need to discover and interpret the latent factors that drive the observed data. From the machine learning viewpoint, many of these analyses are classification problems. Therefore, in this paper we focus on classification techniques for high-dimensional biological data. The machine learning techniques addressing the above challenges can be divided into two classes. The first aims to classify the high-dimensional data directly while maintaining good generalization and efficient optimization.
The most popular method in this class is the regularized basis-expanded linear model, one example of which is the state-of-the-art support vector machine (SVM) [1]. SVM can be kernelized and its results are theoretically sound. By combining different regularization terms and various loss functions, we can obtain many variants of such linear models [2]. In addition to classification, some of these models can be applied to regression and feature (biomarker) identification. However, most learned linear models are not interpretable, while interpretability is usually a requirement in biological data analysis. Moreover, linear models cannot be extended naturally to multi-class data, while in bioinformatics a class may be composed of many subtypes. Another technique for tackling these challenges is dimension reduction, which includes feature extraction and feature selection. Principal component analysis (PCA) [3] is the oldest feature extraction method and is widely used in processing high-dimensional biological data. The basis vectors produced by PCA are orthogonal, but many patterns in bioinformatics are not orthogonal at all. Classic factor analysis (FA) [4] also imposes orthogonality constraints on the basis vectors, but its Bayesian treatment does not necessarily produce orthogonal basis vectors; Bayesian factor analysis will be introduced in the next section. Sparse representation (SR) [5] is a parsimony principle stating that a sample can be approximated by a sparse linear combination of basis vectors. SR can learn non-orthogonal basis vectors, which may even be allowed to be redundant. SR highlights parsimony in representation learning [6]. This simple principle has many strengths that encourage us to explore its usefulness in bioinformatics. First, it is very robust to redundancy, because it selects only a few of the basis vectors. Second, it is very robust to noise [7].
Furthermore, its basis vectors are non-orthogonal and are sometimes interpretable due to their sparseness [8]. SR involves two techniques. First, given a basis matrix, learning the sparse coefficients of a new sample is called sparse coding. Second, given training data, learning the basis vectors is called dictionary learning. Since dictionary learning is, in essence, a sparse matrix factorization technique, non-negative matrix factorization (NMF) [9] can be viewed as a special case of SR. To aid understanding, we give a formal mathematical formulation of sparse representation from a Bayesian perspective in the next section. This paper is a significant extension of our preliminary work presented in [10], where sparse representation was treated from regularization and optimization perspectives. In this paper, we formulate sparse representation from a Bayesian viewpoint and show that different prior distributions yield various sparse coding and dictionary learning models. Although there is some existing work, for example [11], that applies sparse coding to the classification of biological data, to the best of our knowledge this is the first time that sparse representation has been intensively and systematically studied in bioinformatics. This study makes the following contributions: 1. We give a Bayesian treatment of sparse representation, which is very helpful for understanding and designing sparse representation models. 2. Kernel sparse coding techniques are proposed for the direct classification of high-dimensional biological data. 3. Fast parallel active-set algorithms are devised for sparse coding. 4. An efficient generic framework of kernel dictionary learning for feature extraction is proposed. 5. We show that the optimization and decision making in sparse representation are dimension-free. The rest of this paper is organized as follows. We first introduce factor analysis and sparse representation from a Bayesian perspective.
Classification based on sparse coding is then introduced, and active-set methods are proposed for the optimization; their kernel extensions are given as well. Then various dictionary learning models are presented, and a generic optimization framework is devised to optimize them. In the same section, dictionary-learning-based classification and its kernel extension are proposed. We then describe our computational experiments on two high-dimensional data sets. Finally, conclusions are drawn and future work is outlined.

Related work from a Bayesian viewpoint

Both (sparse) factor analysis and sparse representation models can be used as dimension reduction techniques. Due to their intuitive similarity, it is worthwhile to compare their definitions. In this section, we briefly survey sparse factor analysis and sparse representation from a Bayesian viewpoint; this introduction to sparse representation helps in understanding the subsequent sections. Hereafter, we use the following notation unless otherwise stated. Suppose the training data are D ∈ R^{m×n} (m is the number of features and n is the number of training instances (samples or observations)), and the class information is in the n-dimensional vector c. Suppose the p new instances are collected in B ∈ R^{m×p}.

Sparse Bayesian (latent) factor analysis

The advantages of the Bayesian (latent) factor analysis model [12] over the likelihood-based (latent) factor analysis model are that 1. knowledge about the model parameters from experts and previous investigations can be incorporated through the prior; and 2. the values of the parameters are refined using the current training observations. The Bayesian factor analysis model [12] can be formulated as y = μ + Λf + ε, where y ∈ R^m is an observed multivariate variable, μ is the population mean, Λ ∈ R^{m×k} is the latent factor loading matrix, f ∈ R^k is the latent factor score (k ≪ m), and ε is an idiosyncratic error term. This model is restricted by the following constraints or assumptions: 1. The error term is normally distributed with mean 0 and covariance Φ: ε ~ N(0, Φ), where Φ is diagonal on average. 2. The factor score vector is also normally distributed with mean 0 and identity covariance: f ~ N(0, I); and each factor loading vector is normally distributed: λ_i ~ N(0, Δ), where Δ is diagonal. Alternatively, the factor loading vectors can be normally distributed with mean 0 and identity covariance Δ = I, and the factor score vector normally distributed with mean 0 and diagonal covariance. The benefit of an identity covariance on either f or λ_i is that arbitrary scale interchange between Λ and f due to scale invariance is avoided. 3. ε is independent of f. For n training instances Y = [y_1, ⋯, y_n] with factor scores F = [f_1, ⋯, f_n], the likelihood is p(Y|μ, Λ, F, Φ) ∝ |Φ|^{-n/2} exp(−(1/2) tr(Φ^{−1}EE^T)), where E = Y − μ1^T − ΛF and tr(·) is the trace of a square matrix. Variants of Bayesian factor analysis models differ in the decomposition of the joint prior; the simplest may be p(μ, Λ, F) = p(μ)p(Λ)p(F). Suppose k is fixed a priori. The posterior then becomes p(μ, Λ, F, Φ|Y) ∝ p(Y|μ, Λ, F, Φ)p(μ)p(Λ)p(F)p(Φ). The model parameters are usually estimated via Markov chain Monte Carlo (MCMC) techniques. Sparse Bayesian factor analysis imposes a sparsity-inducing distribution, instead of a Gaussian, over the factor loading matrix. In [13], the following mixture prior is proposed: p(λ_ij) = (1 − π_ij)δ_0(λ_ij) + π_ij N(λ_ij | 0, τ_j), where π_ij is the probability of a nonzero λ_ij and δ_0(·) is the Dirac delta function at 0.
Meanwhile, Λ is constrained using the lower-triangular method. The Bayesian factor regression model (BFRM) combines Bayesian factor analysis and Bayesian regression [13]. It has been applied as a variable selection method in oncogenic pathway studies [4].
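To make the generative assumptions above concrete, the following numpy sketch simulates the factor-analysis model y = μ + Λf + ε under the stated constraints (identity covariance on f, diagonal Φ) and checks that the sample covariance approaches the implied model covariance ΛΛ^T + Φ; all symbols and sizes here are illustrative choices, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, n = 10, 3, 50_000            # features, factors (k << m), samples

mu = rng.normal(size=m)            # population mean
L = rng.normal(size=(m, k))        # factor loading matrix Lambda
phi = rng.uniform(0.1, 0.5, m)     # diagonal of the error covariance Phi

F = rng.normal(size=(k, n))                           # factor scores ~ N(0, I)
E = rng.normal(size=(m, n)) * np.sqrt(phi)[:, None]   # errors ~ N(0, Phi)
Y = mu[:, None] + L @ F + E        # observed data, one column per instance

# Under these assumptions Cov(y) = Lambda Lambda^T + Phi, which the sample
# covariance should approach for large n.
model_cov = L @ L.T + np.diag(phi)
err = np.abs(np.cov(Y) - model_cov).max()
```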

Sparse representation

Sparse representation (SR) is the principle that a signal can be approximated by a sparse linear combination of dictionary atoms [14]. The SR model can be formulated as b = Ax + e, where A ∈ R^{m×k} is called the dictionary, each column a_i is a dictionary atom, x is a sparse coefficient vector, and e is an error term. A, x, and k are the model parameters. The SR model has the following constraints: 1. The error term is Gaussian distributed with mean zero and isotropic covariance, that is e ~ N(0, Φ) with Φ = φI, where φ is a positive scalar. 2. The dictionary atoms are usually Gaussian distributed, that is a_i ~ N(0, Δ) with Δ = I. The coefficient vector x should follow a sparsity-inducing distribution. 3. e is independent of x. Comparing Bayesian factor analysis and Bayesian sparse representation, we find that the main difference between them is that the former applies a sparsity-inducing distribution over the factor loading matrix, while the latter uses a sparsity-inducing distribution on the factor score (coefficient) vector. Sparse representation involves sparse coding and dictionary learning. Given a new signal b and a dictionary A, learning the sparse coefficient vector x is termed sparse coding. It can be statistically formulated as maximizing the posterior p(x|b, A) ∝ p(b|A, x)p(x). Suppose the coefficient vector has a Laplacian prior with zero mean and isotropic variance, p(x) ∝ exp(−λ‖x‖_1), and the likelihood is Gaussian, p(b|A, x) ∝ exp(−(1/(2φ))‖b − Ax‖_2^2). Taking the logarithm of the posterior (with a constant term c), we see that maximizing the posterior is equivalent to minimizing min_x (1/2)‖b − Ax‖_2^2 + λ‖x‖_1, where λ absorbs the prior and noise scales. Hereafter we call Equation (10) the l1-least-squares (l1LS) sparse coding model. It is known as the l1-regularized regression model in regularization theory, and it coincides with the well-known LASSO model [15], which is in fact a maximum a posteriori (MAP) estimate. Given training data, learning (or estimating) the dictionary A, the coefficient vectors, and the number of dictionary atoms k is called dictionary learning.
Suppose k is given a priori, and consider a Laplacian prior over the coefficient vectors Y = [y_1, ⋯, y_n] and a Gaussian prior over the dictionary atoms. We then have the joint prior p(A, Y) = p(A)p(Y), the likelihood p(D|A, Y), and the posterior p(A, Y|D) ∝ p(D|A, Y)p(A)p(Y). Ignoring the normalization term (that is, the marginal likelihood) and taking the logarithm, the MAP estimate of the dictionary learning task is min_{A,Y} (1/2)‖D − AY‖_F^2 + (α/2)‖A‖_F^2 + λΣ_i‖y_i‖_1, where α and λ collect the noise and prior scales. Equation (16) is known as a dictionary learning model based on l1-regularized least squares. In the literature, the kernel extension of sparse representation has been realized in two ways. The first is to use an empirical kernel in sparse coding, as in [16], where dictionary learning is not considered. The second is the approach proposed in [17], where dictionary learning is involved; however, the dictionary atoms are represented and updated explicitly. This can be intractable, as the dimensionality of dictionary atoms in the feature space is very high or even infinite. In later sections, we give our kernel extensions of sparse coding and dictionary learning, in which any kernel function can be used and the dictionary is updated efficiently.
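As a concrete illustration of the l1LS/LASSO sparse coding problem of Equation (10), the following sketch solves min_x (1/2)‖b − Ax‖² + λ‖x‖₁ with iterative soft-thresholding (ISTA), a simple proximal stand-in for the active-set solver developed later; the symbols A, b, x and all sizes are illustrative.

```python
import numpy as np

def soft_threshold(z, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def l1ls_ista(A, b, lam, n_iter=500):
    """Minimize 0.5*||b - A x||_2^2 + lam*||x||_1 by iterative
    soft-thresholding (ISTA), a stand-in for an active-set solver."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x

rng = np.random.default_rng(1)
A = rng.normal(size=(30, 10))            # dictionary of 10 atoms
x_true = np.zeros(10)
x_true[[2, 7]] = [1.5, -2.0]             # a sparse code with mixed signs
b = A @ x_true                           # noiseless signal
x_hat = l1ls_ista(A, b, lam=0.1)
support = np.flatnonzero(np.abs(x_hat) > 0.1)
```

Note that the recovered code can carry either sign, matching the two-sided nature of the l1LS model discussed in the next section.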

Sparse coding methods

Since the l1LS sparse coding model (Equation (10)) is a two-sided symmetric model, a coefficient can be zero, positive, or negative [18]. In bioinformatics, l1LS sparse coding has been applied to the classification of microarray gene expression data in [11]. The main idea is as follows. First, the training instances are collected in a dictionary. Then, a new instance is regressed by l1LS sparse coding, yielding its sparse coefficient vector. Next, the regression residual of this instance with respect to each class is computed, and finally the instance is assigned to the class with the minimum residual. We generalize this methodology so that the sparse code can be obtained under many other regularizations and constraints. For example, we can pool all training instances in a dictionary (hence k = n and A = D), and then learn a non-negative coefficient vector for a new instance, which is formulated as the one-sided model min_{x≥0} (1/2)‖b − Ax‖_2^2. We call this model non-negative least squares (NNLS) sparse coding. NNLS has two advantages over l1LS. First, a non-negative coefficient vector is, under some circumstances, more easily interpretable than a coefficient vector of mixed signs. Second, NNLS is a non-parametric model. From a Bayesian viewpoint, Equation (17) is equivalent to MAP estimation with the same Gaussian error as in Equation (6), but with the following discrete prior: the elements of x are independent, and for each element the probability of x_i = 0 is 0.5 and the probability of x_i > 0 is 0.5 as well (that is, the probabilities of x_i being zero or positive are equal, and the probability of its being negative is zero). Inspired by many sparse NMFs, l1-regularization can additionally be used to produce sparser coefficients than NNLS. The combination of l1-regularization and non-negativity results in the l1NNLS sparse coding model: min_{x≥0} (1/2)‖b − Ax‖_2^2 + λ‖x‖_1. We call Equation (19) the l1NNLS model.
It is more flexible than NNLS, because it can produce sparser coefficients as controlled by λ. This model in fact uses the corresponding non-negative sparsity-inducing prior. We now give the generalized sparse-coding-based classification approach in detail; the method is depicted in Algorithm 1, and the optimization algorithms required in its first step are given later. The NN rule mentioned in Algorithm 1 is inspired by the usual way of using NMF as a clustering method. Suppose there are C classes with labels 1, ⋯, C. For a given new instance b, its class is that of the training instance attaining max_i x_i: the rule selects the maximum coefficient in the coefficient vector and assigns the class label of the corresponding training instance to the new instance. Essentially, this rule is equivalent to applying a nearest neighbor (NN) classifier in the column space of the training instances, in which the representations of the training instances form an identity matrix. The NN rule can be further generalized to a weighted K-NN rule. Suppose a K-length vector accommodates the K largest coefficients of x, together with the corresponding K class labels. The class label of b is then designated as the class whose members among the K largest coefficients have the largest sum (defined analogously to Equation (21)). The maximum value of K is k, the number of dictionary atoms; in that case K is in fact the number of all non-zeros in x. Alternatively, the nearest subspace (NS) rule, proposed in [19], can be used to interpret the sparse coding. The NS rule takes advantage of the discriminative property of the sparse coefficients: it assigns b to the class with the minimum regression residual. Mathematically, it is expressed as j = arg min_{1≤i≤C} r_i(b), where r_i(b) = ‖b − Aδ_i(x)‖_2 is the regression residual corresponding to the i-th class, and δ_i(x) (defined analogously to Equation (21)) keeps the coefficients associated with class i and zeroes out the rest. Algorithm 1 Sparse-coding-based classification Input: D: the n training instances, c: their class labels, B: the p new instances Output: predicted class labels of the p new instances 1. Normalize each instance to have unit l2-norm. 2.
Learn the sparse coefficient matrix of the new instances by solving Equation (10), (17), or (19). 3. Use a sparse interpreter, e.g. the NN, K-NN, or NS rule, to predict the class labels of the new instances.
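The three decision rules of Algorithm 1 can be sketched as follows for an already-computed sparse code x of a new instance b; the toy dictionary, labels, and code below are illustrative, not data from the paper.

```python
import numpy as np

def nn_rule(x, labels):
    """NN rule: the class of the training instance with the largest coefficient."""
    return labels[int(np.argmax(x))]

def weighted_knn_rule(x, labels, K):
    """Weighted K-NN rule: sum the K largest coefficients per class."""
    top = np.argsort(x)[-K:]                 # indices of the K largest entries
    classes = np.unique(labels)
    scores = [x[top][labels[top] == c].sum() for c in classes]
    return classes[int(np.argmax(scores))]

def ns_rule(b, A, x, labels):
    """NS rule: keep each class's coefficients only, pick the smallest residual."""
    classes = np.unique(labels)
    resid = [np.linalg.norm(b - A @ np.where(labels == c, x, 0.0))
             for c in classes]
    return classes[int(np.argmin(resid))]

# Toy dictionary: 4 pooled training instances (columns), two per class.
A = np.eye(4)
labels = np.array([0, 0, 1, 1])
x = np.array([0.1, 0.2, 0.9, 0.3])           # a sparse code for b
b = A @ x
```

All three rules agree on this toy code (class 1), but they can differ on real data: the NN and K-NN rules use only x, while the NS rule also needs A to form residuals.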

Optimization

Active-set algorithm for l1 LS

The problem in Equation (10) is equivalent to the following non-smooth unconstrained quadratic program (QP): min_x (1/2)x^T H x + g^T x + λ‖x‖_1, where H = A^T A and g = −A^T b. The l1LS problem is thus an l1QP problem. It can be converted to the following smooth constrained QP: min_{x,u} (1/2)x^T H x + g^T x + λ1^T u subject to −u ≤ x ≤ u, where u is an auxiliary vector variable that squeezes x towards zero. It can further be written in standard form over z = [x; u], with a constraint matrix built from identity matrices. Obviously the Hessian in this problem is positive semi-definite, as we always assume H is positive semi-definite in this paper. A general active-set algorithm for constrained QP is provided in [20]; its main idea is that a working set is updated iteratively until it matches the true active set. In each iteration, a new solution z_t to the QP constrained only by the current working set is obtained. If the update step Δz_t = z_t − z is zero, the Lagrange multipliers of the currently active inequalities are computed. If all multipliers corresponding to the working set are non-negative, the algorithm terminates with an optimal solution; otherwise, an active inequality is dropped from the working set. If the update step Δz_t is nonzero, an update length α is computed using the inequalities of the current passive set, and the solution is updated as z = z + αΔz_t. If α < 1, a blocking inequality is added to the working set. To solve our specific problem in Equation (24) efficiently, we have to modify the general method, because i) our constraints are sparse: the i-th constraint couples x_i with u_i (if i ≤ k) or −x_{i−k} with u_{i−k} (if i > k); and ii) when some u_i is not constrained in the current working set, the QP constrained by the working set is unbounded, so it is unnecessary to solve it to obtain z_t. In the latter situation Δz_t is unbounded, which can cause numerical problems: solving the unbounded problem is time-consuming if the algorithm is unaware of the unboundedness, and if the iterate contains positive or negative ∞ the algorithm may crash.
We propose the revised active-set algorithm for l1LS sparse coding in Algorithm 2. To address the potential issues above, we make four modifications. First, we require that the working set be complete; that is, all variables in x must be constrained when computing the current update step (and therefore all variables in u are also constrained, due to the specific structure of the constraints in our problem). For example, if k = 3, the working set {1, 2, 6} is complete, as all variables x1, x2, x3, u1, u2, u3 are constrained, while {1, 2, 4} is not complete, as u3 (and x3) is not constrained. Second, the update step for variables constrained exactly once in the working set is computed by solving the equality-constrained QP, while variables constrained twice are directly set to zero. In the example above, if the current working set is {1, 2, 4, 6}, then x2, x3, u2, u3 are computed by the constrained QP, while x1 and u1 are zero, because the only value satisfying the constraint −u1 = x1 = u1 is x1 = u1 = 0. Third, in this example we do not need to solve the equality-constrained QP with four variables; in fact we only need two variables, by setting u2 = −x2 and u3 = x3. Fourth, once a constraint is dropped from the working set and the set becomes incomplete, other inequalities must immediately be added until it is complete again. In the initialization of Algorithm 2, x can alternatively be initialized with zeros, which is much more efficient than x = H^{−1}(−g) for large-scale sparse coding and very sparse problems.

Active-set algorithms for NNLS and l1NNLS

Both the NNLS problem in Equation (17) and the l1NNLS problem in Equation (19) can easily be reformulated as the following non-negative QP (NNQP) problem: min_{x≥0} (1/2)x^T H x + g^T x, where H = A^T A, and g = −A^T b for NNLS or g = −A^T b + λ1 for l1NNLS. Algorithm 2 Active-set l1QP algorithm Input: Hessian H, vector g, scalar λ Output: vector x which minimizes Equation (24). Initialize with a feasible solution and a complete working set: x = H^{−1}(−g), u = |x|; initialize the working set; initialize the inactive (passive) set. while true do: compute the update step: let V be the indices of variables constrained once by the working set, set the variables constrained twice to zero, and solve the equality-constrained QP over V with signs e_i = 1 or −1 according to the structure of the constraints; if the step is zero then obtain the Lagrange multipliers by solving the KKT system with the constraint matrix of Equation (24); if all multipliers on the working set are non-negative then terminate successfully, else drop the constraint with the most negative multiplier from the working set and add other passive constraints to it until it is complete; end if; if the step is nonzero then update [x; u] by a step length α along it; if α < 1 then move the blocking constraint attaining α from the passive set to the working set; end if; end if; end while. Now we present the active-set algorithm for NNQP. This problem is easier to solve than l1QP, as the Hessian of NNQP is half the size of that of l1QP and the constraints are much simpler. Our algorithm is obtained by generalizing the well-known active-set algorithm for NNLS of [21]. The complete algorithm is given in Algorithm 3. The warm-start point is initialized by the solution to the unconstrained QP; as in Algorithm 2, x can alternatively be initialized with zeros. The algorithm keeps adding and dropping constraints in the working set until the true active set is found. Algorithm 3 Active-set NNQP algorithm Input: Hessian H, vector g Output: vector x which minimizes Equation (26). x = [H^{−1}(−g)]_+, where ([y]_+)_i = y_i if y_i ≥ 0 and 0 otherwise; initialize the active set R; initialize the inactive (passive) set P; µ = Hx + g (the Lagrange multipliers). while R ≠ ∅ and min(µ_R) < −ε do (ε is a small positive numerical tolerance): j = arg min_i µ_i (the most negative multiplier), move j from R to P, and solve the unconstrained QP over P to obtain t; while min(t_P) < 0 do: compute the step length α from the one or several indices that first reach zero, move them from P back to R, and set x = x + α(t − x); re-solve for t; end while; x = t; µ = Hx + g; end while.
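For the plain NNLS case, SciPy's `scipy.optimize.nnls` implements the Lawson–Hanson active-set method that Algorithm 3 generalizes, so it serves as a convenient stand-in for coding one new instance against a pooled dictionary; the dictionary, sizes, and code below are illustrative.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(2)
A = rng.uniform(size=(20, 8))      # 8 training instances as dictionary atoms
x_true = np.zeros(8)
x_true[[1, 4]] = [0.7, 1.2]        # a non-negative sparse code
b = A @ x_true                     # noiseless new instance

# Active-set (Lawson-Hanson) solution and its residual norm ||A x - b||_2.
x_hat, resid = nnls(A, b)
```

On this noiseless, consistent problem the active set converges to the true support {1, 4} and the residual is essentially zero.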

Parallel active-set algorithms

The formulations of l1QP and NNQP sparse coding for p new instances are, respectively, min_X (1/2)tr(X^T H X) + tr(G^T X) + λ‖X‖_1 and its non-negative counterpart, where G = −A^T B collects the linear terms of all p instances. To classify multiple new instances, the initial idea in [19] and [11] is to run the sparse coding one instance at a time. The interior-point algorithm proposed in [22] is a fast large-scale sparse coding algorithm, and the proximal algorithm in [23] is a fast first-order method whose advantages for non-smooth problems have recently been highlighted. If we adapt these algorithms to solve our multiple l1QP in Equation (27) and NNQP in Equation (28), it is difficult to solve the single problems in parallel and to share computations, so the time complexity of the multiple problems is the sum of that of the individual problems. The multiple problems can, however, be solved much more efficiently by active-set algorithms. We adapt Algorithms 2 and 3 to solve multiple l1QP and NNQP problems in a parallel fashion. The individual active-set algorithms can be run in parallel while sharing the computation of matrix inverses (systems of linear equations, in essence): at each iteration, single problems having the same active set share the same systems of linear equations, which therefore need to be solved only once. For a large value of p, that is, large-scale multiple problems, active-set algorithms have a dramatic computational advantage over interior-point [22] and proximal [23] methods unless those methods are given a scheme for sharing computations. Additionally, active-set methods are more precise than interior-point methods: for feasibility, interior-point methods do not allow u_i = |x_i| and require u_i to remain strictly greater than |x_i|, whereas u_i = x_i = 0 is natural and possible when the i-th constraint is active; active-set algorithms do allow this situation.
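The computation-sharing idea can be sketched as a single shared update step: instances whose current passive (unconstrained) index sets coincide reuse one factorization. The function below is an illustrative fragment of one such iteration, not the full parallel active-set algorithm.

```python
import numpy as np

def grouped_qp_step(H, G, passive_sets):
    """One shared unconstrained solve per group of instances whose current
    passive index sets coincide. H = A^T A; column j of G is the linear term
    -A^T b_j of instance j; passive_sets[j] is instance j's passive set."""
    k, p = H.shape[0], G.shape[1]
    X = np.zeros((k, p))
    groups = {}
    for j, s in enumerate(passive_sets):
        groups.setdefault(frozenset(s), []).append(j)
    for s, cols in groups.items():
        idx = sorted(s)
        if not idx:
            continue                      # fully constrained: stays zero
        # one factorization of H[idx, idx], reused for every column in cols
        X[np.ix_(idx, cols)] = np.linalg.solve(H[np.ix_(idx, idx)],
                                               -G[np.ix_(idx, cols)])
    return X

rng = np.random.default_rng(3)
A = rng.normal(size=(6, 4))
B = rng.normal(size=(6, 3))
H, G = A.T @ A, -A.T @ B
# All three instances currently share the full passive set -> one solve total.
X = grouped_qp_step(H, G, [set(range(4))] * 3)
```

When all p instances share an active set, p linear systems collapse into one, which is where the claimed advantage over per-instance interior-point or proximal solvers comes from.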

Kernel extensions

As the optimization of l1QP and NNQP requires only inner products between instances rather than the original data, our active-set algorithms extend naturally to kernel sparse coding by replacing inner products with kernel matrices. The NS decision rule used in Algorithm 1 also requires only inner products, and the weighted K-NN rule needs only the sparse coefficient vector and the class information. Therefore the classification approach in Algorithm 1 can be extended to a kernel version. For narrative convenience, we also denote the classification approaches using l1LS, NNLS, and l1NNLS sparse coding as l1LS, NNLS, and l1NNLS, respectively; the prefix "K" is used for kernel versions.
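The dimension-free property can be sketched as follows: the solver sees only the Gram quantities H = K(D, D) and g = −K(D, b), never the (possibly infinite-dimensional) feature-space vectors. Projected gradient descent is used here as a simple stand-in for the active-set NNQP solver; the RBF kernel, data, and sizes are illustrative.

```python
import numpy as np

def nnqp_pgd(H, g, n_iter=3000):
    """NNQP min 0.5 x^T H x + g^T x s.t. x >= 0 by projected gradient
    descent (a simple stand-in for the active-set solver)."""
    step = 1.0 / np.linalg.norm(H, 2)
    x = np.zeros(len(g))
    for _ in range(n_iter):
        x = np.maximum(x - step * (H @ x + g), 0.0)
    return x

def rbf_kernel(A, B, gamma=0.5):
    """K[i, j] = exp(-gamma * ||a_i - b_j||^2) for columns of A and B."""
    d2 = (A**2).sum(0)[:, None] + (B**2).sum(0)[None, :] - 2.0 * A.T @ B
    return np.exp(-gamma * d2)

# Kernel NNLS coding of a new instance b against training data D.
rng = np.random.default_rng(4)
D = rng.normal(size=(5, 6))        # 6 training instances
b = D[:, 2]                        # a new instance identical to atom 2
H = rbf_kernel(D, D)               # K(D, D): all the solver ever needs
g = -rbf_kernel(D, b[:, None])[:, 0]
x = nnqp_pgd(H, g)
```

Since b coincides with training instance 2 and the kernel images are linearly independent here, the kernel code concentrates its weight on atom 2.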

Dictionary learning methods

We pursue a dictionary-learning-based approach for biological data based on the following two motivations. First, since the sparse-coding-only approach is a lazy learner, its optimization can be slow for large training sets; learning a concise dictionary is more efficient for future real-time applications. Second, dictionary learning may capture hidden key factors corresponding to biological pathways, which may improve classification performance. In the following, we first give the dictionary learning models using a Gaussian prior and a uniform prior, respectively. Next, we give the classification method based on dictionary learning. We then address the generic optimization framework of dictionary learning. Finally, we show that kernel versions of our dictionary learning models and the classification approach are easily obtained.

Dictionary learning models

We now give our dictionary learning models using a Gaussian prior and a uniform prior over the dictionary atoms, respectively. Both priors aim to remove the arbitrary scale interchange between the dictionary and the coefficients. Suppose D ∈ R^{m×n} is the data of n training instances, and the dictionary A to be learned has k atoms with coefficient matrix Y. If the Gaussian prior in Equation (6) is placed on the dictionary atoms, our dictionary learning models for l1LS, NNLS, and l1NNLS are, respectively: min_{A,Y} (1/2)‖D − AY‖_F^2 + (α/2)‖A‖_F^2 + λ‖Y‖_1; the same objective with the constraint Y ≥ 0 in place of the l1 term; and the same objective with both the l1 term and Y ≥ 0. The strength of Gaussian-prior-based dictionary learning is its flexibility in controlling the scales of the dictionary atoms; however, it has two model parameters, which increases the model selection burden in practice. Alternatively, in order to eliminate the parameter α, we design a uniform prior over the dictionary: p(a_i) is a constant over ‖a_i‖_2 ≤ 1 and zero elsewhere. That is, the feasible region of the dictionary atoms is the unit hypersphere centered at the origin, and all feasible atoms have equal probability. The corresponding l1LS, NNLS, and l1NNLS dictionary learning models replace the Frobenius-norm penalty on A with the constraints ‖a_i‖_2 ≤ 1, i = 1, ⋯, k.

A generic optimization framework for dictionary learning

We devise block-coordinate-descent-based algorithms for the optimization of the above six models. The main idea is that, in one step, Y is fixed and the inner product A^T A, rather than A itself, is updated; in the next step, Y is updated (a sparse coding procedure) while A^T A is fixed. This procedure is repeated until the termination conditions are satisfied. We now show that A can be obtained analytically. For the normal prior over dictionary atoms, finding A in Equations (29), (30), and (31) amounts to solving min_A (1/2)‖D − AY‖_F^2 + (α/2)‖A‖_F^2. Taking the derivative with respect to A and setting it to zero, we have A(YY^T + αI) = DY^T, hence A = D‡, where ‡ = Y^T(YY^T + αI)^{−1}. The inner product A^T A can thus be updated as A^T A = ‡^T(D^T D)‡, and we can also compute A^T D = ‡^T(D^T D). For the uniform prior as in Equation (32), updating the unnormalized A while fixing Y in Equations (33), (34), and (35) is the generalized least-squares problem min_A (1/2)‖D − AY‖_F^2. Setting the derivative with respect to A to zero gives A = D†, where † = Y^T(YY^T)^{−1}. The inner products A^T A = †^T(D^T D)† and A^T D = †^T(D^T D) are computed as in the Gaussian-prior case. The normalization of A is straightforward: each atom is divided by its l2 norm, which can be carried out on A^T A using only element-wise operations, since the atom norms are the square roots of its diagonal. Learning the inner product A^T A instead of A has the benefits of dimension-free computation and kernelization. Fixing A^T A and A^T D, Y can be obtained via our active-set algorithms; recall that the sparse coding requires only these inner products. As shown above, updating A^T A needs only its previous value and the inner products between training instances. Algorithm 4 The generic dictionary learning framework Input: K = D^T D, dictionary size k, λ Output: A^T A, Y. Initialize Y and A^T A randomly; r_prev = Inf (previous residual). for i = 1 : maxIter do: update Y by the active-set-based l1LS, NNLS, or l1NNLS sparse coding algorithm; if the Gaussian prior is used over A, update A^T A = ‡^T K ‡; if the uniform prior is used over A, update A^T A = †^T K † and normalize; if i == maxIter or i mod l == 0 then (check every l iterations) compute r_cur = f(A^T A, Y), the current residual of the dictionary learning model; if r_cur ≤ ε or r_prev − r_cur ≤ ε then break; end if; r_prev = r_cur; end if; end for.
The above derivation yields the framework for solving our dictionary learning models illustrated in Algorithm 4.
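A simplified, non-kernel instance of this alternating scheme can be sketched for the NNLS model with the uniform (unit-sphere) prior: alternate NNLS coding of every training column with the analytic least-squares dictionary update, then normalize atoms. This sketch updates A explicitly for clarity, whereas the framework above updates only A^T A; all data and sizes are synthetic.

```python
import numpy as np
from scipy.optimize import nnls

def dl_nnls(X, k, n_iter=25, seed=0):
    """Alternate (i) NNLS sparse coding of each training column and (ii) an
    analytic least-squares dictionary update with column normalization.
    A simplified non-kernel instance of the alternating framework."""
    rng = np.random.default_rng(seed)
    A = rng.uniform(0.1, 1.0, size=(X.shape[0], k))
    A /= np.linalg.norm(A, axis=0)
    for _ in range(n_iter):
        Y = np.column_stack([nnls(A, x)[0] for x in X.T])   # coding step
        A = X @ np.linalg.pinv(Y)                           # analytic update
        A /= np.maximum(np.linalg.norm(A, axis=0), 1e-12)   # unit-sphere prior
    Y = np.column_stack([nnls(A, x)[0] for x in X.T])       # final codes
    return A, Y

# Fit synthetic non-negative rank-3 data with a 3-atom dictionary.
rng = np.random.default_rng(5)
X = rng.uniform(size=(10, 3)) @ rng.uniform(size=(3, 25))
A, Y = dl_nnls(X, k=3)
rel_err = np.linalg.norm(X - A @ Y) / np.linalg.norm(X)
```

The reconstruction error is non-increasing across full coding/update cycles, so on exactly low-rank data the alternation reaches a small residual quickly.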

Classification approach based on dictionary learning

We now present the dictionary-learning-based classification approach in Algorithm 5. The dictionary learning in the training step must be consistent with the sparse coding in the prediction step. As discussed in the previous section, the sparse coding in the prediction step needs the inner products A^T A, A^T B, and B^T B, where A^T B is actually computed as ‡^T(D^T B) or †^T(D^T B). Algorithm 5 Dictionary-learning-based classification Input: D: the n training instances, c: their class labels, B: the p new instances, k: dictionary size Output: the predicted class labels of the p new instances. {training step:} 1: Normalize each training instance to have unit l2 norm. 2: Learn the dictionary inner product A^T A and the sparse coefficient matrix Y of the training instances by Algorithm 4. 3: Train a classifier f(θ) using Y (in the feature space spanned by the columns of A). {prediction step:} 1: Normalize each new instance to have unit l2 norm. 2: Obtain the sparse coefficient matrix of the new instances by solving Equation (27) or (28). 3: Predict the class labels of B using the classifier f(θ) learned in the training phase. For the Gaussian dictionary prior, the l1LS-based kernel dictionary learning and sparse coding are obtained by replacing every instance d with its image φ(d) under a mapping function φ(·), so that all inner products become kernel evaluations. Equations (30), (31), (33), (34), and (35) and their sparse coding models can be kernelized analogously. As mentioned, the optimization of the six dictionary learning models involves only inner products of instances, so their kernel extensions are easily obtained by replacing inner products with kernel matrices. Hereafter, if dictionary learning is employed in sparse representation, the prefix "DL" is used before "l1LS", "NNLS", and "l1NNLS"; if a kernel function other than the linear kernel is used in dictionary learning, the prefix "KDL" is used instead.
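Algorithm 5's train/predict split can be sketched in miniature: represent training and new instances in the dictionary's coefficient space, then apply a classifier there. The choices below (a fixed random dictionary, NNLS coding, a 1-NN classifier as f(θ), synthetic codes) are illustrative stand-ins for the learned dictionary and classifier of the full method.

```python
import numpy as np
from scipy.optimize import nnls

def code(A, B):
    """NNLS-code each column of B against the dictionary A."""
    return np.column_stack([nnls(A, b)[0] for b in B.T])

def dl_classify(A, D_train, labels, B_new):
    """Train/predict in the coefficient space: code both sets against A,
    then use a 1-NN classifier there as the trained model f(theta)."""
    Y_train = code(A, D_train)
    Y_new = code(A, B_new)
    preds = []
    for y in Y_new.T:
        j = int(np.argmin(np.linalg.norm(Y_train - y[:, None], axis=0)))
        preds.append(labels[j])
    return np.array(preds)

# Synthetic check: instances built from known codes over a random dictionary.
rng = np.random.default_rng(6)
A = rng.uniform(size=(10, 4))
C_train = np.array([[1.0, 0.9, 0.0, 0.0],
                    [0.0, 0.1, 0.0, 0.0],
                    [0.0, 0.0, 1.0, 0.9],
                    [0.0, 0.0, 0.0, 0.1]])    # columns: codes of 4 instances
labels = np.array([0, 0, 1, 1])
C_new = np.array([[0.98, 0.0], [0.02, 0.0], [0.0, 0.98], [0.0, 0.02]])
preds = dl_classify(A, A @ C_train, labels, A @ C_new)
```

Because the new instances are built from codes near the class-0 and class-1 training codes, the predictions recover their intended classes.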

Computational experiments

Two high-throughput biological data sets, a microarray gene expression data set and a protein mass spectrometry data set, are used to test the performance of our methods. The microarray data set is a collection of gene expression profiles of breast cancer subtypes [24]. It includes 158 tumor samples from five subtypes, measured on 13582 genes. The mass spectrometry data set is composed of 332 samples from a normal class and a prostate cancer class [25]. Each sample has 15154 features, that is, the mass-to-charge ratios. Our experiments are separated into two parts. The performance of sparse coding for direct classification is investigated first, with respect to accuracy and running time. Then our dimension reduction techniques using dictionary learning are tested.

Sparse coding for direct classification

When dictionary learning was not involved, the dictionary was "lazily" composed of all the available training instances. In our experiment, the active-set optimization methods for l1LS, NNLS, and l1NNLS were tested. The weighted K-NN rule and the NS rule, mentioned in Algorithm 1, were compared. We set K in the K-NN rule to the number of all training instances, an extreme case as opposed to the NN rule. Linear and radial basis function (RBF) kernels were employed. We compared our active-set algorithms with the interior-point [22] and proximal [23] methods for l1LS sparse coding (abbreviated l1LS-IP and l1LS-PX). Benchmark classifiers, including k-NN and SVM with an RBF kernel, were also compared. We employed four-fold cross-validation to partition each data set into training and test sets. All classifiers ran on the same training and test splits for fair comparison. We performed 20 runs of cross-validation and recorded the averages and standard deviations. Line or grid search was used to select the parameters of each classifier. The average accuracies of all classifiers, with the corresponding standard deviations, on both data sets are compared in Figure 1, from which we make four observations. First, the weighted K-NN rule obtained accuracies comparable to the NS rule. The advantage of the K-NN rule over the NS rule is that the former predicts class labels from the sparse coefficient vector alone, while the latter has to use the training data to compute regression residuals. The K-NN rule is therefore more efficient and should be preferred. Second, on the Prostate data, the sparse coding methods l1LS and Kl1LS achieved the best accuracy. This convinces us that sparse-coding-based classifiers can be very effective for classifying high-throughput biological data. Third, the non-negative models, including NNLS, l1NNLS, and their kernel extensions, achieved accuracies competitive with the state-of-the-art SVM on both data sets.
Fourth, the l1LS sparse coding using our active-set algorithm had the same accuracies as that using the interior-point and proximal algorithms on the Breast data. On the Prostate data, however, the proximal method yielded a worse accuracy. This implies that our active-set method converges to the global minimum, as the interior-point method does, while performance may be degraded in practice by the approximate solution obtained by the proximal method.
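The two interpretation rules compared above can be sketched as follows (our reading of the rules on a toy example, not the released MATLAB code): the weighted K-NN rule sums the sparse coefficients per class, while the NS rule reconstructs the query from each class's coefficients alone and picks the class with the smallest regression residual.

```python
def weighted_knn_rule(x, labels):
    # weighted K-NN: sum the (non-negative) coefficients per class and pick
    # the class with the largest total weight; needs only x and the labels
    score = {}
    for coef, lab in zip(x, labels):
        score[lab] = score.get(lab, 0.0) + coef
    return max(score, key=score.get)

def ns_rule(x, labels, D, b):
    # nearest-subspace: reconstruct b using each class's coefficients alone
    # and pick the class with the smallest squared regression residual
    best, best_res = None, float("inf")
    for lab in set(labels):
        recon = [sum(coef * col[f] for coef, cl, col in zip(x, labels, D)
                     if cl == lab) for f in range(len(b))]
        res = sum((bi - ri) ** 2 for bi, ri in zip(b, recon))
        if res < best_res:
            best, best_res = lab, res
    return best

# toy setup: 4 training instances (rows of D) in R^3, two classes
D = [[1.0, 0.0, 0.0], [0.9, 0.1, 0.0], [0.0, 1.0, 0.1], [0.0, 0.9, 0.2]]
labels = ["A", "A", "B", "B"]
b = [0.95, 0.05, 0.0]                  # a query close to class A
x = [0.7, 0.3, 0.0, 0.0]               # a sparse non-negative code of b

print(weighted_knn_rule(x, labels))    # -> A
print(ns_rule(x, labels, D, b))        # -> A
```

The sketch makes the efficiency argument concrete: weighted_knn_rule touches only x and the labels, while ns_rule must revisit the training matrix D for every class.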
Figure 1

Mean accuracies and standard deviations of sparse coding and benchmark methods.

The mean running times (in seconds) of cross-validation are shown in Figure 2. For better comparison, the base-2 logarithm was taken of the results. First of all, we can clearly see that the interior-point method is very slow for l1LS sparse coding. Second, our active-set method is more efficient than the proximal method on the Breast data. This is because i) active-set methods are usually the fastest for quadratic and linear programs of small and medium size; and ii) expensive computations, such as solving systems of linear equations, can be shared within the active-set method. Third, NNLS and l1NNLS have the same time complexity. This is reasonable, because both can be formulated as NNQP problems. These non-negative models are much simpler and faster than the non-smooth l1LS model. Hence, if similar performance can be obtained by l1LS and the non-negative models in an application, preference should be given to NNLS and l1NNLS.
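The shared NNQP formulation can be sketched as follows: both NNLS and l1NNLS minimize 0.5·x^T H x + g^T x subject to x ≥ 0, with H = A^T A and g = -A^T b, where l1NNLS merely adds λ to every entry of g. The solver below is a minimal projected-gradient stand-in for the authors' active-set method (the step size and iteration count are ad hoc choices for this toy problem):

```python
def nnqp(H, g, steps=2000, lr=0.05):
    # projected gradient descent for min 0.5 x^T H x + g^T x, s.t. x >= 0
    x = [0.0] * len(g)
    for _ in range(steps):
        grad = [sum(H[i][j] * x[j] for j in range(len(x))) + g[i]
                for i in range(len(x))]
        x = [max(0.0, xi - lr * gi) for xi, gi in zip(x, grad)]  # project on x>=0
    return x

# toy dictionary A (3 features x 2 atoms) and target b
A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 3.0]

H = [[sum(A[f][i] * A[f][j] for f in range(3)) for j in range(2)]
     for i in range(2)]                              # H = A^T A
g = [-sum(A[f][i] * b[f] for f in range(3)) for i in range(2)]  # g = -A^T b

x_nnls = nnqp(H, g)                                  # NNLS solution
lam = 0.5
x_l1nnls = nnqp(H, [gi + lam for gi in g])           # l1NNLS: same solver, shifted g
print([round(v, 3) for v in x_nnls])                 # -> [1.0, 2.0]
```

Because the two problems differ only in the linear term g, one NNQP solver serves both models, which is why their running times coincide in Figure 2.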
Figure 2

Log2 computing time of sparse coding and benchmark methods.


Dictionary learning for feature extraction

The performance of the various dictionary learning models with linear and RBF kernels was investigated on both the Breast and Prostate data sets. The Gaussian-prior and uniform-prior based dictionary learning models were also compared. Again, our active-set dictionary learning was compared with the interior-point [22] and proximal [23] methods. The semi-NMF based on multiplicative update rules [26] was also included in the comparison. As in the previous experiment, four-fold cross-validation was used. All methods ran on the same splits of training and test sets. We performed 20 runs of cross-validation for reliable comparison. After feature extraction by dictionary learning on the training set, a linear SVM classifier was learned on the reduced training set and used to predict the class labels of the test instances. In Figure 3, we show the mean accuracy and standard deviation of the 20 results for each method. First, we can see that the models with a Gaussian prior on the dictionary atoms obtained accuracies similar to those with the uniform prior. Second, comparing with the sparse coding methods on the Breast data as given in Figure 1, we can see that dictionary learning increases the prediction accuracy. Third, from the comparison of Figures 3 and 1, we find that the dictionary-learning-based methods DL-NNLS and DL-l1NNLS obtained accuracies similar to the sparse coding methods NNLS and l1NNLS. This convinces us that dictionary learning is a promising feature extraction technique for high-dimensional biological data. On the Prostate data, we can also see that the accuracy obtained by DL-l1LS is slightly lower than that of l1LS. This may be because the dictionary learning is unsupervised. Fourth, with the same model parameters, DL-l1LS using the active-set algorithm obtained higher accuracy than DL-l1LS-IP and DL-l1LS-PX on the Prostate data. The accuracy of DL-l1LS is also slightly higher than that of DL-l1LS-IP on the Breast data.
Furthermore, the non-negative DL-NNLS yielded the same performance as the well-known semi-NMF, which further corroborates the satisfactory performance of our dictionary learning framework. Finally, the kernel dictionary learning models achieved performance similar to their linear counterparts. We believe that the accuracy could be further improved by a suitably selected kernel.
Figure 3

Mean accuracies and standard deviations of dictionary learning methods.

We compare the mean computing times of all the feature extraction methods in Figure 4. First, we can see that DL-l1LS using the active-set algorithm is much more efficient than DL-l1LS-IP, DL-l1LS-PX, and semi-NMF using multiplicative update rules. Second, the non-negative dictionary learning models are more efficient than the l1-regularized models. Therefore, as with the sparse coding methods, priority should be given to the non-negative models when attempting to use dictionary learning in an application.
Figure 4

Log2 computing time of dictionary learning methods.


Conclusions

In this study, l1-regularized and non-negative sparse representation models are comprehensively studied for the classification of high-dimensional biological data. We give a Bayesian treatment of the models and prove that the sparse coding and dictionary learning models are in fact equivalent to MAP estimation. We use different priors on the sparse coefficient vector and the dictionary atoms, which lead to various sparse representation models. We propose parallel active-set algorithms to optimize the sparse coding models, and a generic framework for dictionary learning. We show that the sparse representation models use only inner products of instances. Using this dimension-free property, we can easily extend these models to kernel versions. In comparison with existing models for high-dimensional data, our techniques are shown to be very efficient, and our approaches obtained comparable or higher accuracies. In order to promote research on sparse representation in bioinformatics, the MATLAB implementation of the sparse representation methods discussed in this paper can be downloaded at [27]. Our Bayesian treatment may inspire readers to try other prior distributions in order to design new sparse representation models. It also helps to uncover the similarities and differences between sparse representation and other dimension reduction techniques. Our kernel versions can also be used to classify tensor data, where an observation is not a vector but a matrix or tensor [28]. They can further be applied in biomedical text mining and to interaction/relational data where only the similarities between instances are known. We will apply our techniques to other high-throughput data, such as microarray epigenomic data, gene copy number profiles, and sequence data. We will impose sparsity-inducing priors on both the dictionary atoms and the coefficients.
Inspired by Bayesian factor analysis, we will investigate variable selection methods using a sparse dictionary. Sparse dictionary analysis would help us to uncover the biological patterns hidden in high-dimensional biological data. Furthermore, combining Bayesian sparse representation and Bayesian regression leads to a Bayesian sparse representation regression model, which is very helpful for designing supervised dictionary learning. Finally, we should mention that we are working on a decomposition method for sparse coding that is efficient on large-scale biological data with at least thousands of samples.

Competing interests

The authors declare that they have no competing interests.

Authors' contributions

YL proposed the original idea, did the literature survey, implemented the methods, conducted the experiments, and wrote the first draft of this paper. AN supervised the whole research, gave constructive suggestions, and finished the final version of this paper.
References (11 in total)

1.  Support vector machine classification and validation of cancer tissue samples using microarray expression data.

Authors:  T S Furey; N Cristianini; N Duffy; D W Bednarski; M Schummer; D Haussler
Journal:  Bioinformatics       Date:  2000-10       Impact factor: 6.937

2.  Learning the parts of objects by non-negative matrix factorization.

Authors:  D D Lee; H S Seung
Journal:  Nature       Date:  1999-10-21       Impact factor: 49.962

3.  Representation learning: a review and new perspectives.

Authors:  Yoshua Bengio; Aaron Courville; Pascal Vincent
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2013-08       Impact factor: 6.226

4.  Convex and semi-nonnegative matrix factorizations.

Authors:  Chris Ding; Tao Li; Michael I Jordan
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2010-01       Impact factor: 6.226

5.  Sparse coding with an overcomplete basis set: a strategy employed by V1?

Authors:  B A Olshausen; D J Field
Journal:  Vision Res       Date:  1997-12       Impact factor: 1.886

6.  Serum proteomic patterns for detection of prostate cancer.

Authors:  Emanuel F Petricoin; David K Ornstein; Cloud P Paweletz; Ali Ardekani; Paul S Hackett; Ben A Hitt; Alfredo Velassco; Christian Trucco; Laura Wiegand; Kamillah Wood; Charles B Simone; Peter J Levine; W Marston Linehan; Michael R Emmert-Buck; Seth M Steinberg; Elise C Kohn; Lance A Liotta
Journal:  J Natl Cancer Inst       Date:  2002-10-16       Impact factor: 13.506

7.  High-Dimensional Sparse Factor Modeling: Applications in Gene Expression Genomics.

Authors:  Carlos M Carvalho; Jeffrey Chang; Joseph E Lucas; Joseph R Nevins; Quanli Wang; Mike West
Journal:  J Am Stat Assoc       Date:  2008-12-01       Impact factor: 5.033

8.  Robust face recognition via sparse representation.

Authors:  John Wright; Allen Y Yang; Arvind Ganesh; S Shankar Sastry; Yi Ma
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2009-02       Impact factor: 6.226

9.  The molecular portraits of breast tumors are conserved across microarray platforms.

Authors:  Zhiyuan Hu; Cheng Fan; Daniel S Oh; J S Marron; Xiaping He; Bahjat F Qaqish; Chad Livasy; Lisa A Carey; Evangeline Reynolds; Lynn Dressler; Andrew Nobel; Joel Parker; Matthew G Ewend; Lynda R Sawyer; Junyuan Wu; Yudong Liu; Rita Nanda; Maria Tretiakova; Alejandra Ruiz Orrico; Donna Dreher; Juan P Palazzo; Laurent Perreard; Edward Nelson; Mary Mone; Heidi Hansen; Michael Mullins; John F Quackenbush; Matthew J Ellis; Olufunmilayo I Olopade; Philip S Bernard; Charles M Perou
Journal:  BMC Genomics       Date:  2006-04-27       Impact factor: 3.969

10.  The non-negative matrix factorization toolbox for biological data mining.

Authors:  Yifeng Li; Alioune Ngom
Journal:  Source Code Biol Med       Date:  2013-04-16