
Multi-view clustering for multi-omics data using unified embedding.

Sayantan Mitra1, Sriparna Saha2, Mohammed Hasanuzzaman3.   

Abstract

In real world applications, data sets are often comprised of multiple views, which provide consensus and complementary information to each other. Embedding learning is an effective strategy for nearest neighbour search and dimensionality reduction in large data sets. This paper attempts to learn a unified probability distribution of the points across different views and generates a unified embedding in a low-dimensional space to optimally preserve neighbourhood identity. Probability distributions generated for each point for each view are combined by conflation method to create a single unified distribution. The goal is to approximate this unified distribution as much as possible when a similar operation is performed on the embedded space. As a cost function, the sum of Kullback-Leibler divergence over the samples is used, which leads to a simple gradient adjusting the position of the samples in the embedded space. The proposed methodology can generate embedding from both complete and incomplete multi-view data sets. Finally, a multi-objective clustering technique (AMOSA) is applied to group the samples in the embedded space. The proposed methodology, Multi-view Neighbourhood Embedding (MvNE), shows an improvement of approximately 2-3% over state-of-the-art models when evaluated on 10 omics data sets.


Year:  2020        PMID: 32788601      PMCID: PMC7423957          DOI: 10.1038/s41598-020-70229-1

Source DB:  PubMed          Journal:  Sci Rep        ISSN: 2045-2322            Impact factor:   4.379


Introduction

Modern data sets usually comprise multiple distinct feature representations, often referred to as multi-view data, providing consistent and complementary information[1]. For example, in the case of multilingual data, each language represents a separate view; in a biomedical data repository, a clinical sample record[2] may include patient information, gene expression intensity, clinical traits, etc. By exploiting the characteristics of different views, multi-view learning can obtain better performance than single-view learning[1]. Multi-view clustering provides a natural way of generating clusters from multi-view data and has attracted considerable attention. Multi-view learning started with canonical correlation analysis (CCA)[3] and a series of works on co-training methods[4-7]. Co-training maximizes the mutual agreement amongst different views in a semi-supervised setting. The reasons for its success have been investigated by[8] and[9]. According to their mechanisms and principles, multi-view clustering methods can be broadly divided into four typical classes: (i) subspace-based: these models learn a unified feature representation from all the views[10-16]; (ii) late-fusion-based: models under this category combine the clustering results from multiple views to obtain the final clustering[16-18]; (iii) co-training-based: methods under this category treat multi-view data by using a co-training strategy; (iv) spectral-based: under this category, methods learn an optimal similarity matrix to capture the structure of the clusters, which serves as an affinity matrix for spectral clustering[19-21]. Amongst this wide variety of multi-view clustering methods, subspace-based ones perform better and are widely studied. 
They attempt to exploit the subspace by adopting different techniques, including canonical correlation analysis (CCA)[22-25], which aims at finding linear projections of different views with maximal mutual correlation, structured sparsity[26], Gaussian processes[27,28], kernel embedding[29] and non-negative matrix factorization (NMF)[30,31]. These embedding techniques learn the latent feature set either by using a covariance or affinity matrix, or by directly projecting or factorizing the feature sets into the desired latent space, ignoring the data similarity ranking. Cao et al. proposed Diversity-induced Multi-view Subspace Clustering (DiMSC)[32], which exploits the complementary information from different views; to enforce diversity, the algorithm uses the Hilbert-Schmidt Independence Criterion (HSIC). Xie et al.[33] proposed a tensor Singular Value Decomposition (t-SVD) based multi-view subspace clustering method. It uses the tensor multi-rank to capture the complementary information between the views and solves multi-view clustering as an optimization problem. Zhang et al.[34] proposed two variations of the Latent Multi-View Subspace Clustering (LMSC) algorithm, linear LMSC (lLMSC) and generalized LMSC (gLMSC). lLMSC uses a linear correlation between each view and the latent representation, and gLMSC uses neural networks to obtain generalized relationships between the views. Clustering has many important applications in the fields of biology and medicine[35]. The rapid development of high-throughput technology makes available a large number of different omics data to study biological problems[36]. It is possible to collect multiple omics data for the same person. By examining the different omics data, it is possible to distinguish reliably between different categories of cancer[37]. 
Integration of multiple omics data can lead to a better understanding of the underlying molecular mechanisms in complex biological processes, and therefore offers more sophisticated ways to address biological or medical issues[38,39]. Thus, compared to single data types, multi-omics methods achieve better performance. So far, many approaches for data integration have been suggested in the literature. These data integration methods mainly depend on two strategies: (i) space projection methods[40], and (ii) metric (similarity measure) fusion techniques[41]. Nevertheless, these methods follow very different approaches to obtain the patterns of samples or genes from multiple data domains. Earlier methods have utilized highly correlated samples in the datasets to identify multi-dimensional genomic modules[42-44]. However, these “co-modules” can only detect the sample structures across data types and may lead to biased clustering[45]. Shen et al. proposed iCluster, which obtains the cluster structures from multi-omics data by using a joint latent variable model. Mo et al. developed iClusterPlus[46], the iCluster extension, which uses linear regression models to learn different properties of omics data. However, the major drawback of this method is that it makes some strong assumptions which may fail to capture meaningful biological information. SNF (similarity network fusion)[41], an almost assumption-free and rapid approach, can resolve such problems; it uses a local structure preservation method to modify sample similarity networks for each type of data. But SNF can only characterize pair-wise similarity (e.g., Euclidean distance) between the samples, and is sensitive to local data noise or outliers. Further, pair-wise similarity cannot capture the true underlying structure in different subspaces, leading to inaccurate clustering. Nguyen et al. proposed PINS[47] to identify clusters that are stable in response to repeated perturbation of the data. 
It integrates clusters by examining their connectivity matrices for the different omics data. Mitra et al.[48] proposed an ensemble-based multi-objective multi-view algorithm for classifying patient data; this method is computationally very expensive. One drawback common to all these algorithms is that they treat all omics equally, which may not be biologically appropriate. As a result, the clusters discovered are often poorly associated with patient outcomes. Thus, more effective integration approaches are needed. For patient stratification, multiple omics data can unfold more precise structure in the samples than can be disclosed using a single omic. Combined information from multiple omics improves the performance of the clustering algorithm. Some of the advantages of using multi-omics clustering are as follows: (i) it reduces the effect of noise in the data, (ii) each omic can reveal structures that are not present in other omics, (iii) different omics can unfold different cellular aspects. Motivated by the above requirements, in this paper, we propose a probabilistic approach to map the high-dimensional multi-omics data to a low-dimensional unified embedding preserving the neighbourhood identity across the views. It is meaningful to obtain an integrated heterogeneous feature set in a probabilistic model because different properties of the data, like variance and mean, can be combined effectively in a probability space. Under each view in the higher-dimensional space, a Gaussian is centered on every sample, and the densities under this Gaussian are used to generate a probability distribution over all the potential neighbours of the sample. 
The different probability distributions of each sample across different views are combined by conflation[49]. The aim is to approximate this unified distribution as much as possible when a similar operation is carried out in the embedded domain. Intuitively, a probabilistic embedding framework is a more conscientious approach because it circumvents the problems of different representations and incomparable scales. Further, we have applied multi-objective clustering to cluster the obtained embedded data sets. The main advantage of this technique is that it is capable of extracting different shaped clusters present in a data set. The general overview of the proposed methodology is shown in Fig. 1.
Figure 1

Different views of the data sets are combined in the probabilistic space by conflation method. The low-dimensional embedding is generated by approximating the combined probability distribution in the lower-dimensional space.

The proposed model MvNE (Multi-view Neighbourhood Embedding) is evaluated on 10 cancer data sets and the results are compared with state-of-the-art methods. Some of the benefits of the proposed methodology are as follows: MvNE combines the views in the probability space, which preserves various statistical properties of the individual views. The conflation of normal distributions coincides with the classical weighted least squares method, hence yielding best linear unbiased and maximum likelihood estimators. This provides a weighted combination of the several views, an important criterion for view combination, and hence reduces the overhead of finding optimal weights for each view. The proposed methodology is also extended to handle data sets having incomplete views. To the best of our knowledge, the current work is the first attempt at combining multiple omics data in the probability space in the biomedical domain, and the conflation method has not previously been used in the literature for combining multiple views.

Methods

In this section, we describe the process of generating a unified embedding to handle multi-view data sets.

Problem statement

Given a data set with n samples, we use X^(v), v = 1, ..., m, to represent the v-th view of the data set with d_v feature dimensions. The task is to obtain an embedding Y in the lower dimension by unifying all m views, and categorizing it into C classes. Here, Y is optimized so that the sum of the Kullback-Leibler divergences between the two distributions (computed from the higher-dimensional views X^(v) and the lower-dimensional embedding Y) is minimized.

Conflation of probability

The conflation is defined as the distribution determined by the normalized product of the probability density or probability mass functions[49]. It can be easily calculated and also minimizes the maximum loss in Shannon information in combining several independent distributions into a single distribution. The conflation of normal distributions produces the classical weighted mean squares and the maximum likelihood estimators for normally-distributed unbiased samples[49]. The traditional methods of combining probability distributions, viz., averaging the probabilities and averaging the data, are described below.

Averaging the probabilities

One of the common methods of combining probabilities is to average them over every set of values: P(x) = (P1(x) + P2(x))/2. This method has significant disadvantages. Firstly, the mean of the combined distribution is always exactly the average of the means, independent of the relative accuracy or variance of each input; it is unreasonable to weight the two distributions equally. Secondly, it generates a multi-modal distribution, whereas the desired output distribution should have the same form as the input data—normal, or at least unimodal.

Averaging the data

Another common method that does preserve normality is to average the data: here P is calculated on (X1 + X2)/2. Again, the main disadvantage of this method is that the obtained distribution has a mean exactly equal to the average of the means of the input distributions, irrespective of the relative accuracies or variances of the two inputs (shown in Fig. 2). The variance of P is never larger than the maximum of the variances of P1 and P2.
Figure 2

Example of conflation technique. The red curves are the two independent distributions, yellow curve is the probability distribution obtained by averaging the probabilities, blue curve is the probability distribution obtained by averaging the data and green curve denotes the distribution obtained by conflation technique.

The conflation of probabilities (denoted by the symbol “&”) is a method for consolidating uniformly weighted data. If P1 and P2 have probability mass functions p1(x) and p2(x), respectively, then their conflation is:

&(P1, P2)(x) = p1(x) p2(x) / Σ_y p1(y) p2(y).

In Fig. 2, we show the comparison between conflation, averaging the probabilities and averaging the data. Initially, it may seem counter-intuitive that the conflation of the two distributions produces a much narrower curve. However, if the two measurements obtained from different sources are assumed equally valid, then the overlap region between the two distributions contains the real value with relatively high probability.
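The contrast between these combination rules can be illustrated numerically. The sketch below is our own illustration (the grid, means and variances are arbitrary choices, not taken from the paper): it discretizes two Gaussian measurements and compares averaging the probabilities with conflation.

```python
import numpy as np

# Two independent discretized Gaussian measurements (hypothetical values):
# N(0, 1) and a sharper N(1, 0.5^2).
x = np.linspace(-5, 5, 1001)
p1 = np.exp(-0.5 * ((x - 0.0) / 1.0) ** 2)
p2 = np.exp(-0.5 * ((x - 1.0) / 0.5) ** 2)
p1 /= p1.sum()
p2 /= p2.sum()

avg_prob = 0.5 * (p1 + p2)             # averaging the probabilities
conflated = p1 * p2 / (p1 * p2).sum()  # conflation: normalized product

# Averaging gives the mean of the means (0.5) regardless of accuracy;
# conflation weights the sharper measurement more heavily (mean 0.8)
# and yields a narrower distribution (variance 0.2 < 0.25).
mean_avg = (x * avg_prob).sum()
mean_conf = (x * conflated).sum()
var_conf = ((x - mean_conf) ** 2 * conflated).sum()
```

The narrower conflated curve mirrors Fig. 2: the overlap region of the two inputs carries most of the combined probability mass.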

Generation of initial unified data set by combining all views

Initially, we generate a unified data set X by concatenating the views, X = [X^(1), ..., X^(m)], so that its feature dimension is the sum of the view dimensions. For the points not appearing in all the views, we replace the missing features with zeros. After obtaining X, we use a stacked autoencoder (SAE) to obtain a unified representation Z of the data set, whose dimension is the feature dimension in the embedded domain. Recent research has shown that SAEs consistently produce well-separated and semantically meaningful representations on real-world data[50]. The SAE comprises an encoder part and a decoder part. Suppose we have an architecture with an input layer, three hidden layers and an output layer. In Fig. 3, the “feature” layer is the bottleneck, i.e., the output of this layer is the required embedding. In the input layer, we provide the original sample vector as input. For example, suppose we provide a vector of size 200 as input and want the embedding to be of size 30. Then the input and output layers will have size 200 and the bottleneck layer will have size 30. The input vector is first compressed to size 30, and the 200-sized vector is then reconstructed from this compressed representation. The reconstructed vector, i.e., the value of the output layer, should be similar to the input vector. For this, we use the mean squared error between the input and output vectors as the loss function. Once the network is trained, the decoder part is discarded and only the encoder part is used to generate the embedding. The details of the network are explained below.
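One encoder/decoder pair of this scheme can be sketched in numpy as follows. This is a minimal illustration using the 200-to-30 sizes from the example above, with random data and untrained random weights; it shows the forward pass and loss only, not the paper's full training procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy denoising autoencoder layer: 200-d input, 30-d bottleneck
# (sizes from the running example; data and weights are random).
n, d_in, d_code = 64, 200, 30
X = rng.standard_normal((n, d_in))

W_enc = rng.standard_normal((d_in, d_code)) * 0.05
W_dec = rng.standard_normal((d_code, d_in)) * 0.05

def relu(a):
    return np.maximum(a, 0.0)

def forward(X, drop=0.2):
    # Dropout(.) randomly zeroes part of the input before encoding
    mask = rng.random(X.shape) > drop
    h = relu((X * mask) @ W_enc)   # encoder: compress to the bottleneck
    X_hat = h @ W_dec              # decoder: reconstruct the input
    return h, X_hat

h, X_hat = forward(X)
loss = np.mean((X - X_hat) ** 2)   # least-squares reconstruction loss
```

In the full method, each such layer is trained greedily, the layers are stacked, the whole network is fine-tuned, and the decoder is then discarded.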
Figure 3

Network structure of the stacked autoencoder. Output of the “feature” layer is the required embedding.

Each layer of the SAE is a denoising autoencoder, trained to reconstruct the output of the previous layer after random corruption[50]. The denoising autoencoder is defined as follows:

h = g1(W1 Dropout(x) + b1),    x~ = g2(W2 Dropout(h) + b2).

Here, Dropout(.) randomly sets a part of the input dimensions to 0, and g1 and g2 are the activation functions for the encoder and decoder, respectively. For training the network, the least-squares loss ||x − x~||² is minimized. As the activation function, we use rectified linear units (ReLUs)[51] for every encoder/decoder pair. After training, all the encoder and decoder layers are concatenated to form a deep autoencoder, shown schematically in Fig. 3: a multilayer deep autoencoder with a bottleneck coding layer in the middle. The reconstructed network is then fine-tuned to minimize the reconstruction loss. We discard the decoder layers and use the encoder to generate the initial embedding Z.

Generation of unified probability distribution

For each sample point i and its potential neighbour j in view v, the symmetric probability p_ij^v that i selects j as its neighbour is given by Eq. 6:

p_ij^v = (p_{j|i}^v + p_{i|j}^v) / 2n,   where   p_{j|i}^v = exp(−d_v(i, j)) / Σ_{k≠i} exp(−d_v(i, k)).

The dissimilarity d_v(i, j), computed by Eq. 8, is the scaled squared Euclidean distance between the high-dimensional samples x_i^v and x_j^v in view v:

d_v(i, j) = ||x_i^v − x_j^v||² / (2σ_i²).

Here σ_i is chosen such that the entropy of the distribution over neighbours equals log₂ k[52], where k is the effective number of nearest neighbours. After obtaining the Gaussian distribution of each sample point in each view, the combined probability for each sample pair is generated by the conflation method[49] shown in Eq. 9:

p_ij = Π_v p_ij^v / Σ_{k≠l} Π_v p_kl^v.

By the basic properties of conflation[53], the obtained unified probability p_ij is the weighted-squared-mean of the p_ij^v and is normal. The obtained P is further symmetrized by Eq. 10: p_ij ← (p_ij + p_ji)/2. In the embedded space, the induced probability q_ij that point y_i picks point y_j as its neighbour is calculated by a Student t-distribution with one degree of freedom[54], given in Eq. 11:

q_ij = (1 + ||y_i − y_j||²)^{−1} / Σ_{k≠l} (1 + ||y_k − y_l||²)^{−1}.

The Student t-distribution is used instead of a Gaussian because the density of a point is evaluated much faster under it, since it does not involve an exponential. To find the optimal embedding, the sum of the Kullback-Leibler divergences between the two distributions is minimized, as given in Eq. 12:

C = KL(P || Q) = Σ_{i≠j} p_ij log(p_ij / q_ij).
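The construction of the unified distribution can be sketched as follows. This is a simplified illustration with a fixed bandwidth sigma in place of the per-sample entropy calibration, and random matrices standing in for the omics views.

```python
import numpy as np

rng = np.random.default_rng(1)
# Two hypothetical views of the same 20 samples (random stand-ins)
views = [rng.standard_normal((20, 5)), rng.standard_normal((20, 8))]

def view_probabilities(X, sigma=1.0):
    # Gaussian neighbourhood distribution per sample, then symmetrized.
    # (The paper tunes sigma_i per sample so the entropy matches log2 k;
    # a fixed sigma is used here for brevity.)
    n = X.shape[0]
    D = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    P = np.exp(-D / (2 * sigma ** 2))
    np.fill_diagonal(P, 0.0)
    P /= P.sum(axis=1, keepdims=True)   # conditional p_{j|i}
    return (P + P.T) / (2 * n)          # symmetric p_ij for this view

# Conflation across views: normalized element-wise product (Eq. 9)
Pv = [view_probabilities(X) for X in views]
P = np.prod(Pv, axis=0)
P /= P.sum()
```

The resulting P is a single joint distribution over sample pairs, ready to be matched by the Student-t distribution Q in the embedded space.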

Extension to incomplete multi-view data-set

In this section, we show how the proposed algorithm can be extended to incomplete-view settings, where not all samples appear in every view. The only change required is in the generation of the unified probability p_ij, for which we use Eq. 13: for sample pairs occurring in more than one view, the per-view probabilities are combined by conflation as in Eq. 9, restricted to the views containing both samples; for pairs occurring in exactly one view, the original probability generated by Eq. 6 for that view is used directly. When all the views are complete, Eq. 13 reduces to Eq. 9. The rest of the methodology is the same as in the complete-view setting. Finally, to obtain the optimal embedding, Eq. 12 is minimized. The generation of the unified view is explained with examples in the supplementary file.
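This fallback rule can be sketched as follows. The membership masks are hypothetical, and the rule is simplified to operate entrywise: wherever both samples share a view, that view's probability enters the product; entries with no shared view carry no information.

```python
import numpy as np

def combine_incomplete(Pv, present):
    # Pv: per-view symmetric probability matrices (n x n).
    # present[v][i]: True if sample i appears in view v (hypothetical masks).
    # Conflate over the views covering a pair; a pair covered by a single
    # view keeps that view's probability unchanged (Eq. 13 -> Eq. 9 when
    # every view is complete).
    n = Pv[0].shape[0]
    P = np.ones((n, n))
    count = np.zeros((n, n))
    for Pm, m in zip(Pv, present):
        pair = np.outer(m, m)             # both samples occur in this view
        P = np.where(pair, P * Pm, P)
        count += pair
    P = np.where(count > 0, P, 0.0)       # no shared view: no information
    return P / P.sum()
```

With complete masks this reduces to the normalized element-wise product used in the complete-view setting.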

Generation of the final embedding

After obtaining a unified probability distribution for each sample point and a combined data set (described in the previous sections), the unified final embedding is generated. At first, instead of randomly initializing Y, we initialize it with the SAE embedding Z. We then calculate Q using Eq. 11 for Y and minimize the KL divergence in Eq. 12, optimizing the values of Y by stochastic gradient descent. Finally, the embedding is obtained. The working methodology of the proposed algorithm is shown in Algorithms 1 and 2.

Optimization

The KL divergence between the two joint probability distributions P and Q is given by Eq. 12. The equation can be written as:

C = Σ_{i≠j} p_ij log p_ij − Σ_{i≠j} p_ij log q_ij.

Before performing the derivation, we define d_ij = ||y_i − y_j|| and Z = Σ_{k≠l} (1 + d_kl²)^{−1}, so that q_ij = (1 + d_ij²)^{−1} / Z. Now, for a change in y_i, the only pairwise distances that may change are d_ij and d_ji, for all j. Hence, the gradient of the cost function C with respect to y_i is given by:

∂C/∂y_i = Σ_j (∂C/∂d_ij + ∂C/∂d_ji) (y_i − y_j)/d_ij,

where ∂C/∂d_ij is obtained by differentiating the KL divergence; only the terms with k = i and l = j give a non-zero contribution, yielding

∂C/∂d_ij = 2 (p_ij − q_ij) d_ij (1 + d_ij²)^{−1}.

Substituting this and using the symmetry of P and Q, we have the final gradient:

∂C/∂y_i = 4 Σ_j (p_ij − q_ij) (1 + d_ij²)^{−1} (y_i − y_j).
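The final gradient lends itself to a direct vectorized implementation. The sketch below is our own illustration of a plain gradient step; the paper's optimizer additionally uses momentum and an adaptive learning rate.

```python
import numpy as np

def kl_cost(P, Y):
    # C = sum_{i != j} p_ij * log(p_ij / q_ij), with Student-t q_ij
    D = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    inv = 1.0 / (1.0 + D)
    np.fill_diagonal(inv, 0.0)
    Q = inv / inv.sum()
    m = P > 0
    return float((P[m] * np.log(P[m] / Q[m])).sum())

def kl_gradient(P, Y):
    # dC/dy_i = 4 * sum_j (p_ij - q_ij) (1 + ||y_i - y_j||^2)^-1 (y_i - y_j)
    D = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    inv = 1.0 / (1.0 + D)
    np.fill_diagonal(inv, 0.0)
    Q = inv / inv.sum()
    PQ = (P - Q) * inv
    # Row-sum trick: sum_j PQ_ij * (y_i - y_j) for all i at once
    return 4.0 * (np.diag(PQ.sum(axis=1)) - PQ) @ Y
```

Iterating `Y -= eta * kl_gradient(P, Y)` from the SAE initialization drives the embedded distribution Q toward the unified distribution P.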

Generation of clusters

Here, we discuss the algorithm used to generate the clusters from the embedded sample points.

Multi-objective clustering technique

Multi-objective optimization (MOO) based clustering algorithms are better at capturing clusters of different shapes[55] and can detect the number of clusters automatically from the data set. For our experiment, we have used Archived Multi-objective Simulated Annealing (AMOSA), similar to that used in[56]. We have used centre-based encoding: the cluster centres are encoded in a solution to represent a partitioning. Variable-length encoding is used to automatically identify the number of clusters from the data set; the number of clusters in different solutions is varied over a range. The choice of algorithm is not restricted to this; any other MOO-based algorithm can be used.

Objective functions used. For our experiments, we simultaneously optimize two objective functions, viz., the Xie-Beni (XB) index[57] and the PBM index[58]. The XB index[57] computes the ratio between cluster compactness and cluster separation; its minimum value represents the optimal partitioning:

XB = [Σ_{k=1}^{K} Σ_{x ∈ C_k} ||x − c_k||²] / [n · min_{i≠j} ||c_i − c_j||²],

where K is the number of clusters, ||x − c_k|| is the distance between the cluster centre c_k and a point x within cluster C_k, and ||c_i − c_j|| is the distance between cluster centres. The PBM index[58] is defined as:

PBM = ((1/K) · (E_1/E_K) · D_K)²,   where   E_K = Σ_{k=1}^{K} Σ_{x ∈ C_k} ||x − c_k||   and   D_K = max_{i,j} ||c_i − c_j||.

Here E_1 is the corresponding sum of distances when all samples form a single cluster, c_k is the centre of the k-th cluster and C_k is the set of samples in the k-th cluster. The maximum value of the PBM index corresponds to the optimal number of clusters.

Mutation operators. There are three mutation operators to explore the search space. (i) Normal mutation: a cluster centre is randomly selected and its feature values are replaced by random values drawn from a Laplacian distribution, p(x) ∝ exp(−|x − μ|/δ), with μ the old value at the cluster centre and δ = 1.0 setting the magnitude of perturbation. (ii) Insert mutation: a set of solutions is randomly selected and the corresponding numbers of clusters are increased by 1. (iii) Delete mutation: a set of solutions is randomly selected and the corresponding numbers of clusters are decreased by 1.
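The two validity indices can be computed directly from their definitions. The sketch below is our own minimal implementation, assuming labels and cluster centres are given.

```python
import numpy as np

def xb_index(X, labels, centers):
    # Xie-Beni: compactness / (n * minimal squared centre separation).
    # Lower is better.
    compact = sum(((X[labels == k] - c) ** 2).sum()
                  for k, c in enumerate(centers))
    sep = min(((a - b) ** 2).sum()
              for i, a in enumerate(centers) for b in centers[i + 1:])
    return compact / (len(X) * sep)

def pbm_index(X, labels, centers):
    # PBM: ((1/K) * (E_1/E_K) * D_K)^2. Higher is better.
    K = len(centers)
    e1 = np.linalg.norm(X - X.mean(axis=0), axis=1).sum()
    ek = sum(np.linalg.norm(X[labels == k] - c, axis=1).sum()
             for k, c in enumerate(centers))
    dk = max(np.linalg.norm(a - b)
             for i, a in enumerate(centers) for b in centers[i + 1:])
    return ((e1 / ek) * dk / K) ** 2
```

A partitioning that matches the true structure yields a low XB value and a high PBM value, which is why the two are minimized and maximized, respectively.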

Results

This section gives an overview of the datasets used in our experiments, the experimental setup and the results obtained.

Data sets

For performance evaluation of MvNE, experiments are performed on 10 benchmark omics data sets downloaded from https://tcga-data.nci.nih.gov/tcga/. For all the data sets, the following three views are used: gene expression, miRNA expression and DNA methylation. Specifications of the datasets are shown in Table 1. Clinical data are used for obtaining the clusters for each dataset. Brief descriptions of the datasets are given below:
Table 1

Description of the different views of the data sets. The numbers in the brackets are selected features.

Dataset | Gene expression | miRNA expression | DNA methylation | Samples | #Clusters
BRCA    | 20510 (400)     | 1046 (220)       | 4885 (400)      | 684     | 4
GBM     | 12042 (400)     | 534 (110)        | 5000 (400)      | 274     | 4
OVG     | 12043 (400)     | 800 (190)        | 5000 (400)      | 291     | 3
COAD    | 20351 (400)     | 705 (170)        | 5000 (400)      | 221     | 4
LIHC    | 20531 (400)     | 705 (170)        | 5000 (400)      | 410     | 4
LUSC    | 20531 (400)     | 705 (170)        | 5000 (400)      | 344     | 4
SKCM    | 20531 (400)     | 705 (170)        | 5000 (400)      | 450     | 4
SARC    | 20531 (400)     | 1046 (241)       | 5000 (400)      | 217     | 4
KIRC    | 20531 (400)     | 705 (170)        | 5000 (400)      | 212     | 4
AML     | 20531 (400)     | 705 (168)        | 5000 (400)      | 170     | 2

Breast cancer (BRCA)

This breast cancer dataset contains samples from patients. The breast cancer dataset has 4 clusters: Her2, Basal, LumA and LumB[59,60].

Glioblastoma multiforme (GBM)

This dataset contains patient samples suffering from GBM. GBM can be classified into four different types[61]: Classical, Mesenchymal, Neural and Proneural.

Ovarian cancer (OVG)

Details of the patients having ovarian serous cystadenocarcinoma tumors are listed in this dataset. Based on the cancer stages there are three groups: stage II/A/B/C; III/A/B/C; and IV.

Colon cancer (COAD)

Data from patients suffering from Colon Adenocarcinoma are present here. Based on different stages of the cancer it is divided into 4 groups: stage I/A/B/C; II/A/B/C; III/A/B/C and IV/A/B.

Liver hepatocellular carcinoma(LIHC)

Data from patients suffering from liver cancer are listed here. Based on different stages of the cancer it is divided into 4 groups: I; II; III/A/B/C; and IV.

Lung squamous cell carcinoma (LUSC)

Data from patients suffering from lung cancer are present here. Based on different stages of the cancer it is divided into 4 groups: I/A/B/C; II/A/B; III; and IV.

Skin cutaneous melanoma(SKCM)

Data from patients suffering from Melanoma cancer is present in the dataset. Based on different stages of the cancer it is divided into 4 groups: 0, II/A/B/C; III/A/B/C; and IV/A/B/C.

Sarcoma (SARC)

SARC contains samples from patients suffering from sarcoma. Based on the cancer types it has four main groups: Leiomyosarcoma, Dedifferentiated liposarcoma, Pleomorphic MFH and Myxofibrosarcoma.

Kidney renal clear cell carcinoma (KIRC)

KIRC contains samples from patients suffering from kidney cancer. Based on different stages of the cancer it is divided into 4 groups: I/A/B/C; II/A/B/C; III/A/B/C; and IV/A/B/C.

Acute myeloid leukemia (AML)

AML dataset contains samples from patients suffering from Leukemia. Based on the type of cancer it is divided into 2 groups: Qiet and Kirc+.

Comparing methods

In this section, we discuss the algorithms used for comparison. Two of them (AvgProb and AvgData) are baselines we developed based on different techniques for combining probabilities. The details are as follows:

MCCA[62]: Canonical correlation analysis (CCA) works with only two views. Witten and Tibshirani proposed sparse multiple CCA (MCCA), which supports more than two views; it operates by maximizing the pairwise correlation between projections (cf. CCA-RLS[63]).

MultiNMF[11]: It performs NMF on each view individually, factorizing each omic, and then integrates the omics by enforcing the constraint that the factor matrices are close to a “consensus” matrix W.

DiMSC[32]: Diversity-induced Multi-view Subspace Clustering (DiMSC) uses the Hilbert-Schmidt Independence Criterion (HSIC) as a diversity term to exploit the complementary information between different views.

LRACluster[64]: This uses a latent sample representation to determine the distribution of the features. It optimizes a convex objective and offers a globally optimal solution.

PINS[47]: To combine clusters of different views, it uses a connectivity matrix. The number of clusters is chosen so that the clustering is robust to perturbation, which is obtained by adding Gaussian noise to the data.

SNF[41]: A similarity-based approach that generates a similarity network separately for each view. The networks are then fused together by an iterative process.

iClusterBayes[65]: It uses a Bayesian-regularized joint latent-variable model to detect the clusters from multi-omics data.

MVDA[44]: In this approach, the information from the various data layers (views) is incorporated at the result stage of each single-view clustering iteration. It functions by factorizing the membership matrices in a late-integration manner.

MvNE: Our proposed multi-view clustering methodology uses the conflation method to combine the views in the probabilistic domain and generates a unified embedding. We apply the multi-objective optimization algorithm AMOSA[56] on the embedded data sets to obtain the clusters; AMOSA automatically determines the number of clusters from the data set. From the obtained Pareto front, we report the results of the solutions with high NMI values.

AvgProb: As a baseline, we first generate a probability distribution of the samples on each view and then combine the distributions by averaging the probabilities over the views. The final embedding is generated by minimizing the KL divergence between the obtained average probability and the probability in the embedded domain.

AvgData: As a baseline, we generate a combined distance matrix by averaging the distance matrices of the three views. The probability distribution in the higher dimension is generated from this combined distance matrix. The final embedding is generated by minimizing the KL divergence between the generated probability and the probability in the embedded domain.

Preprocessing of data sets

Most omics data sets have far fewer samples than features. To manage different distributions, feature normalization across the different omics data is important. In addition, dimensionality reduction/feature selection is necessary to provide equal opportunities to the different omics data in the clustering process. Dimensionality reduction is also important for retaining the most significant features and reducing the computational load. We use an unsupervised feature selection technique, variance ranking, in our approach: we measure the variance of each feature. For the gene expression and DNA methylation data sets, the top 400 features with the highest variance scores are selected. For miRNA sequences, the top 22–24% of features are selected, because miRNA data have far fewer features than the other two views.
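Variance ranking amounts to keeping the highest-variance columns of the sample-by-feature matrix, e.g.:

```python
import numpy as np

def top_variance_features(X, k):
    # Unsupervised selection: keep the k columns with highest variance
    idx = np.sort(np.argsort(X.var(axis=0))[::-1][:k])
    return X[:, idx], idx
```

For gene expression, for instance, this would be called with k = 400 on the samples-by-genes matrix.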

Experimental settings

For the state-of-the-art methods, we use the code released by the corresponding authors. For our method, we empirically select the size of the low-dimensional embedding, dim, as 80 for all the data sets. The nearest-neighbour size, k, is likewise set empirically to 30 for all data sets. The total number of gradient descent iterations is set to 2000, the initial momentum to 0.5 and the final momentum to 0.9. Initially, the learning rate (η) is set to 200, and after every iteration it is updated by the adaptive learning rate scheme described by Jacobs et al.[66].

Evaluation metrics

Two measurement indices, normalized mutual information (NMI)[67] and adjusted Rand index (ARI)[68], are used to compare MvNE with the other approaches. Both metrics measure the agreement between the real and predicted partitions; higher values indicate greater similarity. NMI is defined as

NMI(C, E) = 2 I(C; E) / (H(C) + H(E)),

where C and E are the true class labels and cluster labels, respectively, I(.) is the mutual information and H(.) is the entropy.
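NMI can be computed directly from the contingency table of the two partitions. A minimal sketch of the formula above (ARI, available in standard libraries, is omitted here):

```python
import numpy as np

def nmi(true, pred):
    # NMI(C, E) = 2 * I(C; E) / (H(C) + H(E)), from label counts
    true, pred = np.asarray(true), np.asarray(pred)
    n = len(true)
    _, ci = np.unique(true, return_inverse=True)
    _, ei = np.unique(pred, return_inverse=True)
    cont = np.zeros((ci.max() + 1, ei.max() + 1))
    np.add.at(cont, (ci, ei), 1)          # contingency counts
    pij = cont / n
    pi = pij.sum(1, keepdims=True)        # marginal of C
    pj = pij.sum(0, keepdims=True)        # marginal of E
    nz = pij > 0
    mi = (pij[nz] * np.log(pij[nz] / (pi @ pj)[nz])).sum()
    h = lambda p: -(p[p > 0] * np.log(p[p > 0])).sum()
    return 2 * mi / (h(pi.ravel()) + h(pj.ravel()))
```

The measure is invariant to a relabelling of the clusters: a perfect partition scores 1 even when the cluster indices are permuted.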

Parameters study

There are two main parameters in the proposed methodology, i.e., the size of the k nearest neighbors (k) and the unified embedding dimension (dim). Under this section, we have analyzed the performance of MvNE with changes in these parameters. Results on all the ten data sets are reported in Figs. 4 and 5.
Figure 4

Change in NMI(%) with changes in k.

Figure 5

Change in NMI(%) with the changing dimension (dim) of the embedded dataset.

From Fig. 4 it is evident that when k is too small, the probability distribution carries very little information about the global structure of the clusters and focuses too heavily on the local structure, which causes the clusters to break into several sub-clusters, deteriorating the performance of the algorithm. If k is too large, the distribution fails to capture the structures of the clusters properly, causing clusters to merge, and the algorithm is not stable. Empirically, we set the value of k to 30 for all data sets. Figure 5 shows that when dim is too small, the unified embedding fails to capture enough information to reflect the structure of the data set, while when it is too large, the performance degrades. One reason for this is the use of the Student t-distribution with one degree of freedom: with increasing dimension it fails to preserve the local structure of the data set, because in higher dimensions the heavy tails comprise a relatively large portion of the probability mass. Empirically, we keep the embedded dimension at 80 for all data sets. In the SAE, the input layer size equals the size of the input vectors; for example, for gene expression values the input dimension is 400. There are three hidden layers of sizes 500, 80 and 500, respectively, and the output layer has the same size as the input layer. A fixed dropout value is used. For the multi-objective clustering algorithm, we set the parameters in accordance with[56]; we did not tune the settings of AMOSA[69] because the main focus of the paper is on generating the optimal embedding. The parameter settings (initial and final temperatures, rate of cooling, and minimum and maximum numbers of clusters) follow[56]. The algorithm is executed 20 times.

Gene marker identification

The BRCA data set includes four groups of patients, i.e., LumB, LumA, Her2 and Basal. A binary classification problem is solved to identify the most significant genes of each class: two groups are formed, one with the samples of one class and the other with the samples of the remaining classes. The signal-to-noise ratio (SNR)[70] is then computed for each gene over the two groups. It is defined as SNR = (μ1 − μ2) / (σ1 + σ2), where μ and σ are the mean and the standard deviation of the gene's expression in each group. Genes with high SNR values have high expression values in the class to which they belong, and vice versa. Finally, the 5 most up-regulated (highest SNR) and 5 most down-regulated (lowest SNR) genes are selected from the SNR list. Table 2 shows the selected gene markers for all the classes.
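A minimal sketch of this SNR ranking (the toy expression matrix and labels below are invented for illustration; the real computation runs over the BRCA gene expression data):

```python
import numpy as np

def snr(expr, labels, positive):
    """Signal-to-noise ratio per gene for a one-vs-rest split.

    expr: (samples, genes) expression matrix
    labels: per-sample class labels
    positive: the class treated as the first group
    """
    expr = np.asarray(expr, dtype=float)
    mask = np.asarray(labels) == positive
    mu1, mu2 = expr[mask].mean(axis=0), expr[~mask].mean(axis=0)
    sd1, sd2 = expr[mask].std(axis=0), expr[~mask].std(axis=0)
    return (mu1 - mu2) / (sd1 + sd2)

# Toy example: 6 samples, 3 genes; gene 0 is up-regulated in class "A".
expr = np.array([[5., 1., 2.],
                 [6., 1., 2.],
                 [5., 2., 1.],
                 [1., 1., 2.],
                 [2., 2., 1.],
                 [1., 1., 2.]])
labels = ["A", "A", "A", "B", "B", "B"]
scores = snr(expr, labels, "A")
top_up = np.argsort(scores)[::-1][:1]  # highest SNR -> up-regulated
top_down = np.argsort(scores)[:1]      # lowest SNR -> down-regulated
```

In the paper's setting, the top and bottom 5 genes of each one-vs-rest ranking are retained.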
Table 2

5 up regulated and 5 down regulated Gene markers for BRCA dataset.

LumA | LumB | Her2 | Basal
Up-regulated genes
CIRBP | ARL6IP1 | ERBB2 | DSC2
TENC1 | PTGES3 | GGCT | YBX1
KIF13B | PCNA | ACTR3 | FOXC1
COL14A1 | PLEKHF2 | STARD3 | ANP32E
NTN4 | SFRS1 | GRB7 | PAPSS1
Down-regulated genes
TUBA1C | TRIM29 | GREB1 | XBP1
FOXM1 | PPL | ESR1 | GATA3
MYBL2 | SFRP1 | MAPT | ZNF552
MKI67 | ZFP36L2 | TBC1D9 | MLPH
TPX2 | NDRG2 | BCL2 | FOXA1

Statistical significance test

The significance test is carried out using a one-way analysis of variance (ANOVA) test at the chosen significance level. The results obtained by our proposed methodology, MvNE, over 20 runs are compared with those of the other algorithms. The p-values are shown in Table 3; the reported values indicate that the obtained results are statistically significant.
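The F statistic underlying the one-way ANOVA can be sketched directly; the p-value additionally requires the F-distribution tail probability (e.g. scipy.stats.f.sf), which is omitted here. The score values are invented for illustration:

```python
import numpy as np

def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA over several groups of scores
    (e.g. NMI values of two methods across repeated runs)."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    n_total = sum(len(g) for g in groups)
    grand = np.concatenate(groups).mean()
    # Between-group and within-group sums of squares.
    ss_between = sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    df_between = len(groups) - 1
    df_within = n_total - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Clearly separated groups give a large F (hence a small p-value);
# heavily overlapping groups give an F near zero.
f_sep = one_way_anova_F([[0.41, 0.42, 0.43], [0.20, 0.21, 0.22]])
f_close = one_way_anova_F([[0.41, 0.42, 0.43], [0.40, 0.42, 0.44]])
```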
Table 3

The p-values obtained on comparing MvNE with other comparing methods in terms of NMI.

Method | BRCA | GBM | OVG | COAD | LIHC | LUSC | SKCM | SARC | KIRC | AML
MCCA | 0.0014 | 0.0091 | 0.0031 | 0.0184 | 0.0151 | 0.0073 | 0.0043 | 0.0269 | 0.0302 | 0.02546
MultiNMF | 0.0086 | 0.0076 | 0.0051 | 0.0303 | 0.0054 | 0.0019 | 0.0089 | 0.0214 | 0.0275 | 0.0139
DiMSC | 0.0056 | 0.0034 | 0.0051 | 0.0291 | 0.0104 | 0.0078 | 0.0035 | 0.0235 | 0.0207 | 0.0201
LRAcluster | 0.0036 | 0.0211 | 0.0215 | 0.0012 | 0.0036 | 0.0051 | 0.0062 | 0.0084 | 0.0051 | 0.0167
PINS | 0.0044 | 0.0112 | 0.0008 | 0.0273 | 0.0089 | 0.0045 | 0.0244 | 0.0062 | 0.0065 | 0.0163
SNF | 0.0126 | 0.0057 | 0.0086 | 0.0076 | 0.0277 | 0.0062 | 0.0119 | 0.00634 | 0.0026 | 0.0062
iClusterBayes | 0.0045 | 0.0118 | 0.0086 | 0.0124 | 0.0357 | 0.0023 | 0.0076 | 0.0048 | 0.0034 | 0.0043
MVDA | 0.0042 | 0.0132 | 0.0071 | 0.0051 | 0.0043 | 0.0073 | 0.0009 | 0.0015 | 0.0021 | 0.0077
AvgProb | 0.0051 | 0.0178 | 0.0012 | 0.0012 | 0.0057 | 0.0031 | 0.0064 | 0.0068 | 0.0074 | 0.0053
AvgData | 0.0061 | 0.0113 | 0.0091 | 0.0051 | 0.0049 | 0.0028 | 0.0098 | 0.0041 | 0.0092 | 0.0062

Discussion

In Tables 4 and 5, we compare the NMI and Adjusted Rand Index (ARI) values obtained by different clustering methods over the 10 omics data sets. In terms of NMI and ARI, our proposed methodology, MvNE, shows improvements of 2–3% and 1.8–2.9%, respectively, over state-of-the-art algorithms across all data sets. The maximum NMI and ARI values are marked in bold in Tables 4 and 5, respectively.
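For reference, NMI can be computed from the contingency table of two labelings. The arithmetic-mean normalisation used below is one common convention and may differ from the exact variant used in the paper:

```python
import numpy as np
from math import log

def nmi(labels_a, labels_b):
    """Normalized mutual information between two labelings,
    using the arithmetic-mean normalisation."""
    a, b = np.asarray(labels_a), np.asarray(labels_b)
    n = len(a)
    mi = 0.0
    for ca in np.unique(a):
        for cb in np.unique(b):
            n_ab = np.sum((a == ca) & (b == cb))
            if n_ab == 0:
                continue
            n_a, n_b = np.sum(a == ca), np.sum(b == cb)
            mi += (n_ab / n) * log(n * n_ab / (n_a * n_b))

    def entropy(x):
        _, counts = np.unique(x, return_counts=True)
        p = counts / len(x)
        return -np.sum(p * np.log(p))

    denom = 0.5 * (entropy(a) + entropy(b))
    return mi / denom if denom > 0 else 1.0
```

NMI is invariant to cluster relabeling, which is why it is a natural score for comparing predicted clusters against ground-truth classes.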
Table 4

Comparison results in terms of NMI.

Method | BRCA | GBM | OVG | COAD | LIHC | LUSC | SKCM | SARC | KIRC | AML
MvNE | 0.4192 (±0.15) | 0.4449 (±0.18) | 0.1247 (±0.01) | 0.1151 (±0.02) | 0.1024 (±0.02) | 0.2816 (±0.02) | 0.1359 (±0.01) | 0.1512 (±0.01) | 0.1137 (±0.01) | 0.4507 (±0.11)
MCCA | 0.2086 | 0.2865 | 0.0731 | 0.0784 | 0.0546 | 0.2031 | 0.0952 | 0.0993 | 0.0765 | 0.3046
MultiNMF | 0.3001 | 0.3606 | 0.0713 | 0.0657 | 0.0489 | 0.2401 | 0.0981 | 0.0836 | 0.0755 | 0.2787
DiMSC | 0.3856 | 0.4089 | 0.1015 | 0.0917 | 0.0806 | 0.2598 | 0.0996 | 0.1051 | 0.0892 | 0.3166
LRAcluster | 0.0146 | 0.0532 | 0.0304 | 0.0328 | 0.0573 | 0.0672 | 0.0483 | 0.0475 | 0.0389 | 0.3629
PINS | 0.0118 | 0.0153 | 0.0095 | 0.0459 | 0.0348 | 0.0237 | 0.0382 | 0.0262 | 0.0279 | 0.2219
SNF | 0.3581 | 0.026 | 0.0068 | 0.0332 | 0.0129 | 0.0082 | 0.0088 | 0.0233 | 0.0908 | 0.4349
iClusterBayes | 0.0121 | 0.0306 | 0.0081 | 0.0106 | 0.0258 | 0.0112 | 0.0044 | 0.0177 | 0.0108 | 0.0894
MVDA | 0.3912 | 0.4213 | 0.1063 | 0.0993 | 0.0775 | 0.2694 | 0.1195 | 0.1121 | 0.0955 | 0.2871
AvgProb | 0.0119 | 0.0281 | 0.0092 | 0.0151 | 0.0261 | 0.0108 | 0.0038 | 0.0107 | 0.0109 | 0.0604
AvgData | 0.3804 | 0.4053 | 0.1035 | 0.09193 | 0.0705 | 0.2644 | 0.1007 | 0.0891 | 0.1008 | 0.3014
Table 5

Comparison results in terms of ARI.

Method | BRCA | GBM | OVG | COAD | LIHC | LUSC | SKCM | SARC | KIRC | AML
MvNE | 0.2623 (±0.02) | 0.3609 (±0.013) | 0.0957 (±0.01) | 0.0561 (±0.01) | 0.0215 (±0.004) | 0.1571 (±0.03) | 0.0560 (±0.02) | 0.0715 (±0.01) | 0.0937 (±0.01) | 0.3915 (±0.14)
MCCA | 0.1903 | 0.2243 | 0.03031 | 0.0182 | 0.0031 | 0.1012 | 0.0093 | 0.0188 | 0.0195 | 0.1846
MultiNMF | 0.2107 | 0.25476 | 0.04112 | 0.0163 | 0.0041 | 0.1107 | 0.0102 | 0.0208 | 0.0203 | 0.1964
DiMSC | 0.2189 | 0.2806 | 0.0503 | 0.0191 | 0.0061 | 0.1142 | 0.0113 | 0.0212 | 0.0341 | 0.2137
LRAcluster | 0.0086 | 0.0076 | 0.0051 | 0.0184 | 0.0054 | 0.0098 | 0.0055 | 0.0263 | 0.0392 | 0.2546
PINS | 0.0144 | 0.0089 | 0.0045 | 0.0244 | 0.0067 | 0.0065 | 0.0016 | 0.0152 | 0.0136 | 0.1195
SNF | 0.0126 | 0.0027 | 0.0062 | 0.0119 | 0.0063 | 0.0026 | 0.00062 | 0.0238 | 0.0157 | 0.3667
iClusterBayes | 0.0045 | 0.0357 | 0.0023 | 0.0076 | 0.0048 | 0.0034 | 0.0043 | 0.0399 | 0.0288 | 0.0482
MVDA | 0.2457 | 0.3441 | 0.0614 | 0.0194 | 0.0085 | 0.1351 | 0.0145 | 0.02366 | 0.0355 | 0.2021
AvgProb | 0.021 | 0.0512 | 0.0091 | 0.00651 | 0.0096 | 0.0851 | 0.0013 | 0.0209 | 0.0205 | 0.0508
AvgData | 0.2107 | 0.3404 | 0.0716 | 0.1941 | 0.0091 | 0.1261 | 0.01321 | 0.01326 | 0.01201 | 0.2709
From the results, it can be seen that iClusterBayes[65] performs poorly, as it may get stuck at locally optimal solutions due to its complex structure. Further, in Table 6 we report the macro F1-score and Accuracy values obtained by our proposed methodology on all 10 omics datasets.
Table 6

macro F1-score and Accuracy values obtained by MvNE for all the datasets.

Dataset | macro F1-score | Accuracy
BRCA | 0.6632 | 0.6701
GBM | 0.6801 | 0.6918
OVG | 0.4883 | 0.4765
COAD | 0.4514 | 0.4531
LIHC | 0.4457 | 0.4612
LUSC | 0.5856 | 0.5771
SKCM | 0.5031 | 0.5118
SARC | 0.5503 | 0.5517
KIRC | 0.4675 | 0.4718
AML | 0.6904 | 0.7013
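The macro F1-score reported in Table 6 is the unweighted mean of the per-class F1 scores, computed after each cluster has been mapped to a class label. A minimal sketch (the cluster-to-class mapping is assumed to have been done already):

```python
import numpy as np

def macro_f1(y_true, y_pred):
    """Macro-averaged F1: the unweighted mean of per-class F1 scores."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    f1s = []
    for c in np.unique(y_true):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return float(np.mean(f1s))
```

Because every class contributes equally, macro F1 penalises poor performance on small classes, which matters for imbalanced tumour subtypes.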
The baseline method AvgProb performs poorly because averaging the per-view probabilities fails to capture the structure of the probability distributions, as discussed under "Methods". MvNE outperforms both baseline methods, AvgData and AvgProb, showing the superiority of the conflation method. The results thus support our hypothesis that generating subspaces by combining the different views in the probabilistic domain is effective. The BRCA dataset has 4 classes, so a total of 40 genes are obtained: 20 up-regulated and 20 down-regulated. Fig. 6 shows the heatmap of these genes; here, red denotes high expression, green denotes low expression, and black denotes moderate expression. Fig. 6 also indicates that genes known to characterize a specific tumour class are correspondingly up- or down-regulated.
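The contrast between AvgProb and conflation can be illustrated on a toy discrete distribution: conflation takes the normalised element-wise product of the per-view distributions, so a confident view dominates a flat (uninformative) one, whereas averaging blurs the peak. The numbers below are invented for illustration:

```python
import numpy as np

def conflate(dists, eps=1e-12):
    """Conflation of discrete distributions over the same support:
    the normalised element-wise product. Sharply peaked (more
    informative) views dominate, giving them higher effective weight."""
    dists = np.asarray(dists, dtype=float)
    prod = np.prod(dists + eps, axis=0)  # eps guards against all-zero bins
    return prod / prod.sum()

# Two views over 3 neighbours: view 1 is confident, view 2 is flat.
views = [[0.8, 0.1, 0.1],
         [1/3, 1/3, 1/3]]
p_conf = conflate(views)          # keeps the confident view's peak
p_avg = np.mean(views, axis=0)    # AvgProb-style baseline: peak is blurred
```

Two agreeing confident views sharpen each other further under conflation, which averaging cannot do.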
Figure 6

Heatmap showing the levels of expression of selected gene markers in the BRCA dataset for each subclass.

Figure 7 shows the gene expression profile plot for each group of the BRCA dataset. The compactness of the profiles shows that the clustered samples share the same pattern of gene expression, i.e., there is strong continuity amongst the samples within a cluster.
Figure 7

Gene expression profile plot in the BRCA dataset for each subclass.

In Table 3, we report the p-values obtained by comparing our proposed model with the other state-of-the-art and baseline models. All values are below the significance level, showing that the performance improvements obtained by our proposed model, MvNE, are statistically significant.

Theoretical analysis

Time complexity

The proposed methodology can be divided into three parts, viz., generation of the initial embedding using the SAE, generation of the final low-dimensional embedding, and the AMOSA algorithm for clustering. The time complexity of each part is as follows. Time complexity of SAE: in our proposed approach, we have used 3 hidden layers. The time complexity of multiplying an m × n matrix by an n × p matrix is O(mnp). Since we have 4 layers in total (3 hidden layers and 1 output layer), we require 4 weight matrices, say W1, W2, W3 and W4, to compute the output. For n training iterations over t training samples with input vectors of size i and hidden layer sizes h1, h2 and h3, the total cost of the forward and backward passes is typically O(n · t · (i·h1 + h1·h2 + h2·h3 + h3·i)). However, we have parallelized the SAE by using GPUs. Generation of the embedding from the high-dimensional space: the time complexity of generating the low-dimensional embedding for N samples is O(N²), so for very large datasets this step is slow. In the supplementary file, we show results on large datasets with more than 3000 samples, but the algorithm is best suited for biological datasets where fewer samples are available. Time complexity of AMOSA: for a population of size P, iter iterations and N samples, AMOSA has a time complexity of O(P · iter · N).
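The quadratic cost of the embedding stage can be seen from a single SNE-style gradient step, where both the embedded-space affinities Q and the KL gradient touch all N² sample pairs. This is a generic t-SNE-style sketch (symmetric affinities, Student-t kernel with one degree of freedom), not the authors' exact update:

```python
import numpy as np

def kl_embedding_step(P, Y, lr=0.1):
    """One gradient step of an SNE-style embedding.
    P: (N, N) target (unified) affinities, Y: (N, d) current embedding.
    Both Q and the gradient are computed over all N^2 pairs -> O(N^2)."""
    # Student-t affinities in the embedded space (one degree of freedom).
    d2 = np.square(Y[:, None, :] - Y[None, :, :]).sum(-1)
    num = 1.0 / (1.0 + d2)
    np.fill_diagonal(num, 0.0)
    Q = num / num.sum()
    # Gradient of KL(P || Q) w.r.t. Y in the t-SNE form.
    PQ = (P - Q) * num
    grad = 4.0 * (np.diag(PQ.sum(1)) - PQ) @ Y
    return Y - lr * grad

rng = np.random.default_rng(0)
N, d = 30, 2
P = rng.random((N, N))
np.fill_diagonal(P, 0.0)
P /= P.sum()                      # a toy stand-in for the unified distribution
Y = rng.normal(size=(N, d))
Y = kl_embedding_step(P, Y)       # positions adjusted toward matching P
```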

Convergence analysis

For the convergence analysis of our algorithm, we show the error plots for all datasets while generating the low-dimensional embedding. From Fig. 8, it can be observed that over 1000 iterations there is a monotonic decrease in the error value for all datasets, indicating the convergence of our proposed methodology.
Figure 8

Error plot for low dimension generation.


Conclusion

In this paper, we have proposed an unsupervised probabilistic approach to generate a unified neighbourhood embedding for multi-view data sets. The proposed methodology combines the multiple omics data in the probability space and then generates a unified embedding preserving the statistical properties of each view as well as the combined neighbourhood information of the samples. Assigning equal weight to each view is unlikely to be optimal for patient classification problems; one of the key benefits of our proposed methodology is that it uses a weighted combination of views, as the conflation method used to combine the different omics data automatically assigns higher weight to the more accurate omics data. Another advantage of the proposed methodology is that it can handle data with incomplete views, i.e., missing samples in some views; the results for incomplete views are shown in the supplementary file. However, a major drawback of the proposed methodology is the high time complexity of computing the embedding in the lower dimensions, so for large datasets the algorithm is slow. It is best suited for medium-sized datasets, such as patient stratification datasets, where the number of samples is generally low. Results on 10 omics datasets illustrate that our methodology provides better results than the state of the art.