| Literature DB >> 34402865 |
Eloise Withnell (1,2), Xiaoyu Zhang (1), Kai Sun (1), Yike Guo (1,3)
Abstract
The lack of explainability is one of the most prominent disadvantages of deep learning applications in omics. This 'black box' problem can undermine the credibility of biomedical deep learning models and limit their practical implementation. Here we present XOmiVAE, a variational autoencoder (VAE)-based interpretable deep learning model for cancer classification using high-dimensional omics data. XOmiVAE can reveal the contribution of each gene and each latent dimension to every classification prediction, as well as the correlation between each gene and each latent dimension. We also demonstrate that XOmiVAE can explain not only the supervised classification results but also the unsupervised clustering results produced by the deep learning network. To the best of our knowledge, XOmiVAE is one of the first activation-level interpretable deep learning models to explain novel clusters generated by a VAE. The explanations generated by XOmiVAE were validated by both downstream task performance and biomedical knowledge. In our experiments, XOmiVAE's explanations of deep learning-based cancer classification and clustering aligned with current domain knowledge, including biological annotations and the academic literature, demonstrating great potential for novel biomedical knowledge discovery from deep learning models.
Keywords: cancer classification; deep learning; explainable artificial intelligence; gene expression; omics data
Year: 2021 PMID: 34402865 PMCID: PMC8575033 DOI: 10.1093/bib/bbab315
Source DB: PubMed Journal: Brief Bioinform ISSN: 1467-5463 Impact factor: 11.622