
Verifying explainability of a deep learning tissue classifier trained on RNA-seq data.

Melvyn Yap1, Rebecca L Johnston2, Helena Foley1, Samual MacDonald1, Olga Kondrashova2, Khoa A Tran1,2,3, Katia Nones2, Lambros T Koufariotis2, Cameron Bean1, John V Pearson2, Maciej Trzaskowski4, Nicola Waddell5.   

Abstract

For complex machine learning (ML) algorithms to gain widespread acceptance in decision making, we must be able to identify the features driving the predictions. Explainability models make ML algorithms transparent; however, their reliability on high-dimensional data is unclear. To test the reliability of the explainability model SHapley Additive exPlanations (SHAP), we developed a convolutional neural network to predict tissue classification from Genotype-Tissue Expression (GTEx) RNA-seq data representing 16,651 samples from 47 tissues. Our classifier achieved an average F1 score of 96.1% on held-out GTEx samples. Using SHAP values, we identified the 2423 most discriminatory genes, of which 98.6% were also identified by differential expression analysis across all tissues. The SHAP genes reflected expected biological processes involved in tissue differentiation and function. Moreover, SHAP genes clustered tissue types with superior performance when compared to all genes, genes detected by differential expression analysis, or random genes. We demonstrate the utility and reliability of SHAP to explain a deep learning model and highlight the strengths of applying ML to transcriptome data.
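The SHAP attributions described in the abstract are based on Shapley values: a feature's contribution is its average marginal effect on the model's output over all subsets of the other features, with absent features replaced by a reference (baseline) value. The paper applies this to a CNN via the `shap` library; the toy sketch below instead computes exact Shapley values by subset enumeration for a small hypothetical "expression score" (the model `f`, weights `w`, sample `x`, and `baseline` are all illustrative assumptions, not from the paper), which only scales to a handful of features but makes the definition concrete.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at sample x.

    'Absent' features in a coalition are substituted with their
    baseline (reference) values; phi[i] is the weighted average of
    feature i's marginal contribution over all coalitions.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                present = set(S)
                with_i = [x[j] if (j in present or j == i) else baseline[j]
                          for j in range(n)]
                without_i = [x[j] if j in present else baseline[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear "tissue score" over 3 gene expression levels.
w = [2.0, -1.0, 0.5]
f = lambda v: sum(wi * vi for wi, vi in zip(w, v))
x = [3.0, 1.0, 4.0]          # sample's expression values
baseline = [1.0, 1.0, 1.0]   # reference expression

phi = shapley_values(f, x, baseline)
# For a linear model, phi[i] ≈ w[i] * (x[i] - baseline[i]),
# and the attributions sum to f(x) - f(baseline) (efficiency axiom).
```

For deep networks over thousands of genes this enumeration is infeasible, which is why approximations such as `shap.DeepExplainer` are used in practice; the efficiency property (attributions summing to the prediction minus the baseline prediction) carries over.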

Year:  2021        PMID: 33514769     DOI: 10.1038/s41598-021-81773-9

Source DB:  PubMed          Journal:  Sci Rep        ISSN: 2045-2322            Impact factor:   4.379


  5 in total

1.  Pathway importance by graph convolutional network and Shapley additive explanations in gene expression phenotype of diffuse large B-cell lymphoma.

Authors:  Jin Hayakawa; Tomohisa Seki; Yoshimasa Kawazoe; Kazuhiko Ohe
Journal:  PLoS One       Date:  2022-06-24       Impact factor: 3.752

2.  Interpretable machine learning for genomics.

Authors:  David S Watson
Journal:  Hum Genet       Date:  2021-10-20       Impact factor: 5.881

3.  A deep learning model to classify neoplastic state and tissue origin from transcriptomic data.

Authors:  James Hong; Laureen D Hachem; Michael G Fehlings
Journal:  Sci Rep       Date:  2022-06-11       Impact factor: 4.996

4.  Regulating the Safety of Health-Related Artificial Intelligence.

Authors:  Michael Da Silva; Colleen M Flood; Anna Goldenberg; Devin Singh
Journal:  Healthc Policy       Date:  2022-05

5.  Deep learning explains the biology of branched glycans from single-cell sequencing data.

Authors:  Rui Qin; Lara K Mahal; Daniel Bojar
Journal:  iScience       Date:  2022-09-19
