
EdgeSHAPer: Bond-centric Shapley value-based explanation method for graph neural networks.

Andrea Mastropietro1, Giuseppe Pasculli1, Christian Feldmann2, Raquel Rodríguez-Pérez2,3, Jürgen Bajorath2.   

Abstract

Graph neural networks (GNNs) recursively propagate signals along the edges of an input graph, integrate node feature information with graph structure, and learn object representations. Like other deep neural network models, GNNs have a notorious black-box character. For GNNs, only a few approaches are available to rationalize model decisions. We introduce EdgeSHAPer, a generally applicable method for explaining GNN-based models. The approach is devised to assess edge importance for predictions. To this end, EdgeSHAPer makes use of the Shapley value concept from game theory. As proof of concept, EdgeSHAPer is applied to compound activity prediction, a central task in drug discovery. EdgeSHAPer's edge centricity is relevant for molecular graphs, where edges represent chemical bonds. Combined with feature mapping, EdgeSHAPer produces intuitive explanations for compound activity predictions. Compared to a popular node-centric and another edge-centric GNN explanation method, EdgeSHAPer reveals higher resolution in differentiating features determining predictions and identifies minimal pertinent positive feature sets.
© 2022 The Author(s).
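A minimal sketch of the Shapley value idea the abstract refers to, not the authors' implementation: each edge's importance is its average marginal contribution to the model output over random orderings of the edge set (Monte Carlo approximation). The `toy_predict` surrogate model, the `CORE` "pharmacophore" edges, and all function names here are illustrative assumptions.

```python
import random

def shapley_edge_importance(edges, predict, n_samples=200, seed=0):
    """Monte Carlo estimate of per-edge Shapley values.

    For each sampled permutation of the edge set, an edge's marginal
    contribution is the change in the model's prediction when that edge
    is added to the edges preceding it in the permutation.
    """
    rng = random.Random(seed)
    phi = {e: 0.0 for e in edges}
    for _ in range(n_samples):
        perm = list(edges)
        rng.shuffle(perm)
        present = set()
        prev = predict(present)
        for e in perm:
            present.add(e)
            curr = predict(present)
            phi[e] += curr - prev
            prev = curr
    # Average the accumulated marginal contributions.
    return {e: v / n_samples for e, v in phi.items()}

# Toy surrogate "model" (hypothetical): scores 1.0 only if the subgraph
# contains both bonds of a two-bond motif, plus a small per-edge term.
CORE = {(0, 1), (1, 2)}
def toy_predict(edge_set):
    return (1.0 if CORE <= edge_set else 0.0) + 0.01 * len(edge_set)

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
phi = shapley_edge_importance(edges, toy_predict)
```

Because marginal contributions telescope within each permutation, the estimated values sum exactly to `predict(all edges) - predict(empty set)` (the efficiency property), and the two motif edges receive clearly higher importance than the peripheral ones.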


Keywords:  Artificial intelligence; Bioinformatics; Drugs

Year:  2022        PMID: 36134335      PMCID: PMC9483788          DOI: 10.1016/j.isci.2022.105043

Source DB:  PubMed          Journal:  iScience        ISSN: 2589-0042


References: 21 in total

1.  The graph neural network model.

Authors:  Franco Scarselli; Marco Gori; Ah Chung Tsoi; Markus Hagenbuchner; Gabriele Monfardini
Journal:  IEEE Trans Neural Netw       Date:  2008-12-09

2.  Deep learning. (Review)

Authors:  Yann LeCun; Yoshua Bengio; Geoffrey Hinton
Journal:  Nature       Date:  2015-05-28       Impact factor: 49.962

3.  XAI-Explainable artificial intelligence. (Review)

Authors:  David Gunning; Mark Stefik; Jaesik Choi; Timothy Miller; Simone Stumpf; Guang-Zhong Yang
Journal:  Sci Robot       Date:  2019-12-18

4.  Interpretation of Compound Activity Predictions from Complex Machine Learning Models Using Local Approximations and Shapley Values.

Authors:  Raquel Rodríguez-Pérez; Jürgen Bajorath
Journal:  J Med Chem       Date:  2019-09-26       Impact factor: 7.446

5.  Benchmarking Molecular Feature Attribution Methods with Activity Cliffs.

Authors:  José Jiménez-Luna; Miha Skalic; Nils Weskamp
Journal:  J Chem Inf Model       Date:  2022-01-12       Impact factor: 4.956

6.  Explainability in Graph Neural Networks: A Taxonomic Survey.

Authors:  Hao Yuan; Haiyang Yu; Shurui Gui; Shuiwang Ji
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2022-09-05       Impact factor: 9.322

7.  Principles and Practice of Explainable Machine Learning. (Review)

Authors:  Vaishak Belle; Ioannis Papantonis
Journal:  Front Big Data       Date:  2021-07-01

8.  Analyzing Learned Molecular Representations for Property Prediction.

Authors:  Kevin Yang; Kyle Swanson; Wengong Jin; Connor Coley; Philipp Eiden; Hua Gao; Angel Guzman-Perez; Timothy Hopper; Brian Kelley; Miriam Mathea; Andrew Palmer; Volker Settels; Tommi Jaakkola; Klavs Jensen; Regina Barzilay
Journal:  J Chem Inf Model       Date:  2019-08-13       Impact factor: 4.956

9.  GNNExplainer: Generating Explanations for Graph Neural Networks.

Authors:  Rex Ying; Dylan Bourgeois; Jiaxuan You; Marinka Zitnik; Jure Leskovec
Journal:  Adv Neural Inf Process Syst       Date:  2019-12

10.  The ChEMBL bioactivity database: an update.

Authors:  A Patrícia Bento; Anna Gaulton; Anne Hersey; Louisa J Bellis; Jon Chambers; Mark Davies; Felix A Krüger; Yvonne Light; Lora Mak; Shaun McGlinchey; Michal Nowotka; George Papadatos; Rita Santos; John P Overington
Journal:  Nucleic Acids Res       Date:  2013-11-07       Impact factor: 16.971

