Thanh Hoa Vo, Ngan Thi Kim Nguyen, Quang Hien Kha, Nguyen Quoc Khanh Le.
Abstract
Over the past decade, polypharmacy has become common in the treatment of multiple co-occurring diseases. However, unwanted drug-drug interactions (DDIs), which can cause unexpected adverse drug events (ADEs) in multi-drug regimens, remain a significant issue. Because artificial intelligence (AI) is now ubiquitous, many AI models have been developed to predict DDIs and support clinicians in pharmacotherapy-related decisions. Yet despite the great potential of DDI prediction models for assisting physicians with polypharmacy decisions, concerns remain about the reliability of AI models due to their black-box nature. Building AI models with explainable mechanisms can increase their transparency and address this issue. Explainable AI (XAI) promotes safety and clarity by showing how decisions are made within AI models, which is especially important in critical tasks such as DDI prediction. This review provides a comprehensive overview of AI-based DDI prediction, covering publicly available data sources for AI-DDI studies, methods for data manipulation and feature preprocessing, XAI mechanisms that promote trust in AI, and modeling approaches. Limitations and future directions of XAI in DDI prediction are also discussed.
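To make the XAI idea mentioned above concrete, the following is a minimal, hypothetical sketch of permutation feature importance, one simple model-agnostic explanation technique of the kind surveyed in this review. The model, data, and function names here are illustrative toys, not taken from any of the reviewed papers.

```python
import random

def accuracy(model, X, y):
    """Fraction of samples the model classifies correctly."""
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Drop in accuracy when one feature column is shuffled.

    A large drop means the model relies on that feature; a drop near
    zero means the feature barely influences its decisions.
    """
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    column = [x[feature] for x in X]
    rng.shuffle(column)
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, column)]
    return base - accuracy(model, X_perm, y)

# Toy "DDI classifier": flags an interaction when feature 0 exceeds 0.5;
# feature 1 is ignored, so its permutation importance is exactly 0.
model = lambda x: int(x[0] > 0.5)
X = [[0.9, 0.1], [0.2, 0.8], [0.7, 0.3], [0.1, 0.9]]
y = [1, 0, 1, 0]

print(permutation_importance(model, X, y, feature=0))
print(permutation_importance(model, X, y, feature=1))  # 0.0 for the ignored feature
```

The appeal of this family of methods is that they treat the model as a black box, so they apply equally to the traditional ML and deep learning models compared later in this review.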
Keywords: Chemical structures; Deep learning; Drug-drug interaction; Explainable artificial intelligence; Machine learning; Natural language processing
Year: 2022 PMID: 35832629 PMCID: PMC9092071 DOI: 10.1016/j.csbj.2022.04.021
Source DB: PubMed Journal: Comput Struct Biotechnol J ISSN: 2001-0370 Impact factor: 6.155
Fig. 1. PRISMA diagram showing our literature search strategy.
Input data types of all papers reviewed in this study.
| No. | Method | Authors | Year | Input data | Algorithm | Performance |
|---|---|---|---|---|---|---|
| 1 | TML | Cheng et al. | 2014 | structure | SVM | AUC ∼ 0.565 to 0.666 |
| 2 | TML | Hunta et al. | 2017 | structure | SVM | AUC = 0.901 |
| 3 | TML | Deepika et al. | 2018 | structure | meta classifier | F1-score = 0.909 |
| 4 | TML | Dhami et al. | 2018 | structure | kernel learning | Accuracy > 0.7 |
| 5 | TML | Mahadevan et al. | 2019 | structure | ensemble learning | Accuracy > 0.9 |
| 6 | TML | Zhang et al. | 2019 | structure | ensemble learning | AUC = 0.9951 |
| 7 | TML | Song et al. | 2019 | structure | SVM | AUC > 0.97 |
| 8 | TML | Qian et al. | 2019 | structure | gradient boosting | AUC = 0.689 |
| 9 | TML | Wang et al. | 2020 | structure | SVM | AUC = 0.985 |
| 10 | TML | Rohani et al. | 2020 | structure | integrated similarity-constrained matrix factorization | F1-score = 0.885 |
| 11 | TML | Zhan et al. | 2020 | structure | Bayesian networks coupled with level-wise algorithm | Precision = 0.5445 |
| 12 | TML | Huang et al. | 2020 | structure | Chemical Sequential Pattern Mining | AUC = 0.91 |
| 13 | TML | Hung et al. | 2021 | structure | ensemble learning | Accuracy = 0.7 |
| 14 | TML | Dang et al. | 2021 | structure | XGBoost | F1-score = 0.65 |
| 15 | TML | Patrick et al. | 2021 | structure | ensemble learning | AUC > 0.9 |
| 16 | TML | Dewulf et al. | 2021 | structure | combined multi-regression | AUC = 0.843 |
| 17 | TML | Mei et al. | 2021 | structure | L2-regularized logistic regression | AUC = 0.9884 |
| 18 | TML | Thomas et al. | 2011 | text | ensemble learning | F1-score = 0.657 |
| 19 | TML | Minard et al. | 2011 | text | SVM | F1-score = 0.5965 |
| 20 | TML | Garcia-Blasco et al. | 2011 | text | RF | F1-score = 0.6341 |
| 21 | TML | Boyce et al. | 2012 | text | SVM | F1-score = 0.859 |
| 22 | TML | Zhang et al. | 2012 | text | single kernel | AUC = 0.924 |
| 23 | TML | Hailu et al. | 2013 | text | SVM | F1-score = 0.5 |
| 24 | TML | Bjorne et al. | 2013 | text | Turku Event Extraction System | F1-score = 0.59 |
| 25 | TML | Bobic et al. | 2013 | text | LibLINEAR, perceptron, Naïve Bayes | F1-score = 0.704 |
| 26 | TML | Yan et al. | 2013 | text | Drug-Entity-Topic | AUC = 0.96 |
| 27 | TML | Zhang et al. | 2015 | text | Label Propagation | AUC = 0.864 |
| 28 | TML | Ben Abacha et al. | 2015 | text | hybrid CRF-based | F1-score = 0.6398 |
| 29 | TML | Bokharaeian et al. | 2016 | text | bag-of-words kernel | sign test p-value < 0.0001 |
| 30 | TML | Mahendran et al. | 2016 | text | bag-of-words | F1-score = 0.769 |
| 31 | TML | Zhang et al. | 2017 | text | ensemble learning | – |
| 32 | TML | Celebi et al. | 2019 | text | RF | AUC = 0.91 |
| 33 | TML | Javed et al. | 2021 | text | RF | Accuracy = 0.954 |
| 34 | TML | Xie et al. | 2021 | text | LR | Precision = 0.9 |
| 35 | DL | Polak et al. | 2005 | structure | ANN | AUC = 0.82 |
| 36 | DL | Herrero-Zazo et al. | 2016 | structure | ANN | F1-score = 0.64 |
| 37 | DL | Ryu et al. | 2018 | structure | DNN | Accuracy = 0.924 |
| 38 | DL | Lee et al. | 2018 | structure | RWR coupled with KNN | AUC = 0.67 |
| 39 | DL | Karim et al. | 2019 | structure | graph auto-encoders | AUC = 0.98 |
| 40 | DL | Rohani et al. | 2019 | structure | ANN | AUC from 0.954 to 0.994 |
| 41 | DL | Lee et al. | 2019 | structure | auto-encoder coupled with a deep feed-forward network | Accuracy > 0.95 |
| 42 | DL | Hou et al. | 2019 | structure | DNN | AUC = 0.942 |
| 43 | DL | Liu et al. | 2019 | structure | multilayer bidirectional LSTM | F1-score = 0.7243 |
| 44 | DL | Karim et al. | 2019 | structure | convolutional-LSTM network | F1-score = 0.92 |
| 45 | DL | Shukla et al. | 2019 | structure | convolutional mixture density RNN | Accuracy = 0.982 |
| 46 | DL | Deng et al. | 2020 | structure | multi-DNN | F1-score = 0.7585 |
| 47 | DL | Lin et al. | 2020 | structure | Knowledge Graph Neural Network | AUC = 0.9912 |
| 48 | DL | Zhang et al. | 2020 | structure | multi-modal deep auto-encoders | F1-score = 0.8498 |
| 49 | DL | Feng et al. | 2020 | structure | GCN-DNN | F1-score = 0.84 |
| 50 | DL | Shankar et al. | 2020 | structure | ANN | AUC = 0.69 |
| 51 | DL | Masumshah et al. | 2021 | structure | ANN | F1-score = 0.936 |
| 52 | DL | Zitnik et al. | 2021 | structure | spectral convolution | AUC = 0.928 |
| 53 | DL | Lin et al. | 2021 | structure | CNNs, auto-encoders with Siamese network | F1-score = 0.9117 |
| 54 | DL | Schwarz et al. | 2021 | structure | multi-modal neural network | AUPRC from 0.77 to 0.92 |
| 55 | DL | Luo et al. | 2021 | structure | graph convolutional auto-encoder network | – |
| 56 | DL | Nyamabo et al. | 2021 | structure | graph neural network | AUC = 0.9838 |
| 57 | DL | Chen et al. | 2021 | structure | integrated modules neural network | AUC = 0.9994 |
| 58 | DL | Pathak et al. | 2013 | text | Linked Data | – |
| 59 | DL | Zhao et al. | 2016 | text | syntax CNN | F1-score = 0.686 |
| 60 | DL | Liu et al. | 2016 | text | CNN | F1-score = 0.6975 |
| 61 | DL | Quan et al. | 2016 | text | multichannel CNN | F1-score = 0.702 |
| 62 | DL | Zhang et al. | 2016 | text | SVM | F1-score = 0.8497 |
| 63 | DL | Suárez-Paniagua et al. | 2017 | text | CNN | F1-score = 0.6198 |
| 64 | DL | Zheng et al. | 2017 | text | RNN with LSTM units | F1-score = 0.773 |
| 65 | DL | Kavuluru et al. | 2017 | text | character-level RNNs | F1-score = 0.7081 |
| 66 | DL | Wang et al. | 2017 | text | RNN with LSTM and an attention mechanism | F1-score = 0.715 |
| 67 | DL | Yi et al. | 2017 | text | RNN | F1-score = 0.722 |
| 68 | DL | Jiang et al. | 2017 | text | skeleton-LSTM | F1-score = 0.714 |
| 69 | DL | Li et al. | 2017 | text | relation classification framework based on topic modeling | F1-score = 0.48 |
| 70 | DL | Wang et al. | 2017 | text | LSTM | F1-score = 0.72 |
| 71 | DL | Zhang et al. | 2017 | text | hierarchical RNN | F1-score = 0.729 |
| 72 | DL | Xu et al. | 2018 | text | bidirectional LSTM network | F1-score = 0.7115 |
| 73 | DL | Sun et al. | 2018 | text | deep CNN | F1-score = 0.845 |
| 74 | DL | Lim et al. | 2018 | text | recursive neural network | F1-score = 0.838 |
| 75 | DL | Zhou et al. | 2018 | text | BiLSTM | F1-score = 0.7299 |
| 76 | DL | Zhang et al. | 2018 | text | RNN-CNN | F1-score = 0.648 |
| 77 | DL | Zitnik et al. | 2018 | text | spectral convolution | AUC = 0.928 |
| 78 | DL | Paniagua et al. | 2018 | text | CNN | F1-score = 0.6456 |
| 79 | DL | Hou et al. | 2018 | text | LSTM-DNN | F1-score = 0.875 |
| 80 | DL | Sahu et al. | 2018 | text | LSTM | F1-score = 0.6939 |
| 81 | DL | Zhang et al. | 2019 | text | variational autoencoder | F1-score = 0.579 |
| 82 | DL | Xiong et al. | 2019 | text | combined GCNN and BiLSTM | F1-score = 0.77 |
| 83 | DL | Liu et al. | 2019 | text | non-linear unsupervised neural network + RF | F1-score = 0.8498 |
| 84 | DL | Sun et al. | 2019 | text | recurrent hybrid CNN | F1-score = 0.7548 |
| 85 | DL | Shtar et al. | 2019 | text | ensemble-based classifier | AUC from 0.807 to 0.990 |
| 86 | DL | Xu et al. | 2019 | text | full-attention network | F1-score = 0.712 |
| 87 | DL | Wu et al. | 2020 | text | stacked bidirectional GRU + CNN | F1-score = 0.75 |
| 88 | DL | Zhu et al. | 2020 | text | bidirectional transformer + BiGRU | F1-score = 0.809 |
| 89 | DL | Liu et al. | 2020 | text | stacked autoencoders + weighted SVM | – |
| 90 | DL | Park et al. | 2020 | text | attention-based graph convolutional networks | F1-score = 0.7686 |
| 91 | DL | Zaikis et al. | 2020 | text | stacked Bi-LSTM + CNN | – |
| 92 | DL | Allahgholi et al. | 2020 | text | ANN | Accuracy = 0.954 |
| 93 | DL | Warikoo et al. | 2020 | text | lexically-aware Transformer-based BERT | F1-score = 0.645 |
| 94 | DL | Fatehifar et al. | 2021 | text | LSTM | F1-score = 0.783 |
TML: traditional machine learning; DL: deep learning; '–': the information was not reported in the original paper.
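Many of the structure-based TML rows above featurize a drug pair from chemical fingerprints before feeding it to a classifier such as an SVM or an ensemble. As a minimal illustration of that idea (not the method of any specific paper), the sketch below uses hypothetical binary substructure fingerprints, scores a pair by Tanimoto similarity, and stands in a trivial similarity threshold for the learned classifier.

```python
def tanimoto(fp_a, fp_b):
    """Tanimoto coefficient of two binary fingerprints given as sets of on-bits."""
    union = len(fp_a | fp_b)
    return len(fp_a & fp_b) / union if union else 0.0

# Toy fingerprints: on-bit indices of hypothetical substructure keys.
fingerprints = {
    "drugA": {1, 4, 7, 9},
    "drugB": {1, 4, 8, 9},
    "drugC": {2, 3, 5},
}

def pair_feature(d1, d2):
    """One scalar feature for a drug pair; real models use richer feature vectors."""
    return tanimoto(fingerprints[d1], fingerprints[d2])

def predict_ddi(d1, d2, threshold=0.5):
    """Placeholder classifier: structurally similar pairs are flagged as interacting."""
    return pair_feature(d1, d2) >= threshold

print(round(pair_feature("drugA", "drugB"), 3))  # 0.6
print(predict_ddi("drugA", "drugB"))  # True
print(predict_ddi("drugA", "drugC"))  # False
```

The published models replace the threshold with a trained classifier and typically combine several similarity matrices (structure, targets, side effects), but the pipeline shape — fingerprint, pairwise feature, classifier — is the same.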
Fig. 2. Overall workflow of traditional ML and DL for DDI prediction.
Fig. 3. Evolution of DDI prediction models, separated by input data type and algorithm.