| Literature DB >> 24078917 |
Lei Chen, Tao Huang, Jian Zhang, Ming-Yue Zheng, Kai-Yan Feng, Yu-Dong Cai, Kuo-Chen Chou.
Abstract
A drug side effect is an undesirable effect that occurs in addition to the drug's intended therapeutic effect. Unexpected side effects suffered by many patients are a major cause of large-scale drug withdrawals. To address this problem, the pharmaceutical industry has a strong demand for computational methods that predict the side effects of drugs. In this study, a novel computational method was developed to predict the side effects of drug compounds by hybridizing chemical-chemical and protein-chemical interactions. Compared to most previous work, our method can rank the potential side effects of any query drug according to their predicted level of risk. A training dataset and test datasets were constructed from a benchmark dataset of 835 drug compounds to evaluate the method. In a jackknife test on the training dataset, the 1st-order prediction accuracy was 86.30%, while it was 89.16% on the test dataset. The new method is expected to become a useful tool for drug design, and the findings obtained by hybridizing various interactions in a network system may also provide useful insights for in-depth pharmacological research, particularly at the level of systems biomedicine.
Year: 2013 PMID: 24078917 PMCID: PMC3776367 DOI: 10.1155/2013/485034
Source DB: PubMed Journal: Biomed Res Int Impact factor: 3.411
Figure 1. A histogram of the number of drugs versus the number of side effects.
Prediction accuracies of the interaction-based and similarity-based methods at the first 20 prediction orders, in identifying the side effects of drugs in the training and test datasets.
| Prediction order | Interaction-based (training) | Interaction-based (test) | Similarity-based (training) | Similarity-based (test) | Difference (training)^a | Difference (test)^b |
|---|---|---|---|---|---|---|
| 1 | 86.30% | 89.16% | 83.64% | 87.95% | 2.66% | 1.20% |
| 2 | 80.45% | 83.13% | 79.12% | 83.13% | 1.33% | 0.00% |
| 3 | 77.13% | 84.34% | 75.00% | 79.52% | 2.13% | 4.82% |
| 4 | 72.61% | 81.93% | 71.41% | 75.90% | 1.20% | 6.02% |
| 5 | 73.40% | 77.11% | 68.22% | 74.70% | 5.19% | 2.41% |
| 6 | 68.75% | 75.90% | 66.89% | 71.08% | 1.86% | 4.82% |
| 7 | 67.69% | 67.47% | 64.76% | 57.83% | 2.93% | 9.64% |
| 8 | 64.23% | 65.06% | 59.97% | 65.06% | 4.26% | 0.00% |
| 9 | 63.70% | 68.67% | 58.78% | 57.83% | 4.92% | 10.84% |
| 10 | 57.71% | 57.83% | 57.31% | 60.24% | 0.40% | −2.41% |
| 11 | 59.18% | 60.24% | 56.38% | 67.47% | 2.79% | −7.23% |
| 12 | 59.18% | 69.88% | 56.65% | 51.81% | 2.53% | 18.07% |
| 13 | 57.31% | 61.45% | 54.79% | 53.01% | 2.53% | 8.43% |
| 14 | 55.85% | 59.04% | 53.86% | 62.65% | 1.99% | −3.61% |
| 15 | 54.12% | 54.22% | 50.66% | 57.83% | 3.46% | −3.61% |
| 16 | 53.86% | 59.04% | 52.66% | 55.42% | 1.20% | 3.61% |
| 17 | 51.33% | 39.76% | 50.00% | 60.24% | 1.33% | −20.48% |
| 18 | 54.52% | 62.65% | 51.73% | 53.01% | 2.79% | 9.64% |
| 19 | 50.00% | 56.63% | 50.00% | 38.55% | 0.00% | 18.07% |
| 20 | 47.21% | 44.58% | 47.74% | 51.81% | −0.53% | −7.23% |
^a Interaction-based training accuracy minus similarity-based training accuracy.
^b Interaction-based test accuracy minus similarity-based test accuracy.
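As a cross-check, the two difference columns can be recomputed from the accuracy columns. The sketch below does so for the first three prediction orders; small discrepancies (e.g. 1.21 versus the reported 1.20 at order 1 on the test dataset) are expected because the published percentages are themselves rounded:

```python
# Recompute the "Difference" columns of the table above:
# interaction-based accuracy minus similarity-based accuracy, per dataset.
rows = [
    # (order, interaction_train, interaction_test, similarity_train, similarity_test)
    (1, 86.30, 89.16, 83.64, 87.95),
    (2, 80.45, 83.13, 79.12, 83.13),
    (3, 77.13, 84.34, 75.00, 79.52),
]
for order, it_tr, it_te, si_tr, si_te in rows:
    print(order, round(it_tr - si_tr, 2), round(it_te - si_te, 2))
```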
Figure 2. A plot of the prediction accuracy of the two methods on the training dataset versus the prediction order.
Figure 3. A plot of the prediction accuracy of the two methods on the test dataset versus the prediction order.