| Literature DB >> 34273053 |
Eelke B Lenselink, Pieter F W Stouten.
Abstract
Accurate prediction of lipophilicity (logP) based on molecular structures is a well-established field. Predictions of logP are often used to drive forward drug discovery projects. Driven by the SAMPL7 challenge, in this manuscript we describe the steps that were taken to construct a novel machine learning model that can predict and generalize well. This model is based on the recently described Directed Message Passing Neural Networks (D-MPNNs). Further enhancements included the inclusion of additional datasets from ChEMBL (RMSE improvement of 0.03) and the addition of helper tasks (RMSE improvement of 0.04). To the best of our knowledge, the concept of adding predictions from other models (Simulations Plus logP and logD@pH7.4, respectively) as helper tasks is novel and could be applied in a broader context. The final model that we constructed and used to participate in the challenge ranked 2nd out of 17 ranked submissions, with an RMSE of 0.66 and an MAE of 0.48 (submission: Chemprop). The model also performs well on other datasets, especially when applied retrospectively to the SAMPL6 challenge, where it would have ranked first out of all submissions (RMSE of 0.35). Although the model works well, we conclude with suggestions that are expected to improve it even further.
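The helper-task idea described in the abstract, using another model's predictions as auxiliary training targets alongside the experimental values, amounts to building a multi-column target table for a multitask model. A minimal sketch follows; the column names, file layout, and example values are hypothetical, not the authors' actual data:

```python
import pandas as pd

# Hypothetical sketch: join experimental logP values with "helper"
# targets (predictions from another model, e.g. a commercial logP/logD
# tool). A multitask D-MPNN such as Chemprop can then train on all
# target columns jointly; molecules lacking a helper value keep NaN,
# which multitask trainers typically mask out of the loss.
primary = pd.DataFrame({
    "smiles": ["CCO", "c1ccccc1", "CC(=O)O"],
    "logP_exp": [-0.31, 2.13, -0.17],   # experimental values (illustrative)
})
helper = pd.DataFrame({
    "smiles": ["CCO", "c1ccccc1"],
    "helper_logP": [-0.24, 2.05],       # another model's predictions
    "helper_logD74": [-0.24, 2.05],
})

# Left join keeps molecules without helper predictions (NaN targets).
targets = primary.merge(helper, on="smiles", how="left")
print(targets)
```

The same table could instead be passed as extra input descriptors (compare models 9 and 10 in the optimization table), which is the other variant the authors evaluated.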
Keywords: D-MPNN; Multitask machine learning; SAMPL7; logP prediction
Year: 2021 PMID: 34273053 PMCID: PMC8367913 DOI: 10.1007/s10822-021-00405-6
Source DB: PubMed Journal: J Comput Aided Mol Des ISSN: 0920-654X Impact factor: 3.686
Table 1 Overview of the optimization of the model and its performance (R2, RMSE, Spearman ρ) on the test set constructed for this challenge
| Model | Description | R2 | RMSE | Spearman ρ |
|---|---|---|---|---|
| – | AlogP | 0.83 [0.71,0.90] | 0.73 [0.55,0.93] | 0.90 [0.85,0.94] |
| – | XlogP3 | 0.85 [0.75,0.92] | 0.67 [0.48,0.87] | 0.91 [0.87,0.95] |
| – | S+ logP | 0.95 [0.91,0.97] | 0.40 [0.32,0.48] | 0.97 [0.94,0.98] |
| 1 | default | 0.93 [0.89,0.96] | 0.45 [0.36,0.57] | 0.96 [0.94,0.97] |
| 2 | 1 + rdkit | 0.93 [0.89,0.96] | 0.45 [0.37,0.55] | 0.96 [0.94,0.98] |
| 3 | rdkit only | 0.88 [0.82,0.92] | 0.60 [0.50,0.70] | 0.94 [0.91,0.96] |
| 4 | 1 + ChEMBL merged | 0.88 [0.81,0.92] | 0.60 [0.51,0.71] | 0.94 [0.92,0.96] |
| 5 | 1 + ChEMBL separate | 0.93 [0.88,0.95] | 0.47 [0.38,0.58] | 0.96 [0.94,0.98] |
| 6 | 5 + AZ_logD7.4 | 0.94 [0.91,0.96] | 0.42 [0.35,0.50] | 0.97 [0.95,0.97] |
| 7 | 5 + AZ_ADME | 0.94 [0.90,0.96] | 0.44 [0.36,0.51] | 0.97 [0.95,0.98] |
| 8 | 6 + hyperopt parameters | 0.93 [0.88,0.95] | 0.47 [0.39,0.58] | 0.96 [0.94,0.97] |
| 9 | 6 + S+ logP/logD7.4 as tasks | 0.95 [0.93,0.97] | 0.38 [0.32,0.44] | 0.97 [0.96,0.98] |
| 10 | 6 + S+ logP/logD7.4 as descriptors | 0.95 [0.92,0.97] | 0.39 [0.34,0.44] | 0.97 [0.96,0.98] |
| 11 | 1, ensemble of 10 | 0.94 [0.89,0.96] | 0.44 [0.35,0.55] | 0.96 [0.94,0.98] |
| 12 | 9, ensemble of 10 | 0.95 [0.92,0.97] | 0.39 [0.33,0.46] | 0.97 [0.96,0.98] |
The ordinal model numbers in the left-most column indicate the sequence in which the models were developed: for example model 6 (5 + AZ_logD7.4) means that the settings/data of model 5 were used and the AZ_logD7.4 data were added. The 95% confidence interval for the different performance metrics is shown between square brackets
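The bracketed 95% confidence intervals in the tables are typically obtained by bootstrap resampling of the test set. A minimal sketch for RMSE, assuming a simple percentile bootstrap (the record does not specify the authors' exact procedure), using the six Table 3 values as a toy input:

```python
import numpy as np

def bootstrap_rmse_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Point estimate and percentile-bootstrap CI for RMSE over paired data."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)           # resample pairs with replacement
        err = y_true[idx] - y_pred[idx]
        stats.append(np.sqrt(np.mean(err ** 2)))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    point = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return point, lo, hi

# Experimental vs model 12_Full values for the six compounds in Table 3.
point, lo, hi = bootstrap_rmse_ci([0.85, 1.76, 0.76, 1.45, 1.04, 1.18],
                                  [2.51, 3.16, 2.05, 1.36, 1.11, 1.03])
print(f"RMSE {point:.2f} [{lo:.2f},{hi:.2f}]")
```

The same resampled indices can be reused to bootstrap R2 and Spearman ρ so that all three intervals come from identical resamples.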
Fig. 1 Scatter plot of the performance of the final model (experimental logP versus predicted logP) on the test set. Above the plot is a distribution histogram of the predictions; to the right, a distribution histogram of the experimental values. The shaded area (very close to the identity line) represents the 95% confidence interval for the regression estimate
Fig. 2 Scatter plot of the performance of the final model (experimental logP versus predicted logP) on the SAMPL7 molecules. The compounds discussed in the text and shown in Table 3 are labeled. Above the plot is a distribution histogram of the predictions; to the right, a distribution histogram of the experimental values. The shaded area represents the 95% confidence interval for the regression estimate
Table 2 Overview of the performance of the final multitask ensemble model (12_Full), used for the challenge, the singletask ensemble model (11_Full), and several commercial logP prediction tools on the SAMPL7, SAMPL6 and Martel et al. data sets [4]
| Method | Dataset | R2 | RMSE | Spearman ρ |
|---|---|---|---|---|
| AlogP | SAMPL7 | − 0.30 [− 1.78,0.34] | 0.82 [0.59,1.01] | 0.42 [− 0.09,0.73] |
| XlogP3 | SAMPL7 | 0.01 [− 1.12,0.46] | 0.72 [0.55,0.87] | 0.52 [0.07,0.78] |
| S+ logP | SAMPL7 | 0.06 [− 1.23,0.64] | 0.70 [0.41,0.93] | 0.62 [0.19,0.87] |
| Model 11_Full | SAMPL7 | − 0.17 [− 1.49,0.38] | 0.78 [0.52,1.01] | 0.60 [0.13,0.86] |
| Model 12_Full | SAMPL7 | 0.17 [− 0.95,0.65] | 0.66 [0.40,0.89] | 0.63 [0.20,0.91] |
| AlogP | SAMPL6 | 0.56 [− 0.73,0.84] | 0.44 [0.25,0.62] | 0.83 [0.32,0.97] |
| XlogP3 | SAMPL6 | 0.54 [− 0.69,0.82] | 0.45 [0.29,0.58] | 0.71 [0.05,0.94] |
| S+ logP | SAMPL6 | 0.42 [− 1.17,0.80] | 0.51 [0.32,0.65] | 0.71 [0.03,0.94] |
| Model 11_Full | SAMPL6 | 0.71 [− 0.25,0.90] | 0.36 [0.24,0.46] | 0.85 [0.40,0.99] |
| Model 12_Full | SAMPL6 | 0.75 [− 0.08,0.93] | 0.34 [0.17,0.46] | 0.82 [0.30,0.99] |
| AlogP | Martel et al. | − 0.15 [− 0.34,− 0.00] | 1.27 [1.19,1.34] | 0.73 [0.69,0.76] |
| XlogP3 | Martel et al. | 0.04 [− 0.11,0.16] | 1.16 [1.10,1.21] | 0.78 [0.75,0.81] |
| S+ logP | Martel et al. | − 0.26 [− 0.45,− 0.10] | 1.33 [1.26,1.39] | 0.71 [0.67,0.75] |
| Model 11_Full | Martel et al. | − 0.33 [− 0.51,− 0.18] | 1.36 [1.31,1.41] | 0.74 [0.70,0.77] |
| Model 12_Full | Martel et al. | − 0.00 [− 0.14,0.12] | 1.18 [1.13,1.23] | 0.76 [0.73,0.80] |
The 95% confidence interval for the different performance metrics is shown between square brackets.
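Models 11_Full and 12_Full are ensembles of ten independently trained networks (see Table 1). The conventional way to combine such an ensemble is to average the members' predictions, which reduces the variance component of the error; the spread across members also gives a cheap uncertainty estimate. A hypothetical sketch with simulated member predictions (not the authors' code or data):

```python
import numpy as np

# Simulate an ensemble of 10 members predicting logP for 3 molecules.
# In practice each row would come from one independently trained D-MPNN.
rng = np.random.default_rng(42)
true_logp = np.array([0.85, 1.76, 0.76])
members = true_logp + rng.normal(scale=0.3, size=(10, 3))

# Ensemble prediction: mean over members, one value per molecule.
ensemble = members.mean(axis=0)

# Per-molecule standard deviation across members as an uncertainty proxy.
uncertainty = members.std(axis=0)
```

Averaging in this way is why the ensembles (models 11 and 12 in Table 1) slightly outperform their single-model counterparts (models 1 and 9).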
Table 3 The three compounds with the largest errors (SM43, SM42 and SM36) and the three with the smallest errors (SM26, SM37 and SM28) for model 12_Full
| ID | Experimental | Model 12_Full | TFE MLR | COSMO-RS |
|---|---|---|---|---|
| SM43 | 0.85 ± 0.01 | 2.51 ± 0.10 | 0.38 | 2.59 |
| SM42 | 1.76 ± 0.03 | 3.16 ± 0.05 | 1.57 | 3.48 |
| SM36 | 0.76 ± 0.05 | 2.05 ± 0.10 | 2.63 | 2.29 |
| SM37 | 1.45 ± 0.10 | 1.36 ± 0.11 | 1.44 | 1.72 |
| SM26 | 1.04 ± 0.01 | 1.11 ± 0.06 | 1.18 | 1.22 |
| SM28 | 1.18 ± 0.08 | 1.03 ± 0.06 | 1.87 | 0.65 |
(Structure images from the original table are not reproduced here.)
The SEMs for both the experimental data and the predictions by model 12_Full are given after the ± sign. Results from two other methods (one statistical, one physical) that participated in the challenge, TFE MLR and COSMO-RS, are shown as a reference [34, 35]