Marc Labriffe, Jean-Baptiste Woillard, Jean Debord, Pierre Marquet.
Abstract
Everolimus is an immunosuppressant with a narrow therapeutic index and large between-patient variability. The area under the concentration-versus-time curve (AUC) is the best marker of exposure, but measuring it requires collecting many blood samples. The objective of this study was to train machine learning (ML) algorithms on pharmacokinetic (PK) profiles from kidney transplant recipients, on simulated profiles, or on both, and to compare their performance at estimating everolimus AUC0-12h from a limited number of predictors against an independent set of full PK profiles from patients, as well as against the corresponding maximum a posteriori Bayesian estimates (MAP-BE). XGBoost was first trained on 508 patient interdose AUCs estimated using MAP-BE, and then on 500-10,000 rich interdose PK profiles simulated using previously published population PK parameters. The predictors used were: predose, ~1 h, and ~2 h whole blood concentrations, the differences between these concentrations, the relative deviations from the theoretical sampling times, the morning dose, patient age, and time elapsed since transplantation. The best results were obtained with XGBoost trained on 5016 simulated profiles. AUC estimation in an external dataset of 114 full PK profiles was excellent (root mean square error [RMSE] = 10.8 μg*h/L) and slightly better than with MAP-BE (RMSE = 11.9 μg*h/L). Using more profiles (n = 10,035) did not improve the ML algorithm's performance. Mixing patient and simulated profiles contributed significantly only when they were present in balanced numbers, ~500 of each (RMSE = 12.5 μg*h/L), compared with patient data alone (RMSE = 18.0 μg*h/L).
Year: 2022 PMID: 35599364 PMCID: PMC9381914 DOI: 10.1002/psp4.12810
Source DB: PubMed Journal: CPT Pharmacometrics Syst Pharmacol ISSN: 2163-8306
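The comparison metrics used throughout the tables below (RMSE, normalized RMSE, relative MPE, and the number of prediction errors outside the ±20% interval) follow standard definitions. A minimal sketch, assuming relative MPE is the mean of the per-profile relative errors and normalized RMSE is RMSE divided by the mean of the reference AUCs (as stated in the table notes); function names are illustrative, not from the paper:

```python
import math

def rmse(pred, ref):
    """Root mean square error between predicted and reference AUCs."""
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref))

def normalized_rmse_pct(pred, ref):
    """RMSE divided by the mean of the reference AUCs, as a percentage."""
    return 100.0 * rmse(pred, ref) / (sum(ref) / len(ref))

def relative_mpe_pct(pred, ref):
    """Mean of the per-profile relative prediction errors, as a percentage."""
    return 100.0 * sum((p - r) / r for p, r in zip(pred, ref)) / len(ref)

def n_outside_20pct(pred, ref):
    """Number of predictions whose relative error exceeds ±20%."""
    return sum(abs(p - r) / r > 0.20 for p, r in zip(pred, ref))
```

These are the quantities tabulated per dataset (train, test, external validation) for each trained algorithm.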
Characteristics of the features used for the training and validation of the first XGBoost algorithm based on 508 patient pharmacokinetic profiles
| Characteristic | Train set | Test set | External validation set (n = 114) |
|---|---|---|---|
| Time between transplantation and everolimus blood concentration sampling, months | 3.95 [1.97, 11.84] | 3.95 [1.97, 11.84] | 14.76 [4.97, 105.90] |
| AUC0‐12h, μg*h/L | 102 [74, 142] | 101 [73, 145] | 96 [69, 125] |
| Patient age, year | 47 [35, 57] | 47 [39, 57] | 50 [40, 59] |
| Morning dose, mg | 1.50 [0.75, 1.50] | 1.50 [0.75, 1.50] | 1.00 [0.56, 2.00] |
| Trough level (C0), μg/L | 5.4 [3.7, 7.9] | 5.5 [3.5, 8.6] | 5.4 [3.9, 7.4] |
| Concentration at 1 h; C1, μg/L | 14.2 [9.2, 20.5] | 13.3 [8.7, 19.9] | 15.2 [10.2, 21.0] |
| Concentration at 2 h; C2, μg/L | 13.2 [9.0, 18.4] | 12.9 [9.2, 18.3] | 11.5 [8.4, 15.6] |
| Deviation from the 1‐h theoretical time, % | 0 [0, 0] | 0 [0, 0] | 0 [0, 0] |
| Deviation from the 2‐h theoretical time, % | 0 [0, 4] | 0 [0, 2] | 0 [0, 0] |
| Concentration difference between C1 and C0 | 8.5 [4.2, 13.1] | 7.2 [3.3, 11.9] | 9.5 [5.6, 14.1] |
| Concentration difference between C1 and C2 | 1.5 [−1.3, 4.6] | 0.3 [−2.1, 3.6] | 3.6 [0.6, 6.4] |
| Concentration difference between C2 and C0 | 7.1 [4.7, 10.8] | 7.3 [4.3, 10.0] | 5.8 [3.9, 9.1] |
| Reference AUCs: number of samples | 3 | 3 | 10–12 |
| Reference AUCs: method used | MAP‐BE | MAP‐BE | Trapezoidal rule |
Note: Medians [interquartile ranges] are presented here.
Abbreviations: AUC, area under the curve; ISBA, Immunosuppressant Bayesian Dose Adjustment; MAP‐BE, maximum a posteriori Bayesian estimation; XGBoost, extreme gradient boosting, an optimized gradient boosting machine learning method.
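The predictor set listed in the table above can be assembled from the three sampled concentrations and the patient covariates. A minimal sketch, assuming deviations are expressed relative to the 1-h and 2-h theoretical sampling times; the function and field names are illustrative, not from the paper:

```python
def build_features(c0, c1, c2, t1_h, t2_h, dose_mg, age_y, months_post_tx):
    """Assemble the predictor vector described in the feature table.

    c0, c1, c2     : whole-blood concentrations (ug/L) at predose, ~1 h, ~2 h
    t1_h, t2_h     : actual sampling times (h) of the ~1 h and ~2 h samples
    dose_mg        : morning everolimus dose (mg)
    age_y          : patient age (years)
    months_post_tx : time elapsed since transplantation (months)
    """
    return {
        "C0": c0,
        "C1": c1,
        "C2": c2,
        "C1_minus_C0": c1 - c0,
        "C1_minus_C2": c1 - c2,
        "C2_minus_C0": c2 - c0,
        "dev_from_1h_pct": 100.0 * (t1_h - 1.0) / 1.0,
        "dev_from_2h_pct": 100.0 * (t2_h - 2.0) / 2.0,
        "dose_mg": dose_mg,
        "age_y": age_y,
        "months_post_tx": months_post_tx,
    }
```

An XGBoost regressor (e.g., `xgboost.XGBRegressor`) would then be fit on these features against the reference AUC0-12h values; that training step is omitted here.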
Performance of the XGBoost algorithms at estimating everolimus AUC0‐12h in the different training, testing, and external validation datasets
| Training data | Metric | Train set (XGBoost) | Test set (XGBoost) | External validation set (XGBoost) | External validation set (MAP‐BE) |
|---|---|---|---|---|---|
| 508 patient PK profiles | RMSE, μg*h/L | 15.2 | 15.4 | 18.0 | 11.9 |
| | Normalized RMSE (%) | 13.5 | 13.8 | 17.2 | 11.2 |
| | r² | 0.921 | 0.922 | 0.873 | 0.952 |
| | Relative MPE (%) | 1.9 | 4.5 | 4.5 | 3.0 |
| | Number of MPE out of the ±20% interval, n (%) | 41 (10.8%) | 20 (15.7%) | 17 (14.9%) | 7 (7.4%) |
| 500 simulated + 508 patient PK profiles | RMSE, μg*h/L | 32.1 | 23.3 | 12.5 | 11.9 |
| | Normalized RMSE (%) | 24.9 | 17.2 | 11.9 | 11.2 |
| | r² | 0.880 | 0.942 | 0.939 | 0.952 |
| | Relative MPE (%) | −0.4 | 0.8 | 0.0 | 3.0 |
| | Number of MPE out of the ±20% interval, n (%) | 50 (6.6%) | 26 (10.3%) | 5 (4.4%) | 7 (7.4%) |
| 1003 simulated PK profiles | RMSE, μg*h/L | 18.5 | 19.0 | 18.6 | 11.9 |
| | Normalized RMSE (%) | 12.6 | 12.8 | 17.8 | 11.2 |
| | r² | 0.970 | 0.970 | 0.919 | 0.952 |
| | Relative MPE (%) | 1.7 | 1.6 | 9.4 | 3.0 |
| | Number of MPE out of the ±20% interval, n (%) | 39 (5.2%) | 13 (5.2%) | 22 (19.3%) | 7 (7.4%) |
| 1003 simulated + 508 patient PK profiles | RMSE, μg*h/L | 19.2 | 10.7 | 14.1 | 11.9 |
| | Normalized RMSE (%) | 13.6 | 7.8 | 13.4 | 11.2 |
| | r² | 0.967 | 0.986 | 0.924 | 0.952 |
| | Relative MPE (%) | 0.5 | 0.0 | 1.2 | 3.0 |
| | Number of MPE out of the ±20% interval, n (%) | 50 (4.4%) | 17 (4.5%) | 8 (7.0%) | 7 (7.4%) |
| 2508 simulated PK profiles | RMSE, μg*h/L | 13.4 | 15.1 | 11.4 | 11.9 |
| | Normalized RMSE (%) | 8.6 | 10.2 | 10.9 | 11.2 |
| | r² | 0.987 | 0.982 | 0.951 | 0.952 |
| | Relative MPE (%) | 0.1 | 0.1 | 1.4 | 3.0 |
| | Number of MPE out of the ±20% interval, n (%) | 8 (0.4%) | 4 (0.6%) | 8 (7.0%) | 7 (7.4%) |
| 2508 simulated + 508 patient PK profiles | RMSE, μg*h/L | 12.5 | 14.7 | 12.2 | 11.9 |
| | Normalized RMSE (%) | 8.5 | 10.2 | 11.7 | 11.2 |
| | r² | 0.987 | 0.981 | 0.942 | 0.952 |
| | Relative MPE (%) | 0.3 | 0.2 | 2.2 | 3.0 |
| | Number of MPE out of the ±20% interval, n (%) | 39 (1.7%) | 17 (2.3%) | 7 (6.1%) | 7 (7.4%) |
| 5016 simulated PK profiles | RMSE, μg*h/L | 14.1 | 11.2 | 10.8 | 11.9 |
| | Normalized RMSE (%) | 9.3 | 7.3 | 10.3 | 11.2 |
| | r² | 0.985 | 0.990 | 0.956 | 0.952 |
| | Relative MPE (%) | 0.1 | 0.1 | 1.6 | 3.0 |
| | Number of MPE out of the ±20% interval, n (%) | 9 (0.2%) | 2 (0.2%) | 7 (6.1%) | 7 (7.4%) |
| 5016 simulated + 508 patient PK profiles | RMSE, μg*h/L | 11.1 | 9.2 | 12.7 | 11.9 |
| | Normalized RMSE (%) | 7.4 | 6.4 | 12.1 | 11.2 |
| | r² | 0.990 | 0.992 | 0.939 | 0.952 |
| | Relative MPE (%) | 0.2 | 0.3 | 2.7 | 3.0 |
| | Number of MPE out of the ±20% interval, n (%) | 44 (1.1%) | 16 (1.2%) | 6 (5.3%) | 7 (7.4%) |
| 10,035 simulated PK profiles | RMSE, μg*h/L | 7.6 | 6.7 | 12.6 | 11.9 |
| | Normalized RMSE (%) | 5.0 | 4.3 | 12.1 | 11.2 |
| | r² | 0.996 | 0.997 | 0.942 | 0.952 |
| | Relative MPE (%) | 0.0 | 0.0 | −1.2 | 3.0 |
| | Number of MPE out of the ±20% interval, n (%) | 3 (0.0%) | 0 (0.0%) | 7 (6.1%) | 7 (7.4%) |
| 10,035 simulated + 508 patient PK profiles | RMSE, μg*h/L | 7.6 | 7.5 | 13.7 | 11.9 |
| | Normalized RMSE (%) | 5.0 | 4.9 | 13.1 | 11.2 |
| | r² | 0.996 | 0.996 | 0.929 | 0.952 |
| | Relative MPE (%) | 0.1 | 0.1 | 2.6 | 3.0 |
| | Number of MPE out of the ±20% interval, n (%) | 41 (0.5%) | 13 (0.5%) | 9 (7.9%) | 7 (7.4%) |
Note: The performance of the MAP‐BE actually used in the online ISBA expert system is displayed here, in the last column of the table, for comparison purposes.
Abbreviations: AUC, area under the curve; ISBA, Immunosuppressant Bayesian Dose Adjustment; MAP‐BE, maximum a posteriori Bayesian estimation; MPE, mean prediction error; Normalized RMSE, root mean square error divided by the mean of reference AUCs; PK, pharmacokinetic; RMSE, root mean square error; XGBoost, extreme gradient boosting, an optimized gradient boosting machine learning method.
For 20 profiles, MAP‐BE could not be used because the morning dose was missing.
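As stated in the feature table, the reference AUC0-12h values in the external validation set were computed from full 10-12-sample profiles using the trapezoidal rule. A minimal linear-trapezoidal sketch (the function name and sample data are illustrative, not from the paper):

```python
def auc_trapezoid(times_h, conc):
    """Linear trapezoidal AUC over the sampled interval (μg*h/L).

    times_h : sampling times in hours, strictly increasing
    conc    : whole-blood concentrations (μg/L) at those times
    """
    if len(times_h) != len(conc) or len(times_h) < 2:
        raise ValueError("need matched time/concentration series of length >= 2")
    return sum(
        (t2 - t1) * (c1 + c2) / 2.0
        for t1, t2, c1, c2 in zip(times_h, times_h[1:], conc, conc[1:])
    )
```

For example, a profile sampled at 0, 6, and 12 h with concentrations 5, 10, and 5 μg/L gives an AUC0-12h of 90 μg*h/L.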
FIGURE 1. Plot of everolimus AUC0‐12h prediction RMSE in the training (blue) and external validation (orange) datasets, according to the number of simulations used to train the XGBoost algorithm. Points represent the performance of each XGBoost model; lines are a smoothed representation of the trends. AUC0–12h, 0–12‐h area under the concentration‐time curve; RMSE, root mean square error; XGBoost, extreme gradient boosting, an optimized gradient boosting machine learning method.
FIGURE 2. Scatter plots and residual plots of machine-learning-predicted versus reference everolimus AUC0‐12h in the external validation dataset. The thin black line represents y = x. The colored lines were obtained by linear regression for each version of the XGBoost algorithm: in green, the model trained on patient data only (n = 508); in blue, the model trained on a balanced mix of patient (n = 508) and simulated (n = 500) data; in purple, the best model, trained on simulated data only (n = 5016); in red, the MAP‐BE currently available through our online expert system ISBA. AUC, area under the curve; ISBA, Immunosuppressant Bayesian Dose Adjustment; MAP‐BE, maximum a posteriori Bayesian estimation; XGBoost, extreme gradient boosting, an optimized gradient boosting machine learning method.