Georgios Feretzakis1,2,3, Aikaterini Sakagianni4, Evangelos Loupelis2, Dimitris Kalles1, Nikoletta Skarmoutsou5, Maria Martsoukou5, Constantinos Christopoulos6, Malvina Lada6, Stavroula Petropoulou2, Aikaterini Velentza5, Sophia Michelidou4, Rea Chatzikyriakou7, Evangelos Dimitrellos6.
Abstract
OBJECTIVE: In the era of increasing antimicrobial resistance, early identification and prompt treatment of multi-drug-resistant infections are crucial for achieving favorable outcomes in critically ill patients. Because traditional microbiological susceptibility testing requires at least 24 hours, automated machine learning (AutoML) techniques could serve as clinical decision support tools to predict antimicrobial resistance and guide the selection of empirical antibiotic treatment.
Keywords: Anti-Bacterial Agents; Artificial Intelligence; Drug Resistance; Machine Learning; Supervised Machine Learning
Year: 2021 PMID: 34384203 PMCID: PMC8369050 DOI: 10.4258/hir.2021.27.3.214
Source DB: PubMed Journal: Healthc Inform Res ISSN: 2093-3681
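As a hedged sketch of the prediction task the abstract describes (resistant vs. sensitive classification from routine laboratory data), the snippet below trains a gradient-boosted classifier on synthetic features mirroring the dataset summary (age, sex, Gram stain, sample type). The feature encoding and model are illustrative assumptions, not the authors' AutoML pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
# Hypothetical features mirroring the dataset summary: age, sex,
# Gram stain, and a coded sample type (blood/tissue/urine/...).
X = np.column_stack([
    rng.normal(78.65, 14.94, n),         # age (mean +/- SD from the table)
    rng.integers(0, 2, n),               # sex (0 = female, 1 = male)
    rng.random(n) < 0.2013,              # Gram stain positive (~20%)
    rng.integers(0, 6, n),               # sample-type code
]).astype(float)
y = (rng.random(n) < 0.3332).astype(int)  # resistant (~33%) vs. sensitive

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)[:, 1]    # P(resistant), usable for AUC-style metrics
```

In practice the predicted probability, not the hard label, would feed a decision-support threshold tuned to the clinical cost of missing a resistant isolate.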
Summary statistics of the dataset
| Characteristic | Value or proportion (%) |
|---|---|
| Age, mean ± SD | 78.65 ± 14.94 |
| Age, median (range) | 82 (19–101) |
| Sex | |
| Male | 44 |
| Female | 56 |
| Gram stain | |
| Positive | 20.13 |
| Negative | 79.87 |
| Class | |
| Resistant | 33.32 |
| Sensitive | 66.68 |
| Type of samples | |
| Blood | 19.05 |
| Tissue | 16.08 |
| Catheters | 2.30 |
| Sputum | 2.41 |
| Tracheobronchial | 9.86 |
| Urine | 50.30 |
Data are expressed as mean ± standard deviation and median (range).
Figure 1The three-step proposed process. AutoML: automated machine learning, LIS: laboratory information system.
Four indicative metrics in the four top-performing AutoML models (raw dataset)
| Algorithm name | AUCW | APSW | F1W | ACC |
|---|---|---|---|---|
| StackEnsemble | 0.822 | 0.834 | 0.761 | 0.770 |
| VotingEnsemble | 0.821 | 0.834 | 0.755 | 0.767 |
| MaxAbsScaler, LightGBM | 0.819 | 0.831 | 0.756 | 0.766 |
| SparseNormalizer, XGBoostClassifier | 0.812 | 0.826 | 0.749 | 0.760 |
AutoML: automated machine learning, AUCW: area under the curve-weighted, APSW: average precision score-weighted, F1W: F1 score-weighted, ACC: accuracy.
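The four weighted metrics reported in the tables can be reproduced with scikit-learn; the labels and probabilities below are toy values for illustration, not the study's data (for binary targets scikit-learn ignores the `average` argument in the rank-based scores, but it is shown for consistency with the weighted naming):

```python
import numpy as np
from sklearn.metrics import (roc_auc_score, average_precision_score,
                             f1_score, accuracy_score)

# Toy predictions: true labels (1 = resistant) and predicted probabilities.
y_true = np.array([0, 0, 0, 0, 1, 1, 0, 1, 0, 0])
y_prob = np.array([0.1, 0.3, 0.2, 0.4, 0.8, 0.45, 0.55, 0.9, 0.15, 0.35])
y_pred = (y_prob >= 0.5).astype(int)   # hard labels at a 0.5 threshold

metrics = {
    "AUCW": roc_auc_score(y_true, y_prob, average="weighted"),
    "APSW": average_precision_score(y_true, y_prob, average="weighted"),
    "F1W":  f1_score(y_true, y_pred, average="weighted"),
    "ACC":  accuracy_score(y_true, y_pred),
}
```

Note that AUCW and APSW are computed from probabilities (threshold-free), while F1W and ACC depend on the chosen decision threshold.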
Figure 2Confusion matrix for the stack ensemble technique.
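A stack ensemble like the top-ranked model can be approximated with scikit-learn's `StackingClassifier`: base learners are fitted with cross-validation and a meta-learner combines their out-of-fold predictions. The base estimators, meta-learner, and synthetic data below are illustrative assumptions, not the AutoML-selected configuration:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import (StackingClassifier, RandomForestClassifier,
                              GradientBoostingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the susceptibility dataset (~33% resistant class).
X, y = make_classification(n_samples=400, n_features=8,
                           weights=[0.67, 0.33], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
        ("gb", GradientBoostingClassifier(random_state=0)),
    ],
    final_estimator=LogisticRegression(),  # meta-learner over base predictions
    cv=5,                                  # out-of-fold predictions avoid leakage
)
stack.fit(X_tr, y_tr)
acc = stack.score(X_te, y_te)
```

The `cv=5` setting is what distinguishes stacking from a simple voting ensemble: the meta-learner never sees predictions made on the data a base learner was trained on.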
Four indicative metrics of the four top-performing AutoML models (balanced dataset - SMOTE)
| Algorithm name | AUCW | APSW | F1W | ACC |
|---|---|---|---|---|
| StackEnsemble | 0.850 | 0.849 | 0.769 | 0.769 |
| VotingEnsemble | 0.850 | 0.849 | 0.768 | 0.768 |
| SparseNormalizer, XGBoostClassifier | 0.842 | 0.841 | 0.762 | 0.762 |
| SparseNormalizer, LightGBM | 0.837 | 0.835 | 0.756 | 0.756 |
AutoML: automated machine learning, SMOTE: Synthetic minority oversampling technique, AUCW: area under the curve-weighted, APSW: average precision score-weighted, F1W: F1 score-weighted, ACC: accuracy.
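SMOTE balances the classes by synthesizing new minority (resistant) samples along line segments between a minority point and one of its nearest minority neighbors. A minimal sketch of the idea, using only NumPy and scikit-learn (the original study's tooling may differ; the `imbalanced-learn` library provides a production implementation):

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def smote_oversample(X, y, minority=1, k=5, seed=0):
    """Minimal SMOTE: create synthetic minority samples by interpolating
    between a minority point and one of its k nearest minority neighbors."""
    rng = np.random.default_rng(seed)
    X_min = X[y == minority]
    n_new = int((y != minority).sum() - X_min.shape[0])  # samples to reach balance
    nn = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    _, idx = nn.kneighbors(X_min)          # idx[:, 0] is each point itself
    base = rng.integers(0, X_min.shape[0], n_new)
    neigh = idx[base, rng.integers(1, k + 1, n_new)]
    lam = rng.random((n_new, 1))           # interpolation weight in [0, 1)
    X_syn = X_min[base] + lam * (X_min[neigh] - X_min[base])
    return (np.vstack([X, X_syn]),
            np.concatenate([y, np.full(n_new, minority)]))
```

With a roughly 33%/67% split as in this dataset, the call roughly doubles the minority class so both classes contribute equally during training.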
Figure 3Confusion matrix for the stack ensemble technique (balanced dataset).
Figure 4Performance metrics of the two stack ensemble models. AUC: area under the curve, APSW: average precision score-weighted, F1W: F1 score-weighted, ACC: accuracy.