| Literature DB >> 36088288 |
Xiran Peng1,2, Tao Zhu1,2, Tong Wang3,4, Fengjun Wang3,4, Ke Li5,6, Xuechao Hao7,8.
Abstract
BACKGROUND: Postoperative major adverse cardiovascular events (MACEs) account for more than one-third of perioperative deaths. Geriatric patients are more vulnerable to postoperative MACEs than younger patients. Identifying high-risk patients in advance can help with clinical decision making and improve prognosis. This study aimed to develop a machine learning model for the preoperative prediction of postoperative MACEs in geriatric patients.Entities:
Keywords: Electronic health records; Geriatric assessment; Machine learning; Postoperative major adverse cardiovascular events; Risk assessment
Year: 2022 PMID: 36088288 PMCID: PMC9463850 DOI: 10.1186/s12871-022-01827-x
Source DB: PubMed Journal: BMC Anesthesiol ISSN: 1471-2253 Impact factor: 2.376
Preoperative assessment rule of postoperative MACEs
| Risk factor | Points |
|---|---|
| History of ischaemic heart disease | 1 |
| History of congestive heart failure | 1 |
| History of cerebrovascular disease | 1 |
| Preoperative serum creatinine ≥ 177 μmol/L | 1 |
| High-risk surgery^a | 1 |
| Insulin dependent diabetes mellitus | 1 |
| 300 ng/L < BNP ≤ 6000 ng/L | 1 |
| BNP > 6000 ng/L | 2 |
In our hospital, anesthetists used this rule to estimate each patient's risk of postoperative MACEs during the preoperative interview. Patients were divided into risk bands according to the following criteria: low risk, total points = 0; intermediate risk, 0 < total points < 3; high risk, total points ≥ 3. Abbreviations: MACEs, major adverse cardiovascular events; BNP, B-type natriuretic peptide
^a Major vascular surgery, cardiac surgery
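The scoring rule above can be expressed as a short function. This is an illustrative sketch of the table's point system and risk bands; the function and parameter names are ours, not the study's:

```python
def mace_risk(ischaemic_heart_disease: bool,
              congestive_heart_failure: bool,
              cerebrovascular_disease: bool,
              creatinine_umol_l: float,
              high_risk_surgery: bool,
              insulin_dependent_diabetes: bool,
              bnp_ng_l: float) -> tuple:
    """Total the points from the preoperative rule and map them to a risk band."""
    # Each history/lab item in the table contributes one point
    points = sum([
        ischaemic_heart_disease,
        congestive_heart_failure,
        cerebrovascular_disease,
        creatinine_umol_l >= 177,
        high_risk_surgery,
        insulin_dependent_diabetes,
    ])
    # BNP contributes 1 point for 300 < BNP <= 6000 ng/L, 2 points above 6000 ng/L
    if bnp_ng_l > 6000:
        points += 2
    elif bnp_ng_l > 300:
        points += 1
    # Risk bands: 0 -> low, 1-2 -> intermediate, >= 3 -> high
    if points == 0:
        band = "low"
    elif points < 3:
        band = "intermediate"
    else:
        band = "high"
    return points, band
```

For example, a patient with ischaemic heart disease, heart failure, creatinine 200 μmol/L, and BNP 7000 ng/L totals 5 points and falls in the high-risk band.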
Fig. 1 Performance characteristic curves of candidate models. (a) Receiver operating characteristic curves of each candidate model. (b) Precision-recall curves of each candidate model. This figure shows the performance characteristic curves of candidate models trained by extreme gradient boosting, gradient boosting machine, random forest, support vector machine, and Elastic Net logistic regression
Performance metrics of candidate models
| Model | AUROC (95% CI) | AUPRC (95% CI) | Brier score (95% CI) |
|---|---|---|---|
| Extreme Gradient Boosting | 0.870(0.786–0.938) | 0.404(0.219–0.589) | 0.024(0.016–0.032) |
| Gradient Boosting Machine | 0.862(0.781–0.928) | 0.287(0.133–0.431) | 0.030(0.024–0.037) |
| Random forest | 0.888(0.804–0.951) | 0.305(0.151–0.481) | 0.065(0.060–0.072) |
| Support vector machine | 0.856(0.769–0.929) | 0.247(0.111–0.414) | 0.024(0.016–0.032) |
| Elastic Net logistic regression | 0.857(0.775–0.925) | 0.298(0.139–0.482) | 0.105(0.079–0.139) |
Performance metrics of models trained by extreme gradient boosting, gradient boosting machine, random forest, support vector machine, and Elastic Net logistic regression. Abbreviations: AUROC, area under the receiver operating characteristic curve; CI, confidence interval; AUPRC, area under the precision-recall curve
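The AUROC and Brier score columns in the tables can be reproduced from predicted probabilities and observed outcomes. A minimal NumPy sketch with made-up illustrative data (not the study's data; in practice a library such as scikit-learn would typically be used):

```python
import numpy as np

def brier_score(y_true, y_prob):
    """Brier score: mean squared error between predicted probability and outcome."""
    y_true, y_prob = np.asarray(y_true, float), np.asarray(y_prob, float)
    return float(np.mean((y_prob - y_true) ** 2))

def auroc(y_true, y_prob):
    """AUROC as the probability a random positive outranks a random negative (ties count 0.5)."""
    y_true, y_prob = np.asarray(y_true), np.asarray(y_prob)
    pos, neg = y_prob[y_true == 1], y_prob[y_true == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return float(wins / (pos.size * neg.size))

# Illustrative labels and predicted probabilities, not the study's data
y = [0, 0, 0, 1, 0, 1, 0, 0, 1, 0]
p = [0.1, 0.2, 0.05, 0.8, 0.3, 0.7, 0.15, 0.1, 0.9, 0.2]
print(auroc(y, p), brier_score(y, p))  # ranking is perfect here, so AUROC = 1.0
```

Lower Brier scores indicate better-calibrated probabilities, which is why the table reports it alongside the discrimination metrics.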
Performance of the original model compared with the undersampling model
| Performance metric | Original model | Undersampling model | P value |
|---|---|---|---|
| AUROC(95% CI) | 0.870(0.786–0.938) | 0.912(0.847–0.962) | < 0.001 |
| AUPRC(95% CI) | 0.404(0.219–0.589) | 0.511(0.344–0.667) | < 0.001 |
| Brier score | 0.024(0.016–0.032) | 0.020(0.013–0.028) | < 0.001 |
Abbreviations: AUROC, area under the receiver operating characteristic curve; CI, confidence interval; AUPRC, area under the precision-recall curve
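The undersampling model in the comparison above is built by rebalancing the training set, since MACEs are rare. A sketch of random majority-class undersampling (our hypothetical helper; the study's exact resampling procedure and ratio are not shown here, and a 1:1 ratio is assumed by default):

```python
import numpy as np

def undersample(X, y, ratio=1.0, seed=0):
    """Randomly drop majority-class (y == 0) rows until
    n_majority == ratio * n_minority, then return the rebalanced subset."""
    rng = np.random.default_rng(seed)
    X, y = np.asarray(X), np.asarray(y)
    minority = np.flatnonzero(y == 1)   # MACE cases (rare class)
    majority = np.flatnonzero(y == 0)   # non-MACE cases
    keep = rng.choice(majority, size=int(ratio * minority.size), replace=False)
    idx = np.concatenate([minority, keep])
    rng.shuffle(idx)
    return X[idx], y[idx]
```

Undersampling is applied only to the training data; evaluation still uses the original class distribution, which is why AUPRC remains informative.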
Performance of the undersampling model compared with the reduced undersampling model
| Performance metric | Undersampling model | Reduced undersampling model | P value |
|---|---|---|---|
| AUROC(95% CI) | 0.912(0.847–0.962) | 0.896(0.826–0.953) | < 0.001 |
| AUPRC(95% CI) | 0.511(0.344–0.667) | 0.507(0.338–0.669) | 0.36 |
| Brier score | 0.020(0.013–0.028) | 0.020(0.013–0.028) | 0.20 |
Abbreviations: AUROC, area under the receiver operating characteristic curve; CI, confidence interval; AUPRC, area under the precision-recall curve
Fig. 2 Importance matrix plot of the reduced undersampling XGB model. This figure shows the ten most important variables in the reduced undersampling XGB model. Abbreviations: XGB, extreme gradient boosting; NYHA, New York Heart Association