Deepika Verma1, Duncan Jansen2, Kerstin Bach3, Mannes Poel2, Paul Jarle Mork4, Wendy Oude Nijeweme d'Hollosy2,5.
Abstract
BACKGROUND: Patient-reported outcome measurements (PROMs) are commonly used in clinical practice to support clinical decision making. However, few studies have investigated machine learning methods for predicting PROM outcomes and thereby supporting clinical decision making.
Keywords: Low-back pain; Machine learning; Neck pain; Outcome prediction; Patient-reported outcomes; Self-reported measures
Year: 2022 PMID: 36050726 PMCID: PMC9434943 DOI: 10.1186/s12911-022-01973-9
Source DB: PubMed Journal: BMC Med Inform Decis Mak ISSN: 1472-6947 Impact factor: 3.298
Fig. 1Overview of data collection in the selfBACK randomized controlled trial. The different data components are indicated by the orange boxes
Fig. 2Overview of the assessment moments with questionnaires at the pain rehabilitation centre RCR, Enschede, the Netherlands. MBR: Multidisciplinary Biopsychosocial Rehabilitation
The PROMs included in Dataset 2

| Subscales | | |
|---|---|---|
| Anxiety | Depression | Total score |
| Pain severity | Interference | Life control |
| Affective distress | Solicitous responses | Distracting responses |
| Punishing responses | Support | Household chores |
| Outdoor work | Social activities | General activities |
| Total score | | |
| Avoidance | Cognitive fusion | Total score |
| Physical functioning | Role limitations | Vitality |
| Mental health | | |
Referral combinations the classification algorithms were trained on
| Model | Class A | Class B | # of cases |
|---|---|---|---|
| 1 | Clinic RCR | Polyclinic RCR | 529 |
| 2 | Clinic RCR | Reject | 606 |
| 3 | Polyclinic RCR | Reject | 665 |
| 4 | Polyclinic RMCR | Clinic RCR | 375 |
| 5 | Polyclinic RMCR | Polyclinic RCR | 434 |
| 6 | Polyclinic RMCR | Reject | 511 |
Fig. 3The workflow of the Machine Learning pipeline used in this study
Impurity-based feature selection using Random Forest for predicting (a) and (b)
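The impurity-based selection step named above can be sketched roughly as follows: fit a Random Forest, read the mean-decrease-in-impurity importances, and keep the top-ranked features. This is a minimal illustration with synthetic data; the feature count and the top-k cutoff are arbitrary assumptions, not the study's PROM variables or settings.

```python
# Sketch of impurity-based feature selection with a Random Forest.
# Synthetic data stands in for the study's PROM feature matrix.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=200, n_features=10, n_informative=4,
                       random_state=0)

forest = RandomForestRegressor(n_estimators=100, random_state=0)
forest.fit(X, y)

# Mean decrease in impurity per feature; rank and keep the top k.
importances = forest.feature_importances_
top_k = 5
selected = np.argsort(importances)[::-1][:top_k]
print("Selected feature indices:", selected)
```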
| Model | MAE ± SD | MR | | Model | MAE ± SD | MR | |
|---|---|---|---|---|---|---|---|
| LR | 1.54 ± 1.18 | 0.25 | 0.050 | LR | 1.16 ± 1.12 | 0.27 | 0.003 |
| PAR | 1.54 ± 1.19 | 0.25 | −0.087 | PAR | 1.10 ± 1.14 | 0.28 | −0.288 |
| SGDR | 1.55 ± 1.17 | 0.25 | 0.143 | SGDR | 1.10 ± 1.13 | 0.29 | −0.243 |
| RFR | 1.57 ± 1.13 | 0.25 | 0.199 | − | − | − | − |
| ABR | 1.60 ± 1.14 | 0.23 | 0.0 | ABR | 1.21 ± 1.20 | 0.18 | −0.090 |
| SVR | − | − | − | SVR | 1.11 ± 1.15 | 0.27 | −0.221 |
The best-performing models are highlighted in bold letters
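The regression comparison above rests on cross-validated mean absolute error (MAE) for learners such as LR, SGDR, and RFR. A minimal sketch of that evaluation, assuming standard scikit-learn estimators and synthetic data in place of the study's questionnaire features:

```python
# Hedged sketch of the MAE-based model comparison: 5-fold cross-validated
# mean absolute error for three of the listed learners. Data are synthetic.
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression, SGDRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_regression(n_samples=300, n_features=8, noise=10.0, random_state=1)

models = {
    "LR": LinearRegression(),
    # SGD needs scaled inputs to behave; a pipeline handles that per fold.
    "SGDR": make_pipeline(StandardScaler(), SGDRegressor(random_state=1)),
    "RFR": RandomForestRegressor(n_estimators=50, random_state=1),
}

results = {}
for name, model in models.items():
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error")
    results[name] = (mae.mean(), mae.std())
    print(f"{name}: MAE {mae.mean():.2f} ± {mae.std():.2f}")
```

The ± figures in the table would correspond to the per-fold standard deviation computed here.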
Results for the Balanced Random Forest (RF) classifier (± standard deviation)
| Model | Train MCC | Test MCC | Test BAC | Test SEN | Test SPE |
|---|---|---|---|---|---|
| Model 1: C-P | 0.22 ± 0.02 | 0.14 ± 0.08 | 0.56 ± 0.04 | 0.66 ± 0.13 | 0.47 ± 0.17 |
| Model 2: C-R | 0.26 ± 0.01 | 0.20 ± 0.08 | 0.60 ± 0.04 | 0.73 ± 0.11 | 0.47 ± 0.05 |
| Model 3: P-R | 0.22 ± 0.03 | 0.19 ± 0.06 | 0.60 ± 0.03 | 0.59 ± 0.06 | 0.61 ± 0.02 |
| Model 4: M-C | 0.54 ± 0.01 | 0.46 ± 0.05 | 0.73 ± 0.03 | 0.86 ± 0.13 | 0.59 ± 0.11 |
| Model 5: M-P | 0.42 ± 0.02 | 0.42 ± 0.05 | 0.70 ± 0.03 | 0.99 ± 0.03 | 0.42 ± 0.07 |
| Model 6: M-R | 0.53 ± 0.01 | 0.49 ± 0.06 | 0.77 ± 0.03 | 0.98 ± 0.04 | 0.57 ± 0.05 |
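The metrics in this table (MCC, balanced accuracy, sensitivity, specificity) can be reproduced with standard scikit-learn scorers. The study used a Balanced Random Forest (from imbalanced-learn); as a stand-in, the sketch below uses scikit-learn's `class_weight="balanced_subsample"`, which reweights rather than resamples, on synthetic imbalanced data in place of a referral pair such as Model 1 (C-P).

```python
# Sketch of the imbalance-aware evaluation: MCC, balanced accuracy,
# sensitivity, and specificity on a synthetic two-class referral problem.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (balanced_accuracy_score, matthews_corrcoef,
                             recall_score)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=600, weights=[0.7, 0.3], random_state=2)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=2)

clf = RandomForestClassifier(n_estimators=100,
                             class_weight="balanced_subsample",
                             random_state=2).fit(X_tr, y_tr)
pred = clf.predict(X_te)

mcc = matthews_corrcoef(y_te, pred)
bac = balanced_accuracy_score(y_te, pred)
sen = recall_score(y_te, pred, pos_label=1)  # sensitivity: recall of class 1
spe = recall_score(y_te, pred, pos_label=0)  # specificity: recall of class 0
print(f"MCC {mcc:.2f}  BAC {bac:.2f}  SEN {sen:.2f}  SPE {spe:.2f}")
```

Note that for a binary problem, balanced accuracy is exactly the mean of sensitivity and specificity, which is why the BAC column tracks the other two.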
Results for the Random Under Sampling Boosting (RUSBoost) classifier (± standard deviation)

| Model | Train MCC | Test MCC | Test BAC | Test SEN | Test SPE |
|---|---|---|---|---|---|
| Model 1: C-P | 0.22 ± 0.02 | 0.11 ± 0.07 | 0.55 ± 0.03 | 0.72 ± 0.10 | 0.39 ± 0.13 |
| Model 2: C-R | 0.24 ± 0.01 | 0.21 ± 0.08 | 0.60 ± 0.04 | 0.59 ± 0.16 | 0.61 ± 0.10 |
| Model 3: P-R | 0.20 ± 0.02 | 0.19 ± 0.06 | 0.60 ± 0.03 | 0.59 ± 0.06 | 0.61 ± 0.02 |
| Model 4: M-C | 0.55 ± 0.02 | 0.49 ± 0.10 | 0.74 ± 0.05 | 0.94 ± 0.13 | 0.54 ± 0.10 |
| Model 5: M-P | 0.43 ± 0.01 | 0.43 ± 0.05 | 0.71 ± 0.03 | 1.00 ± 0.00 | 0.42 ± 0.07 |
| Model 6: M-R | 0.52 ± 0.01 | 0.50 ± 0.05 | 0.78 ± 0.03 | 0.98 ± 0.03 | 0.57 ± 0.06 |
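RUSBoost combines random undersampling of the majority class with boosting. The study used imbalanced-learn's implementation, which resamples inside every boosting round; the sketch below simplifies to a single undersampling pass followed by AdaBoost, which conveys the idea without the imbalanced-learn dependency. All data and sample sizes are synthetic assumptions.

```python
# Hedged sketch of RUSBoost-style training: undersample the majority class
# to the minority class size, then fit a boosted ensemble on the balanced
# subset. (True RUSBoost re-samples within each boosting iteration.)
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import balanced_accuracy_score, matthews_corrcoef

rng = np.random.default_rng(3)
X, y = make_classification(n_samples=600, weights=[0.75, 0.25], random_state=3)

# Keep all minority cases; draw an equal number of majority cases at random.
minority = np.flatnonzero(y == 1)
majority = np.flatnonzero(y == 0)
keep = np.concatenate([minority,
                       rng.choice(majority, size=minority.size, replace=False)])

clf = AdaBoostClassifier(n_estimators=50, random_state=3).fit(X[keep], y[keep])
pred = clf.predict(X)
print(f"MCC {matthews_corrcoef(y, pred):.2f}  "
      f"BAC {balanced_accuracy_score(y, pred):.2f}")
```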