| Literature DB >> 35018571 |
Ursula W de Ruijter1,2, Z L Rana Kaplan3, Wichor M Bramer4, Frank Eijkenaar5, Daan Nieboer1, Agnes van der Heide6, Hester F Lingsma1, Willem A Bax2.
Abstract
BACKGROUND: In an effort to improve both quality of care and cost-effectiveness, various care-management programmes have been developed for high-need high-cost (HNHC) patients. Early identification of patients at risk of becoming HNHC (i.e. case finding) is crucial to a programme's success. We aim to systematically identify prediction models predicting future HNHC healthcare use in adults, to describe their predictive performance, and to assess their applicability.
Keywords: health expenditures; managed care programmes; meaningful use; patient care management; prognosis
Year: 2022 PMID: 35018571 PMCID: PMC9130365 DOI: 10.1007/s11606-021-07333-z
Source DB: PubMed Journal: J Gen Intern Med ISSN: 0884-8734 Impact factor: 6.473
Summary of Development, Validation and Extension Studies of Identified Models Predicting High-Need High-Cost Healthcare Use
| | Development only, with or without internal validation | Development and external validation | External validation only |
|---|---|---|---|
| Number of models, mean (range per study) | 5 (1–21) | 6 (1–33) | 6 (1–20) |
| Prospective data, no. of studies (%) | 14 (35%) | 3 (20%) | 3 (60%) |
| Study population | |||
| Development cohort, median (IQR) | 18,065 (3255–191,758) | 36,316 (7948–104,500) | – |
| Validation cohort, median (IQR) | 21,431 (3915–164,738) | 14,798 (7237–122,727) | 83,187 (10,504–11,684,427) |
| Population only > 65 years of age, no. of studies (%) | 9 (23%) | 2 (13%) | 0 (0%) |
| Population setting | |||
| General population, no. of studies (%) | 16 (40%) | 9 (60%) | 4 (80%) |
| Medicare or Medicaid, no. of studies (%) | 8 (20%) | 3 (20%) | 0 (0%) |
| Primary care, no. of studies (%) | 8 (20%) | 0 (0%) | 1 (20%) |
| Other*, no. of studies (%) | 8 (20%) | 3 (20%) | 0 (0%) |
| Source of predictor data† | |||
| Insurance claims data, no. of studies (%) | 24 (60%) | 8 (53%) | 4 (80%) |
| Survey data, no. of studies (%) | 20 (50%) | 8 (53%) | 1 (20%) |
| Electronic health records, no. of studies (%) | 12 (30%) | 4 (27%) | 4 (80%) |
| Other‡, no. of studies (%) | 4 (10%) | 3 (20%) | 1 (20%) |
| Outcome | |||
| Prediction timespan, mean (range), months | 13 (1–60) | 16 (6–60) | 22 (12–60) |
| Prediction timespan beyond 12 months, no. of studies (%) | 2 (5%) | 2 (13%) | 1 (20%) |
IQR, interquartile range; HNHC, high-need high-cost healthcare use
*For example, hospital inpatient care, Veterans Affairs (VA) healthcare services, care homes
†As some studies use predictors from more than one data source, totals may add up to > 100%
‡For example, death registries, Veterans Affairs (VA) healthcare service, clinical laboratory database, pharmacy claims data
Summary of Model Characteristics
| | Development only, with or without internal validation | Development and external validation | External validation only |
|---|---|---|---|
| Modelling strategy* | | | |
| Regression analysis, no. of studies (%) | 31 (78%) | 13 (87%) | 3 (60%) |
| Artificial intelligence, no. of studies (%) | 9 (23%) | 5 (33%) | 0 (0%) |
| Other†, no. of studies (%) | 4 (10%) | 0 (0%) | 2 (40%) |
| Internal validation | | | |
| Split sample, no. of studies (%) | 17 (43%) | 1 (7%) | – |
| Bootstrapping or cross-validation, no. of studies (%) | 9 (23%) | 1 (7%) | – |
| None, no. of studies (%) | 14 (35%) | 13 (87%) | – |
| Explained variance, no. of studies (%) | 21 (53%) | 5 (33%) | 2 (40%) |
| Discrimination‡ | 24 (60%) | 10 (67%) | 4 (80%) |
| C-statistic/AUC, no. of studies (%) | 22 (55%) | 10 (67%) | 4 (80%) |
| Discrimination slope, no. of studies (%) | 2 (5%) | 0 (0%) | 0 (0%) |
| Other§, no. of studies (%) | 2 (5%) | 0 (0%) | 1 (20%) |
| Calibration‖ | 9 (23%) | 6 (40%) | 4 (80%) |
| Goodness of fit, no. of studies (%) | 4 (10%) | 3 (20%) | 1 (20%) |
| Calibration plot, no. of studies (%) | 0 (0%) | 3 (20%) | 4 (80%) |
| Other¶, no. of studies (%) | 5 (13%) | 4 (27%) | 1 (20%) |
| Classification | |||
| Sensitivity/specificity, no. of studies (%) | 14 (35%) | 7 (47%) | 1 (20%) |
| Clinical usefulness | 1 (3%) | 1 (7%) | 0 (0%) |
| Net reclassification index, no. of studies (%) | 1 (3%) | 0 (0%) | 0 (0%) |
| Decision curve, no. of studies (%) | 0 (0%) | 1 (7%) | 0 (0%) |
AUC, area under the receiver operating characteristic curve
*As some studies use multiple modelling strategies when presenting multiple models, totals may add up to > 100%
†For example, risk stratification with predefined risk tiers
‡As some studies use multiple measures of discrimination, totals may add up to > 100%
§For example, D-statistic, Brier score, Integrated Discrimination Improvement (IDI)
‖As some studies use multiple measures of calibration, totals may add up to > 100%
¶For example, calibration slope, root mean square of approximation (RMSEA), cost capture
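The classification measures tallied above (sensitivity/specificity) describe how well a model flags future HNHC patients once a risk threshold is chosen. A minimal sketch, using toy data and a hypothetical threshold of 0.5 (none of these values come from the review):

```python
# Sketch of the classification measures in the table above: sensitivity and
# specificity of a model that flags a patient as future HNHC when the
# predicted risk meets or exceeds a chosen threshold.

def sensitivity_specificity(y_true, y_score, threshold):
    """Sensitivity = flagged HNHC patients / all HNHC patients;
    specificity = unflagged non-HNHC patients / all non-HNHC patients."""
    tp = sum(1 for y, s in zip(y_true, y_score) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(y_true, y_score) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(y_true, y_score) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(y_true, y_score) if y == 0 and s >= threshold)
    return tp / (tp + fn), tn / (tn + fp)

# Toy data: 1 = became HNHC during follow-up, 0 = did not.
y = [1, 1, 1, 0, 0, 0, 0, 0]
p = [0.8, 0.6, 0.3, 0.7, 0.4, 0.2, 0.1, 0.1]
sens, spec = sensitivity_specificity(y, p, threshold=0.5)  # sens = 2/3, spec = 0.8
```

Lowering the threshold trades specificity for sensitivity, which is why studies typically report the pair at one or more operating points rather than a single number.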
Figure 1. Prediction model Risk Of Bias Assessment Tool (PROBAST) results on risk of bias and concern for applicability in identified models for predicting high-need high-cost healthcare use. (a) Risk of bias: assessment of whether shortcomings in study design, conduct, or analysis could lead to systematically distorted estimates of a model's predictive performance. (b) Concern for applicability: assessment of whether the population, predictors, or outcomes of the primary study differ from those specified in the review question.
Figure 2. Scatter plot of model performance, indicated by models' discriminative ability to distinguish those with from those without the outcome (expressed as C-statistic; X-axis), vs. models' expected performance in new patients, indicated by risk of overfitting to the development data (expressed as natural log of EPV; Y-axis) and risk of bias (ROB). (a) X-axis: C-statistic. (b) Y-axis: natural log of the number of events per variable. (c) Horizontal blue line: natural log of EPV 20 (3.0); an EPV ≥ 20 implies minimal risk of overfitting.[26,27] (d) Vertical blue line: C-statistic 0.7; a C-statistic ≥ 0.7 implies good discrimination.[23–25] (e) Risk of bias, as assessed with the Prediction model Risk Of Bias Assessment Tool (PROBAST).[28] U = outcome based on utilization; C = outcome based on cost; EPV = events per variable; ROB = risk of bias; ED = emergency department.
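The two quantities on the axes of Figure 2 can be computed directly. A minimal sketch on toy data (all values hypothetical, not taken from the review): the C-statistic via pairwise concordance, and EPV as events divided by candidate predictors.

```python
# Sketch of the two quantities plotted in Figure 2: the C-statistic (AUC)
# and events per variable (EPV).

def c_statistic(y_true, y_score):
    """Concordance (C-)statistic: the probability that a randomly chosen
    event case receives a higher predicted risk than a randomly chosen
    non-event case; ties count as 0.5."""
    events = [s for y, s in zip(y_true, y_score) if y == 1]
    non_events = [s for y, s in zip(y_true, y_score) if y == 0]
    pairs = concordant = 0
    for e in events:
        for n in non_events:
            pairs += 1
            if e > n:
                concordant += 1
            elif e == n:
                concordant += 0.5
    return concordant / pairs

def events_per_variable(n_events, n_candidate_predictors):
    """EPV: outcome events divided by candidate predictors; the caption's
    rule of thumb flags EPV < 20 as a potential overfitting risk."""
    return n_events / n_candidate_predictors

# Toy data: 1 = HNHC outcome observed, 0 = not observed.
y = [1, 0, 1, 0, 0, 1, 0, 0]
p = [0.9, 0.2, 0.7, 0.4, 0.1, 0.35, 0.3, 0.5]
auc = c_statistic(y, p)                 # 13/15, i.e. about 0.87
epv = events_per_variable(300, 10)      # 30.0, above the EPV >= 20 threshold
```

By the caption's thresholds, this hypothetical model would land in the favourable quadrant of Figure 2: C-statistic ≥ 0.7 and ln(EPV) above ln(20) ≈ 3.0.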