Johanna A A Damen, Karel G M Moons, Maarten van Smeden, Lotty Hooft.
Abstract
BACKGROUND: Prognostic models are typically developed to estimate the risk that an individual in a particular health state will develop a particular health outcome, to support (shared) decision making. Systematic reviews of prognostic model studies can help identify prognostic models that need to be further validated or are ready to be implemented in healthcare.
Keywords: Meta-analysis; Prediction model; Prognosis; Prognostic model; Systematic review
Year: 2022 PMID: 35934199 PMCID: PMC9351211 DOI: 10.1016/j.cmi.2022.07.019
Source DB: PubMed Journal: Clin Microbiol Infect ISSN: 1198-743X Impact factor: 13.310
Glossary
| Term | Definition |
|---|---|
| Calibration | Agreement between observed outcome risks and the risks predicted by the model. |
| Calibration slope | Slope of the regression line obtained when the observed outcomes are regressed on the model's linear predictor. The calibration slope ideally equals 1. A calibration slope <1 indicates that predictions are too extreme (e.g. low-risk individuals have a predicted risk that is too low, and high-risk individuals are given a predicted risk that is too high). Conversely, a slope >1 indicates that predictions are not extreme enough. |
| Concordance c-statistic | Statistic that quantifies the chance that, for any two individuals of whom one developed the outcome and the other did not, the former has a higher predicted risk according to the model than the latter. A c-statistic of 1 means perfect discriminative ability, whereas a model with a c-statistic of 0.5 is no better than flipping a coin. |
| Discrimination | Ability of the model to distinguish between people who did and did not develop the outcome of interest, often quantified by the concordance c-statistic. |
| External validation | Evaluating the predictive performance of a prediction model in a study population other than the population from which the model was developed. |
| OE ratio | The ratio of the total number of participants observed to have the outcome in a specific time frame (e.g. within 1 y) to the total number of participants predicted by the model to have the outcome. |
| Prediction horizon | Time frame over which the model predicts the outcome (e.g. predicting 10-y risk of developing cardiovascular disease). |
| Predictive performance | Accuracy of the predictions made by a prediction model, often expressed in terms of calibration and discrimination. |
OE, observed/expected.
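The calibration slope defined above can be estimated by refitting a logistic regression of the observed outcomes on the model's linear predictor (the log-odds of the predicted risks). A minimal NumPy sketch with simulated, illustrative data (not from the paper), using a basic Newton-Raphson fit:

```python
import numpy as np

def calibration_slope(y, lp, n_iter=25):
    """Estimate the calibration slope: the coefficient b in the
    recalibration model logit(P(y = 1)) = a + b * lp, where lp is the
    model's linear predictor. b = 1 is ideal; b < 1 means the
    predictions are too extreme."""
    X = np.column_stack([np.ones_like(lp), lp])
    beta = np.zeros(2)
    for _ in range(n_iter):                          # Newton-Raphson iterations
        p = 1.0 / (1.0 + np.exp(-X @ beta))          # current fitted probabilities
        grad = X.T @ (y - p)                         # score vector
        hess = X.T @ (X * (p * (1.0 - p))[:, None])  # observed information
        beta += np.linalg.solve(hess, grad)
    return beta[1]

# illustrative use: simulate outcomes whose true slope on lp is 0.5,
# i.e. the "model" producing lp makes predictions that are too extreme
rng = np.random.default_rng(0)
lp = rng.normal(0.0, 2.0, size=5000)
true_p = 1.0 / (1.0 + np.exp(-0.5 * lp))
y = rng.binomial(1, true_p).astype(float)
print(round(calibration_slope(y, lp), 2))  # close to 0.5
```

The estimated slope recovers the generating value of 0.5, flagging the overdispersed predictions.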
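The concordance c-statistic has a direct pairwise interpretation, which a short pure-Python sketch (with made-up risks and outcomes) makes concrete:

```python
from itertools import product

def c_statistic(y_true, y_pred):
    """Concordance (c-) statistic: the fraction of all event/non-event
    pairs in which the event case received the higher predicted risk.
    Ties in predicted risk count as half a concordant pair."""
    events = [p for p, y in zip(y_pred, y_true) if y == 1]
    nonevents = [p for p, y in zip(y_pred, y_true) if y == 0]
    concordant = 0.0
    for e, n in product(events, nonevents):
        if e > n:
            concordant += 1.0
        elif e == n:
            concordant += 0.5
    return concordant / (len(events) * len(nonevents))

# toy data: predicted risks and observed outcomes
y_true = [1, 0, 1, 0, 0]
y_pred = [0.9, 0.2, 0.6, 0.4, 0.6]
print(round(c_statistic(y_true, y_pred), 3))  # → 0.917
```

A value of 1 would mean every event case outranked every non-event case; 0.5 would mean the ranking is no better than chance.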
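The OE ratio follows directly from its definition: the expected count is the sum of the model's predicted risks over all participants. A minimal sketch with illustrative numbers:

```python
def oe_ratio(y_true, y_pred):
    """Observed/expected (OE) ratio: total observed events divided by
    the total number of events predicted by the model (the sum of the
    predicted risks). OE = 1 is ideal; OE < 1 means the model
    overpredicts risk, OE > 1 means it underpredicts."""
    return sum(y_true) / sum(y_pred)

# illustrative: 2 observed events vs 2.5 expected, so the model overpredicts
print(oe_ratio([1, 0, 1, 0, 0], [0.9, 0.2, 0.6, 0.4, 0.4]))  # → 0.8
```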
Fig. 1 Review steps. References: the Critical Appraisal and Data Extraction for Systematic Reviews of Prediction Modelling Studies [23], Population, Index model, Comparator model, Outcome(s), Timing, Setting [30], search filters [31,32], the Prediction model Risk Of Bias ASsessment tool [24,33], meta-analysis [30,34], the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines [35], and the Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis statement [36,37].
PICOTS system
| PICOTS | Explanation |
|---|---|
| Population | Target population in which the prognostic model will be used. |
| Index model | Prognostic model(s) under review. |
| Comparator model | If applicable (depending on the review question), competing prognostic models for the index model. |
| Outcome(s) | Outcome(s) of interest that is (are) predicted by the prognostic model(s). |
| Timing | Moment in time the prognostic model is to be used. |
| Setting | Intended setting or context of the prognostic model(s) under review. |
PICOTS, Population, Index model, Comparator model, Outcome(s), Timing, Setting.
Fig. 2 Risk of bias as assessed using the Prediction model Risk Of Bias ASsessment tool. The figure represents the percentage of studies scoring a low (green), high (red), or unclear (orange) risk of bias for each of the four Prediction model Risk Of Bias ASsessment domains and the overall risk of bias.
Fig. 3 Forest plots of the observed/expected ratio and c-statistic of the Pooled Cohort Equations for predicting the 10-year risk of cardiovascular disease in women in the general population. ∗Performance of the model in the development study after internal validation. The first row contains the performance of the White model, the second the African American model (not included in the pooled estimate of performance).
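Pooled estimates like those shown in forest plots are commonly obtained from a random-effects meta-analysis of the per-study performance measures on a transformed scale (e.g. logit c-statistics or log OE ratios). A minimal DerSimonian-Laird sketch, using hypothetical study values rather than the paper's data:

```python
import math

def pool_random_effects(estimates, ses):
    """DerSimonian-Laird random-effects pooling of per-study estimates
    (on a suitable scale, e.g. logit c-statistic) given their standard
    errors. Returns the pooled estimate on the same scale."""
    w = [1.0 / se**2 for se in ses]                  # inverse-variance weights
    fixed = sum(wi * e for wi, e in zip(w, estimates)) / sum(w)
    q = sum(wi * (e - fixed)**2 for wi, e in zip(w, estimates))  # Cochran's Q
    df = len(estimates) - 1
    denom = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / denom)                # between-study variance
    w_star = [1.0 / (se**2 + tau2) for se in ses]    # random-effects weights
    return sum(wi * e for wi, e in zip(w_star, estimates)) / sum(w_star)

# hypothetical logit c-statistics from three validation studies
logit = lambda p: math.log(p / (1.0 - p))
inv_logit = lambda x: 1.0 / (1.0 + math.exp(-x))
cs = [logit(c) for c in (0.72, 0.78, 0.75)]
pooled_c = inv_logit(pool_random_effects(cs, [0.10, 0.12, 0.08]))
print(round(pooled_c, 2))  # a pooled c-statistic near 0.75
```

Pooling on the logit scale and back-transforming keeps the pooled c-statistic within its valid (0, 1) range.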