| Literature DB >> 25411322 |
Peter C Austin, Ewout W Steyerberg.
Abstract
We conducted an extensive set of empirical analyses to examine the effect of the number of events per variable (EPV) on the relative performance of three different methods for assessing the predictive accuracy of a logistic regression model: apparent performance in the analysis sample, split-sample validation, and optimism correction using bootstrap methods. Using a single dataset of patients hospitalized with heart failure, we compared the estimates of discriminatory performance from these methods to those for a very large independent validation sample arising from the same population. As anticipated, the apparent performance was optimistically biased, with the degree of optimism diminishing as the number of events per variable increased. Differences between the bootstrap-corrected approach and the use of an independent validation sample were minimal once the number of events per variable was at least 20. Split-sample assessment resulted in overly pessimistic and highly uncertain estimates of model performance. Apparent performance estimates had lower mean squared error compared to split-sample estimates, but the lowest mean squared error was obtained by bootstrap-corrected optimism estimates. For bias, variance, and mean squared error of the performance estimates, the penalty incurred by using split-sample validation was equivalent to discarding the proportion of the sample that was withheld for model validation. In conclusion, split-sample validation is inefficient and apparent performance is too optimistic for internal validation of regression-based prediction models. Modern validation methods, such as bootstrap-based optimism correction, are preferable. While these findings may be unsurprising to many statisticians, the results of the current study reinforce what should be considered good statistical practice in the development and validation of clinical prediction models.
Keywords: bootstrap; c-statistic; clinical prediction models; data splitting; discrimination; logistic regression; model validation; receiver operating characteristic curve
Year: 2014 PMID: 25411322 PMCID: PMC5394463 DOI: 10.1177/0962280214558972
Source DB: PubMed Journal: Stat Methods Med Res ISSN: 0962-2802 Impact factor: 3.021
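The bootstrap-based optimism correction discussed in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the model is refit on each bootstrap resample, the optimism is estimated as the mean gap between the bootstrap-sample c-statistic and the original-sample c-statistic of the resample-fitted model, and that mean optimism is subtracted from the apparent c-statistic. The gradient-ascent logistic fit and all function names are assumptions chosen to keep the example self-contained.

```python
import numpy as np

def c_statistic(y, score):
    # c-statistic (area under the ROC curve): probability that a randomly
    # chosen event has a higher score than a randomly chosen non-event,
    # with ties counted as 0.5.
    pos, neg = score[y == 1], score[y == 0]
    diff = pos[:, None] - neg[None, :]
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

def fit_logistic(X, y, n_iter=300, lr=0.5):
    # Plain gradient ascent on the logistic log-likelihood; a stand-in
    # for a proper maximum-likelihood fit, adequate for illustration.
    Xb = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(Xb.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (y - p) / len(y)
    return w

def predict(w, X):
    Xb = np.column_stack([np.ones(len(X)), X])
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def bootstrap_corrected_c(X, y, n_boot=100, rng=None):
    # Harrell-style optimism correction: apparent c minus mean optimism.
    rng = np.random.default_rng(rng)
    apparent = c_statistic(y, predict(fit_logistic(X, y), X))
    optimism = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y), len(y))      # resample with replacement
        wb = fit_logistic(X[idx], y[idx])          # refit on the resample
        c_boot = c_statistic(y[idx], predict(wb, X[idx]))  # performance in resample
        c_orig = c_statistic(y, predict(wb, X))            # performance in original data
        optimism.append(c_boot - c_orig)
    return apparent - np.mean(optimism)
```

With simulated data, the corrected estimate is typically slightly below the apparent c-statistic, mirroring the optimism pattern the abstract describes.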
Distribution of risk factors in the clinical prediction model and odds ratios for 1-year mortality.
| Variable | Distribution (median (1st, 25th, 75th, 99th percentiles) or %) | Odds ratio for 1-year mortality (95% confidence interval) |
|---|---|---|
| Age | 78 (44, 70, 84, 97) | 1.041 (1.037–1.045) |
| Respiratory rate (breaths per minute) | 24 (20, 20, 28, 45) | 1.025 (1.019–1.031) |
| Systolic blood pressure (mmHg) | 145 (90, 124, 169, 200) | 0.987 (0.985–0.988) |
| Urea nitrogen | 8.4 (2.9, 6.1, 12.2, 20.0) | 1.104 (1.096–1.113) |
| Sodium concentration <136 mEq/l | 21.3% | 1.365 (1.253–1.487) |
| Haemoglobin <10.0 g/dl | 12.7% | 1.196 (1.076–1.329) |
| Cerebrovascular disease | 17.4% | 1.323 (1.207–1.450) |
| Dementia | 9.0% | 2.132 (1.892–2.402) |
| Chronic obstructive pulmonary disease | 25.0% | 1.297 (1.194–1.408) |
| Cancer | 11.6% | 1.663 (1.495–1.849) |
Continuous variables are reported as medians (1st, 25th, 75th, and 99th percentiles). Dichotomous variables are reported as the percentage of subjects with the condition.
Figure 1. Mean estimated c-statistic for different validation methods.
Figure 2. Standard deviation of estimated c-statistic for different validation methods.
Figure 3. Mean squared error (MSE) of different estimation methods.