OBJECTIVE: Ideally, clinical prediction models are generalizable to other patient groups. Unfortunately, they regularly perform worse when validated in new patients and are then often redeveloped. Whereas the original prediction model has usually been developed on a large data set, redevelopment then often occurs on the smaller validation set. Recently, methods to update existing prediction models with the data of new patients have been proposed. We used an existing model that preoperatively predicts the risk of severe postoperative pain (SPP) to compare five updating methods. STUDY DESIGN AND SETTING: The model was tested and updated on a set of 752 new patients (274 [36%] with SPP). We studied the discrimination (ability to distinguish between patients with and without SPP) and calibration (agreement between predicted risks and observed frequencies of SPP) of the five updated models in 283 other patients (100 [35%] with SPP). RESULTS: Simple recalibration methods improved calibration to a similar extent as revision methods that made more extensive adjustments to the original model. Discrimination could not be improved by any of the methods. CONCLUSION: When performance is poor in new patients, updating methods can be applied to adjust the existing model rather than developing a new one.
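The simplest of the updating methods contrasted above is logistic recalibration: rather than refitting every predictor coefficient on the validation set, only an intercept and a calibration slope are re-estimated on the original model's linear predictor. A minimal sketch, assuming a hypothetical two-predictor logistic model and simulated data (the coefficients, cohort, and shift in baseline risk are all illustrative, not the paper's actual SPP model):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical coefficients standing in for the "original" model
# (illustrative only; not the paper's actual SPP model).
orig_intercept, orig_beta = -1.0, np.array([0.8, 0.5])

# Simulated "new patients" whose baseline risk is higher than in the
# development data, so the original intercept is miscalibrated.
X = rng.normal(size=(752, 2))
y = (rng.random(752) < 1.0 / (1.0 + np.exp(-(0.3 + X @ orig_beta)))).astype(float)

# Linear predictor of the original model applied to the new cohort.
lp = orig_intercept + X @ orig_beta

# Logistic recalibration: refit only an intercept and a calibration
# slope on lp, here via plain Newton-Raphson on the log-likelihood.
Z = np.column_stack([np.ones_like(lp), lp])
theta = np.zeros(2)
for _ in range(25):
    p = 1.0 / (1.0 + np.exp(-(Z @ theta)))          # current fitted risks
    grad = Z.T @ (y - p)                            # score vector
    hess = Z.T @ (Z * (p * (1.0 - p))[:, None])     # observed information
    theta += np.linalg.solve(hess, grad)

new_intercept, slope = theta
risk = 1.0 / (1.0 + np.exp(-(new_intercept + slope * lp)))
```

Because only the baseline risk was shifted in this simulation, the fitted slope stays close to 1 while the new intercept absorbs the shift; the mean updated risk then matches the observed event rate (calibration-in-the-large), which is exactly the behavior the abstract attributes to the simple recalibration methods.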