Noam Barda1,2,3, Gal Yona4, Guy N Rothblum4, Philip Greenland5, Morton Leibowitz1, Ran Balicer1,2, Eitan Bachmat6, Noa Dagan1,3,6. 1. Clalit Research Institute, Clalit Health Services, Tel-Aviv, Israel. 2. School of Public Health, Ben-Gurion University, Beer-Sheba, Israel. 3. Department of Biomedical Informatics, Harvard Medical School, Boston, Massachusetts, USA. 4. Department of Computer Science and Applied Mathematics, Weizmann Institute of Science, Rehovot, Israel. 5. Feinberg School of Medicine, Northwestern University, Chicago, Illinois, USA. 6. Department of Computer Science, Ben-Gurion University, Beer-Sheba, Israel.
Abstract
OBJECTIVE: To illustrate the problem of subpopulation miscalibration, to adapt an algorithm for recalibration of predictions, and to validate its performance.

MATERIALS AND METHODS: In this retrospective cohort study, we evaluated the calibration of predictions based on the Pooled Cohort Equations (PCE) and the Fracture Risk Assessment Tool (FRAX) in the overall population and in subpopulations defined by the intersection of age, sex, ethnicity, socioeconomic status, and immigration history. We then applied the recalibration algorithm and assessed the change in calibration metrics, including calibration-in-the-large.

RESULTS: 1 021 041 patients were included in the PCE population, and 1 116 324 patients were included in the FRAX population. Baseline overall calibration of the 2 tested models was good, but calibration in a substantial portion of the subpopulations was poor. After applying the algorithm, subpopulation calibration statistics were greatly improved: the variance of the calibration-in-the-large values across all subpopulations was reduced by 98.8% and 94.3% in the PCE and FRAX models, respectively.

DISCUSSION: Prediction models in medicine are increasingly common. Calibration, the agreement between predicted and observed risks, is commonly poor for subpopulations that were underrepresented in the development set of a model, resulting in bias and reduced performance for those subpopulations. In this work, we empirically evaluated an adapted version of the fairness algorithm designed by Hebert-Johnson et al. (2017) and demonstrated its use in improving subpopulation miscalibration.

CONCLUSION: A postprocessing, model-independent fairness algorithm for recalibration of predictive models greatly decreases the bias of subpopulation miscalibration and thus increases fairness and equality.
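As a highly simplified, illustrative sketch of the quantities the abstract discusses (not the authors' implementation): calibration-in-the-large is the gap between the observed event rate and the mean predicted risk in a group, and a multicalibration-style postprocessing step can shrink the variance of this gap across subpopulations by correcting each subgroup's residual. The group labels, event rates, and single-pass correction below are assumptions made for illustration only; the actual algorithm iterates over many intersecting subgroups and uses held-out data.

```python
import numpy as np

def citl(y, p):
    # Calibration-in-the-large: observed event rate minus mean predicted risk.
    # Values near 0 indicate good calibration at the group level.
    return float(np.mean(y) - np.mean(p))

rng = np.random.default_rng(0)
n = 50_000

# Two synthetic subpopulations scored by the same model (predicted risk 0.10),
# but with different true event rates: group B's risk is underestimated.
pred = {"A": np.full(n, 0.10), "B": np.full(n, 0.10)}
y = {"A": rng.random(n) < 0.10, "B": rng.random(n) < 0.20}

before = np.var([citl(y[g], pred[g]) for g in pred])

# One multicalibration-style pass: shift each subgroup's predictions by its own
# residual gap (estimated here on the same data for brevity; in practice a
# held-out calibration set would be used), clipping to valid probabilities.
recal = {g: np.clip(pred[g] + citl(y[g], pred[g]), 0.0, 1.0) for g in pred}
after = np.var([citl(y[g], recal[g]) for g in recal])

print(before, after)  # variance across subgroups collapses after correction
```

This mirrors the abstract's headline metric: the variance of calibration-in-the-large across subpopulations drops sharply once each subgroup's systematic over- or under-prediction is corrected, without retraining the underlying model.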