
Addressing bias in prediction models by improving subpopulation calibration.

Noam Barda1,2,3, Gal Yona4, Guy N Rothblum4, Philip Greenland5, Morton Leibowitz1, Ran Balicer1,2, Eitan Bachmat6, Noa Dagan1,3,6.   

Abstract

OBJECTIVE: To illustrate the problem of subpopulation miscalibration, to adapt an algorithm for recalibration of the predictions, and to validate its performance.
MATERIALS AND METHODS: In this retrospective cohort study, we evaluated the calibration of predictions based on the Pooled Cohort Equations (PCE) and the fracture risk assessment tool (FRAX) in the overall population and in subpopulations defined by the intersection of age, sex, ethnicity, socioeconomic status, and immigration history. We next applied the recalibration algorithm and assessed the change in calibration metrics, including calibration-in-the-large.
RESULTS: 1 021 041 patients were included in the PCE population, and 1 116 324 patients were included in the FRAX population. Baseline overall model calibration of the 2 tested models was good, but calibration in a substantial portion of the subpopulations was poor. After applying the algorithm, subpopulation calibration statistics were greatly improved, with the variance of the calibration-in-the-large values across all subpopulations reduced by 98.8% and 94.3% in the PCE and FRAX models, respectively.
DISCUSSION: Prediction models in medicine are increasingly common. Calibration, the agreement between predicted and observed risks, is commonly poor for subpopulations that were underrepresented in the development set of the models, resulting in bias and reduced performance for these subpopulations. In this work, we empirically evaluated an adapted version of the fairness algorithm designed by Hebert-Johnson et al. (2017) and demonstrated its use in improving subpopulation miscalibration.
CONCLUSION: A postprocessing and model-independent fairness algorithm for recalibration of predictive models greatly decreases the bias of subpopulation miscalibration and thus increases fairness and equality.
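The abstract describes a postprocessing, model-independent recalibration that drives calibration-in-the-large (mean predicted risk minus observed event rate) toward zero within each subpopulation. The sketch below is a minimal illustration of that idea, not the authors' implementation; the function names, the simple additive shift per subgroup, and the toy data are all assumptions for demonstration only (the Hebert-Johnson et al. algorithm is more general, handling intersecting subgroups over calibration buckets).

```python
import numpy as np

def calibration_in_the_large(p, y):
    """Mean predicted risk minus observed event rate (0 = perfectly calibrated)."""
    return float(np.mean(p) - np.mean(y))

def recalibrate_subgroups(p, y, groups, tol=1e-3, max_iter=100):
    """Multicalibration-style postprocessing sketch: repeatedly shift the
    predictions inside each subgroup until every subgroup's
    calibration-in-the-large is within `tol`."""
    p = p.astype(float).copy()
    for _ in range(max_iter):
        worst = 0.0
        for g in groups:  # each g is a boolean mask selecting a subpopulation
            gap = calibration_in_the_large(p[g], y[g])
            if abs(gap) > tol:
                # subtracting the gap makes the subgroup mean match its event rate
                p[g] = np.clip(p[g] - gap, 0.0, 1.0)
            worst = max(worst, abs(gap))
        if worst <= tol:  # all subgroups calibrated-in-the-large
            break
    return p

# Toy example: a model that overestimates risk in one subgroup.
rng = np.random.default_rng(0)
y = rng.binomial(1, 0.10, size=10_000)   # ~10% event rate everywhere
p = np.full(10_000, 0.10)
sub = np.zeros(10_000, dtype=bool)
sub[:2_000] = True
p[sub] = 0.25                            # miscalibrated subgroup
p_adj = recalibrate_subgroups(p, y, [sub, ~sub])
```

After recalibration, the per-subgroup calibration-in-the-large gaps shrink toward zero, which is exactly the reduction in cross-subgroup variance the abstract reports; because the adjustment only needs predictions, outcomes, and subgroup membership, it can be applied after any fitted model, as the conclusion's "postprocessing and model-independent" framing suggests.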
© The Author(s) 2020. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com.


Keywords:  Predictive models; algorithmic fairness; calibration; model bias; cardiovascular disease; osteoporosis

Year:  2021        PMID: 33236066      PMCID: PMC7936516          DOI: 10.1093/jamia/ocaa283

Source DB:  PubMed          Journal:  J Am Med Inform Assoc        ISSN: 1067-5027            Impact factor:   4.497


References:  27 in total

Review 1.  Predictive data mining in clinical medicine: current issues and guidelines.

Authors:  Riccardo Bellazzi; Blaz Zupan
Journal:  Int J Med Inform       Date:  2006-12-26       Impact factor: 4.046

2.  AI can be sexist and racist - it's time to make it fair.

Authors:  James Zou; Londa Schiebinger
Journal:  Nature       Date:  2018-07       Impact factor: 49.962

3.  An analysis of calibration and discrimination among multiple cardiovascular risk scores in a modern multiethnic cohort.

Authors:  Andrew P DeFilippis; Rebekah Young; Christopher J Carrubba; John W McEvoy; Matthew J Budoff; Roger S Blumenthal; Richard A Kronmal; Robyn L McClelland; Khurram Nasir; Michael J Blaha
Journal:  Ann Intern Med       Date:  2015-02-17       Impact factor: 25.391

4.  Potential Biases in Machine Learning Algorithms Using Electronic Health Record Data.

Authors:  Milena A Gianfrancesco; Suzanne Tamang; Jinoos Yazdany; Gabriela Schmajuk
Journal:  JAMA Intern Med       Date:  2018-11-01       Impact factor: 21.873

5.  Consent to the use of stored DNA for genetics research: a survey of attitudes in the Jewish population.

Authors:  M D Schwartz; K Rothenberg; L Joseph; J Benkendorf; C Lerman
Journal:  Am J Med Genet       Date:  2001-02-01

Review 6.  Calibration of the Pooled Cohort Equations for Atherosclerotic Cardiovascular Disease: An Update.

Authors:  Nancy R Cook; Paul M Ridker
Journal:  Ann Intern Med       Date:  2016-10-11       Impact factor: 25.391

7.  2013 ACC/AHA guideline on the assessment of cardiovascular risk: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines.

Authors:  David C Goff; Donald M Lloyd-Jones; Glen Bennett; Sean Coady; Ralph B D'Agostino; Raymond Gibbons; Philip Greenland; Daniel T Lackland; Daniel Levy; Christopher J O'Donnell; Jennifer G Robinson; J Sanford Schwartz; Susan T Shero; Sidney C Smith; Paul Sorlie; Neil J Stone; Peter W F Wilson
Journal:  J Am Coll Cardiol       Date:  2013-11-12       Impact factor: 24.094

8.  General cardiovascular risk profile for use in primary care: the Framingham Heart Study.

Authors:  Ralph B D'Agostino; Ramachandran S Vasan; Michael J Pencina; Philip A Wolf; Mark Cobain; Joseph M Massaro; William B Kannel
Journal:  Circulation       Date:  2008-01-22       Impact factor: 29.690

9.  A general cardiovascular risk profile: the Framingham Study.

Authors:  W B Kannel; D McGee; T Gordon
Journal:  Am J Cardiol       Date:  1976-07       Impact factor: 2.778

10.  External validation and comparison of three prediction tools for risk of osteoporotic fractures using data from population based electronic health records: retrospective cohort study.

Authors:  Noa Dagan; Chandra Cohen-Stavi; Maya Leventer-Roberts; Ran D Balicer
Journal:  BMJ       Date:  2017-01-19
Citing articles:  9 in total

1.  Validation of Heart Failure-Specific Risk Equations in 1.3 Million Israeli Adults and Usefulness of Combining Ambulatory and Hospitalization Data from a Large Integrated Health Care Organization.

Authors:  Sadiya S Khan; Noam Barda; Philip Greenland; Noa Dagan; Donald M Lloyd-Jones; Ran Balicer; Laura J Rasmussen-Torvik
Journal:  Am J Cardiol       Date:  2022-01-12       Impact factor: 2.778

2.  Observability and its impact on differential bias for clinical prediction models.

Authors:  Mengying Yan; Michael J Pencina; L Ebony Boulware; Benjamin A Goldstein
Journal:  J Am Med Inform Assoc       Date:  2022-04-13       Impact factor: 4.497

3.  Interpretability and fairness evaluation of deep learning models on MIMIC-IV dataset.

Authors:  Chuizheng Meng; Loc Trinh; Nan Xu; James Enouen; Yan Liu
Journal:  Sci Rep       Date:  2022-05-03       Impact factor: 4.996

4.  Discrimination, trust, and withholding information from providers: Implications for missing data and inequity.

Authors:  Paige Nong; Alicia Williamson; Denise Anthony; Jodyn Platt; Sharon Kardia
Journal:  SSM Popul Health       Date:  2022-04-07

5.  A comparison of approaches to improve worst-case predictive model performance over patient subpopulations.

Authors:  Stephen R Pfohl; Haoran Zhang; Yizhe Xu; Agata Foryciarz; Marzyeh Ghassemi; Nigam H Shah
Journal:  Sci Rep       Date:  2022-02-28       Impact factor: 4.379

6.  Evaluating algorithmic fairness in the presence of clinical guidelines: the case of atherosclerotic cardiovascular disease risk estimation.

Authors:  Agata Foryciarz; Stephen R Pfohl; Birju Patel; Nigam Shah
Journal:  BMJ Health Care Inform       Date:  2022-04

7.  A bias evaluation checklist for predictive models and its pilot application for 30-day hospital readmission models.

Authors:  H Echo Wang; Matthew Landers; Roy Adams; Adarsh Subbaswamy; Hadi Kharrazi; Darrell J Gaskin; Suchi Saria
Journal:  J Am Med Inform Assoc       Date:  2022-07-12       Impact factor: 7.942

8.  Hard Voting Ensemble Approach for the Detection of Type 2 Diabetes in Mexican Population with Non-Glucose Related Features.

Authors:  Jorge A Morgan-Benita; Carlos E Galván-Tejada; Miguel Cruz; Jorge I Galván-Tejada; Hamurabi Gamboa-Rosales; Jose G Arceo-Olague; Huizilopoztli Luna-García; José M Celaya-Padilla
Journal:  Healthcare (Basel)       Date:  2022-07-22

9.  Assessment of Adherence to Reporting Guidelines by Commonly Used Clinical Prediction Models From a Single Vendor: A Systematic Review.

Authors:  Jonathan H Lu; Alison Callahan; Birju S Patel; Keith E Morse; Dev Dash; Michael A Pfeffer; Nigam H Shah
Journal:  JAMA Netw Open       Date:  2022-08-01
