| Literature DB >> 35538215 |
Thomas C Tsai, Sercan Arik, Benjamin H Jacobson, Jinsung Yoon, Nate Yoder, Dario Sava, Margaret Mitchell, Garth Graham, Tomas Pfister.
Abstract
Racial and ethnic minorities have borne a particularly acute burden of the COVID-19 pandemic in the United States. There is a growing awareness among both researchers and public health leaders of the critical need to ensure fairness in forecast results. Without careful and deliberate bias mitigation, inequities embedded in data can be transferred to model predictions, perpetuating disparities and exacerbating the disproportionate harms of the COVID-19 pandemic. These biases in data and forecasts can be viewed through both statistical and sociological lenses, and the challenges of both building hierarchical models with limited data availability and drawing on data that reflect structural inequities must be confronted. We present an outline of key modeling domains in which unfairness may be introduced and draw on our experience building and testing the Google-Harvard COVID-19 Public Forecasting model to illustrate these challenges and offer strategies to address them. While targeted toward pandemic forecasting, these domains of potentially biased modeling and concurrent approaches to pursuing fairness present important considerations for equitable machine-learning innovation.
Year: 2022 PMID: 35538215 PMCID: PMC9090910 DOI: 10.1038/s41746-022-00602-z
Source DB: PubMed Journal: NPJ Digit Med ISSN: 2398-6352