| Literature DB >> 36258241 |
Davy van de Sande, Jasper van Bommel, Eline Fung Fen Chung, Diederik Gommers, Michel E van Genderen.
Abstract
Entities:
Keywords: Artificial intelligence; Bias; Equity; Intensive care
Mesh:
Year: 2022 PMID: 36258241 PMCID: PMC9578232 DOI: 10.1186/s13054-022-04197-5
Source DB: PubMed Journal: Crit Care ISSN: 1364-8535 Impact factor: 19.334
Fig. 1 Schematic overview of the intensive care medicine artificial intelligence fairness audit. Conventional clinical patient data (e.g., vital signs, laboratory values, and demographics) are typically used to train an AI algorithm, and its performance is then evaluated on an internal or external test dataset to see whether it works in the first place. Next, the fairness audit should take place: evaluate model performance across multiple subpopulations (for example, based on ethnicity, age, gender, or other characteristics). If concerns regarding algorithmic fairness arise, re-training and/or re-calibration should be considered (go/no-go). *Protected personal characteristics such as ethnicity, socioeconomic information, and others need to be collected in patient health records. AI = artificial intelligence
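The audit step in Fig. 1 (evaluating model performance per subpopulation and deciding go/no-go) can be sketched in a few lines. The sketch below is an illustration under assumptions, not the authors' implementation: the protected-attribute column names (`ethnicity`, `gender`, `age_group`), the choice of AUROC as the metric, and the 0.05 performance-gap tolerance are all hypothetical.

```python
# Minimal sketch of the subgroup fairness audit described in Fig. 1:
# score a trained model separately on each protected subpopulation of
# the test set and flag groups whose AUROC falls notably below the
# overall AUROC. Column names, metric, and the max_gap threshold are
# illustrative assumptions, not from the paper.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score


def fairness_audit(test_df: pd.DataFrame,
                   y_true: np.ndarray,
                   y_score: np.ndarray,
                   protected_cols=("ethnicity", "gender", "age_group"),
                   max_gap: float = 0.05) -> pd.DataFrame:
    """Compare per-subgroup AUROC with overall AUROC (go/no-go)."""
    overall = roc_auc_score(y_true, y_score)
    rows = []
    for col in protected_cols:
        for group in test_df[col].dropna().unique():
            mask = (test_df[col] == group).to_numpy()
            # AUROC is undefined when a subgroup contains one outcome class.
            if len(np.unique(y_true[mask])) < 2:
                continue
            auc = roc_auc_score(y_true[mask], y_score[mask])
            rows.append({"attribute": col, "group": group,
                         "n": int(mask.sum()), "auroc": auc,
                         "go": (overall - auc) <= max_gap})
    return pd.DataFrame(rows).sort_values("auroc")


# Usage (rows of test_df must align 1:1 with y_true / y_score):
# audit = fairness_audit(test_df, y_true, y_score)
# print(audit[~audit["go"]])  # subgroups that trigger re-training/re-calibration
```

Any "no-go" row would prompt the re-training and/or re-calibration step shown in the figure before clinical deployment.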