Laura Sikstrom, Marta M Maslej, Katrina Hui, Zoe Findlay, Daniel Z Buchman, Sean L Hill.
Abstract
OBJECTIVES: Fairness is a core concept meant to grapple with different forms of discrimination and bias that emerge with advances in Artificial Intelligence (eg, machine learning, ML). Yet, claims to fairness in ML discourses are often vague and contradictory. The response to these issues within the scientific community has been technocratic. Studies either measure (mathematically) competing definitions of fairness and/or recommend a range of governance tools (eg, fairness checklists or guiding principles). To advance efforts to operationalise fairness in medicine, we synthesised a broad range of literature.
Keywords: artificial intelligence; health equity; health services research; machine learning; patient-centered care
Year: 2022 PMID: 35012941 PMCID: PMC8753410 DOI: 10.1136/bmjhci-2021-100459
Source DB: PubMed Journal: BMJ Health Care Inform ISSN: 2632-1009
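The abstract notes that studies measure (mathematically) competing definitions of fairness. As a minimal sketch of why such definitions compete, the Python below computes two common criteria, demographic parity and equalised odds, for a hypothetical binary classifier; the data, group labels and function names are illustrative assumptions, not material from the paper.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Absolute difference in positive-prediction rates between two groups."""
    g0, g1 = np.unique(group)
    return abs(y_pred[group == g0].mean() - y_pred[group == g1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest between-group gap in true-positive and false-positive rates."""
    g0, g1 = np.unique(group)
    gaps = []
    for label in (0, 1):  # label 1 compares TPRs, label 0 compares FPRs
        mask = y_true == label
        gaps.append(abs(y_pred[mask & (group == g0)].mean()
                        - y_pred[mask & (group == g1)].mean()))
    return max(gaps)

# Hypothetical predictions with group-dependent outcome base rates.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, 10_000)
y_true = (rng.random(10_000) < np.where(group == 1, 0.6, 0.3)).astype(int)
y_pred = y_true.copy()  # a perfectly accurate model...

print(demographic_parity_gap(y_pred, group))      # ...still shows a large parity gap
print(equalized_odds_gap(y_true, y_pred, group))  # while its equalised odds gap is 0
```

Here a perfectly accurate model satisfies equalised odds exactly yet fails demographic parity whenever outcome base rates differ across groups, which is one formal version of the contradiction the abstract describes.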
Figure 1. Three pillars of fairness.
Key dimension of fairness in the literature review by discipline (n=213)
| Research field | Fairness dimension | Specific attribute | Volume of articles by specific attributes |
| Computational sciences | Transparency | Interpretability/explainability | + + + |
| Medicine | Transparency | Interpretability/explainability | + |
| Social sciences | Transparency | Interpretability/explainability | + + + |
| Interdisciplinary research teams | Transparency | Interpretability/explainability | + + |
++++ The majority of the literature reviewed in this field.
+++ Several peer-reviewed articles (five or more).
++ A small number of peer-reviewed articles (fewer than five).
+ Little or no known literature (two or fewer).
Three pillars for fairness
| Fairness pillar | Source of unfairness | Challenge | Attribute | Key questions |
| Transparency: A range of methods designed to see, understand and hold complex algorithmic systems accountable in a timely fashion. | ‘Like Gods, these mathematical models were opaque, their workings invisible to all but the highest priests in their domain: mathematicians and computer scientists’ (O’Neil: 3) | How can we foster democratic and sustained debate on the role of AI/ML in healthcare with a range of stakeholders, including patients experiencing complex and serious mental illness and/or addiction? | Interpretable | Are biases from predictive care models carried over across samples and settings? |
| | | | Explainable | Which model features are contributing to bias and what kinds of assumptions do they amplify? How does an understanding of these features by stakeholders impact clinical care? |
| | | | Accountable | How does predictive care impact stakeholders (patients, families, nurses, social workers)? What governance structures are in place to ensure fair development and deployment? Who is responsible for identifying and reporting potential harms? |
| Impartiality: Health care should be free from unfair bias and systemic discrimination. | ‘AI can help reduce bias, but it can also bake in and scale bias’ (Silberg and Manyika: 2) | How are complex […] | Provenance | Do predictive care model features reflect socio-economic and political inequities? Might these features contribute to biased performance? |
| | | | Implementation | What harms might result from the implementation of predictive care models? Do they disproportionately affect certain groups? |
| Inclusion: The process of improving the ability, opportunity and dignity of people, disadvantaged on the basis of their identity, to access health services, receive compassionate care and achieve equitable treatment outcomes. | ‘Randomised trials estimate average treatment effects for a trial population, but participants in clinical trials often aren’t representative of the patient population that ultimately receives the treatment’ (Chen: 167) | How can we ensure that the benefits of advances in clinical AI accrue to the most structurally disadvantaged? | Completeness | Is information required to detect bias missing? Is there sufficient data to evaluate predictive care models for intersectional bias? Are marginalised groups involved in the collection and use of their data? |
| | | | Patient and family engagement | Have stakeholders been involved in the development and implementation of predictive care? Do patients perceive models as being fair or positively impacting their care? |
ML, machine learning.
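The ‘Explainable’ row above asks which model features contribute to bias. The paper prescribes no particular method, but one common probe is permutation importance computed separately within each group; everything below (the model, feature names and data) is an illustrative assumption.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Hypothetical features; a real audit would use the deployed model's inputs.
feature_names = ["prior_admissions", "risk_score", "age", "neighbourhood_index"]
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, len(feature_names)))
group = rng.integers(0, 2, 2000)  # illustrative binary group attribute
y = (X[:, 1] + 0.5 * group + rng.normal(size=2000) > 0).astype(int)

X_tr, X_te, y_tr, y_te, _, g_te = train_test_split(X, y, group, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Rank which features drive predictions within each group separately: large
# between-group differences flag features worth scrutinising for bias.
for g in (0, 1):
    mask = g_te == g
    imp = permutation_importance(model, X_te[mask], y_te[mask],
                                 n_repeats=10, random_state=0)
    ranked = sorted(zip(feature_names, imp.importances_mean),
                    key=lambda t: -t[1])
    print(f"group {g}:", ranked)
```

Large between-group differences in which features drive predictions are a signal to examine those features’ provenance, in line with the table’s key questions.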
The three fairness pillars, their attributes and relation to ML-based prediction of inpatient violence in psychiatric settings
| Pillar | Attribute | Relation to predictive care |
| Transparency | Interpretability | ML models achieve high accuracy in predicting violent behaviour in psychiatric settings. |
| | Explainability | ML models are often trained on structured risk assessment scores. |
| | Accountability | ML models have been trained on actigraphy features to predict aggression in patients with dementia. |
| Impartiality | Provenance | Prior conviction and a diagnosis of schizophrenia are predictors of violence. |
| | Implementation | ML modelling of violence risk is in part motivated by a desire to allocate staff resources to high-risk patients, but staff-patient interactions are known antecedents to violent behaviours. |
| Inclusion | Completeness | A focus on legally protected categories may disregard biases related to unobserved characteristics (eg, sexual orientation or disability). Individuals with invisible or undiagnosed disabilities (eg, autism spectrum disorder) may display behaviours interpreted as precursors to violence or aggression. |
| | Patient and family engagement | Collaboration in decision making during admission and maximising choice are important values for patients in settings where autonomy is limited. |
ML, machine learning.
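The ‘Completeness’ row asks whether there is sufficient data to evaluate predictive care models for intersectional bias. A minimal sketch of such an audit, with illustrative column names and synthetic data, groups an evaluation set by intersections of attributes and reports both the cell size and a per-cell error rate.

```python
import numpy as np
import pandas as pd

# Hypothetical evaluation set; the attribute and column names are illustrative.
rng = np.random.default_rng(2)
n = 5000
df = pd.DataFrame({
    "y_true": rng.integers(0, 2, n),
    "y_pred": rng.integers(0, 2, n),
    "gender": rng.choice(["woman", "man"], n),
    "ethnicity": rng.choice(["A", "B", "C"], n),
})

# For every intersection, report cell size and false-positive rate: tiny or
# empty cells mean the model cannot be audited for bias in that subgroup.
audit = df.groupby(["gender", "ethnicity"]).apply(
    lambda g: pd.Series({
        "n": len(g),
        "fpr": g.loc[g["y_true"] == 0, "y_pred"].mean(),
    })
)
print(audit.sort_values("n"))
```

Small or empty intersections are themselves a completeness finding, and no analogous pass is possible for unobserved characteristics, which is the table’s point about invisible or undiagnosed disabilities.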