| Literature DB >> 28632438 |
Abstract
Recidivism prediction instruments (RPIs) provide decision-makers with an assessment of the likelihood that a criminal defendant will reoffend at a future point in time. Although such instruments are gaining increasing popularity across the country, their use is attracting tremendous controversy. Much of the controversy concerns potential discriminatory bias in the risk assessments that are produced. This article discusses several fairness criteria that have recently been applied to assess the fairness of RPIs. We demonstrate that the criteria cannot all be simultaneously satisfied when recidivism prevalence differs across groups. We then show how disparate impact can arise when an RPI fails to satisfy the criterion of error rate balance.
Keywords: bias; disparate impact; fair machine learning; recidivism prediction; risk assessment
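The abstract's central claim can be sketched numerically. If a predictor achieves equal positive predictive value (PPV) and equal false-negative rate (FNR) across two groups whose recidivism prevalence differs, their false-positive rates (FPR) are forced to differ, via the identity FPR = p/(1−p) · (1−PPV)/PPV · (1−FNR). The sketch below is illustrative only; the prevalence, PPV, and FNR values are hypothetical, not taken from the article.

```python
def fpr(prevalence, ppv, fnr):
    """False-positive rate implied by prevalence p, PPV, and FNR:
    FPR = p/(1-p) * (1-PPV)/PPV * (1-FNR)."""
    return prevalence / (1 - prevalence) * (1 - ppv) / ppv * (1 - fnr)

# Hold PPV and FNR equal across groups (hypothetical values).
ppv, fnr = 0.7, 0.2

fpr_a = fpr(0.5, ppv, fnr)  # group A: prevalence 0.5
fpr_b = fpr(0.3, ppv, fnr)  # group B: prevalence 0.3

# Because prevalence differs, the implied FPRs cannot be equal,
# so error rate balance fails whenever the other criteria hold.
print(f"FPR group A: {fpr_a:.3f}")
print(f"FPR group B: {fpr_b:.3f}")
```

Any choice of unequal prevalences with fixed PPV and FNR produces the same conclusion, which is the impossibility the abstract describes.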
Mesh:
Year: 2017 PMID: 28632438 DOI: 10.1089/big.2016.0047
Source DB: PubMed Journal: Big Data ISSN: 2167-6461 Impact factor: 2.128