Sara Solarova, Juraj Podroužek, Matúš Mesarčík, Adrian Gavornik, Maria Bielikova.
Abstract
This paper contributes to the discussion on the effective regulation of facial recognition technologies (FRT) in public spaces. In response to the growing tendency in the United States and Europe to treat FRT universally as a merely intrusive technology, we propose to distinguish scenarios in which the ethical and social risks of using FRT are unacceptable from other scenarios in which FRT can be adjusted to improve our everyday lives. We suggest that a general ban on FRT in public spaces is not an inevitable solution. Instead, we advocate a risk-based approach, with an emphasis on different use-cases, that weighs moral risks and identifies appropriate countermeasures. We introduce four use-cases focused on the presence of FRT at entrances to public spaces: (1) checking identities at airports, (2) authorising entry to office buildings, (3) checking visitors at stadiums, and (4) monitoring passers-by on open streets, to illustrate the diverse ethical and social concerns and possible responses to them. Based on the different levels of ethical and societal risk and the applicability of the respective countermeasures, we call for a distinction between semi-open public spaces and open public spaces. We suggest that this distinction could not only support more effective regulation and assessment of FRT in public spaces, but also that knowledge of the different risks and countermeasures will lead to better transparency and greater public awareness of FRT in diverse scenarios.
Keywords: AI ethics; AI regulation; Airports; Countermeasures; Facial recognition; Office building; Open streets; Public spaces; Semi-open public spaces; Stadiums; Transparency; Trustworthy AI
Year: 2022 PMID: 35846557 PMCID: PMC9274635 DOI: 10.1007/s43681-022-00194-0
Source DB: PubMed Journal: AI Ethics ISSN: 2730-5953
Examples of ethical and societal concerns in FRT
| Overarching principles and values for trustworthy AI | Examples of ethical and societal concerns in FRT |
|---|---|
| Transparency | Chilling effects, interpretability and explainability |
| Privacy and data governance | Forced recognition, data control |
| Technical robustness and safety | False positives, false negatives |
| Diversity, non-discrimination, and fairness | Underrepresentation, social exclusion |
| Human agency and oversight | Mute individuals, over-reliance |
Examples of ethical risks considering FRT and different applicability of its countermeasures
| Examples of risks | Examples of countermeasures | Applicability of countermeasures |
|---|---|---|
| People will not be aware of the purpose and aims of FRT | Inform people about the use of FRT before they enter the area and explain what a person can expect before opting in | UC1 Airports—Good UC2 Premises—Good UC3 Stadiums—Good |
| People will be forced to be analysed by FRT | Provide separate entrances for conventional access | UC1 Airports—Good UC2 Premises—Good UC3 Stadiums—Good |
| Individuals' data will be stored and used for other purposes | Decrease the length of time for which biometric personal data can be stored and require detailed logs of data processing | UC1 Airports—Good UC2 Premises—Good UC3 Stadiums—Good UC4 Streets—Good |
| FRT will undermine human autonomy and the right to be heard in public places | Ensure that biometric identification serves as an alternative to traditional forms of identification, not as their complete substitute | UC1 Airports—Good UC2 Premises—Good UC3 Stadiums—Good |
Bold emphasises the inability to mitigate the ethical risks associated with open public spaces