Huw Roberts, Josh Cowls, Emmie Hine, Francesca Mazzi, Andreas Tsamados, Mariarosaria Taddeo, Luciano Floridi.
Abstract
Over the past few years, there has been a proliferation of artificial intelligence (AI) strategies, released by governments around the world, that seek to maximise the benefits of AI and minimise potential harms. This article provides a comparative analysis of the European Union (EU) and the United States' (US) AI strategies and considers (i) the visions of a 'Good AI Society' that are forwarded in key policy documents and their opportunity costs, (ii) the extent to which the implementation of each vision is living up to stated aims and (iii) the consequences that these differing visions of a 'Good AI Society' have for transatlantic cooperation. The article concludes by comparing the ethical desirability of each vision and identifies areas where the EU, and especially the US, need to improve in order to achieve ethical outcomes and deepen cooperation.
Keywords: Artificial intelligence; European Union; Policy; Social good; United States
Year: 2021 PMID: 34767085 PMCID: PMC8587491 DOI: 10.1007/s11948-021-00340-7
Source DB: PubMed Journal: Sci Eng Ethics ISSN: 1353-3452 Impact factor: 3.525
Requirements for trustworthy AI (European Commission, 2019)

| Requirement | Description |
|---|---|
| Human agency and oversight | AI systems should allow humans to make informed decisions and be subject to proper oversight |
| Technical robustness and safety | AI systems need to be resilient, secure, safe, accurate, reliable and reproducible |
| Privacy and data governance | Adequate data governance mechanisms that fully respect privacy must be ensured |
| Transparency | The data, system and AI business models should be transparent and explainable to stakeholders |
| Diversity, non-discrimination and fairness | Unfair bias must be avoided to mitigate the marginalisation of vulnerable groups and the exacerbation of discrimination |
| Societal and environmental well-being | AI systems should be sustainable and benefit all human beings, including future generations |
| Accountability | Responsibility and accountability for AI systems and their outcomes should be ensured |
US Guidance for Regulation of AI Principles (Executive Office of the President, 2020)

| Principle | Description |
|---|---|
| Public trust in AI | The government must promote reliable, robust and trustworthy AI applications |
| Public participation | The public should have a chance to participate in all stages of the rule-making process |
| Scientific integrity and information quality | Policy decisions should be based on science |
| Risk assessment and management | Agencies should decide which risks are unacceptable |
| Benefits and costs | Agencies should select approaches that maximise net benefits |
| Flexibility | Agencies should pursue a technology-neutral, flexible approach |
| Fairness and non-discrimination | Agencies should make sure AI systems do not discriminate illegally |
| Disclosure and transparency | Context-specific transparency measures are necessary for public trust |
| Safety and security | Agencies should promote AI systems that are safe, secure and operate as intended |
| Interagency coordination | Interagency cooperation and coordination are necessary for consistent policies |