Matthew Cole, Callum Cant, Funda Ustek Spilda, Mark Graham.
Abstract
Calls for "ethical Artificial Intelligence" are legion, with a recent proliferation of government and industry guidelines attempting to establish ethical rules and boundaries for this new technology. With few exceptions, they interpret Artificial Intelligence (AI) ethics narrowly in a liberal political framework of privacy concerns, transparency, governance and non-discrimination. One of the main hurdles to establishing "ethical AI" remains how to operationalize high-level principles such that they translate to technology design, development and use in the labor process. This is because organizations can end up interpreting ethics in an ad-hoc way with no oversight, treating ethics as simply another technological problem with technological solutions, and regulations have been largely detached from the issues AI presents for workers. There is a distinct lack of supra-national standards for fair, decent, or just AI in contexts where people depend on and work in tandem with it. Topics such as discrimination and bias in job allocation, surveillance and control in the labor process, and quantification of work have received significant attention, yet questions around AI and job quality and working conditions have not. This has left workers exposed to potential risks and harms of AI. In this paper, we provide a critique of relevant academic literature and policies related to AI ethics. We then identify a set of principles that could facilitate fairer working conditions with AI. As part of a broader research initiative with the Global Partnership on Artificial Intelligence, we propose a set of accountability mechanisms to ensure AI systems foster fairer working conditions. Such processes are aimed at reshaping the social impact of technology from the point of inception to set a research agenda for the future. 
As such, the key contribution of the paper is how to bridge from abstract ethical principles to operationalizable processes in the vast field of AI and new technology at work.
Keywords: artificial intelligence; collective bargaining; ethics; industrial relations; job quality; labor; technological change; work
Year: 2022 PMID: 35910189 PMCID: PMC9334705 DOI: 10.3389/frai.2022.869114
Source DB: PubMed Journal: Front Artif Intell ISSN: 2624-8212
Summary of four illustrative AI principles.
| Source | Principles | Labor-related provisions |
|---|---|---|
| OECD | (1) Regular engagement of multiple external and internal stakeholders; (2) mechanisms for independent oversight; (3) transparency around decision-making procedures; (4) justifiable standards based on evidence; (5) clear, enforceable legal frameworks and regulations. | Affirms the importance of international labor rights. Suggests that workers should be aware of their interactions with AI systems. Encourages governments to prepare for “labor market transition” through skill development, social dialogue, and promoting increases in safety and job quality. |
| UNESCO | (1) Proportionality and “do no harm”; (2) safety and security, fairness and non-discrimination; (3) sustainability, right to privacy and data protection; (4) human oversight and determination; (5) transparency and explainability; (6) responsibility and accountability, awareness and literacy; (7) multistakeholder and adaptive governance and collaboration. | Encourages governments to implement impact assessments that monitor, among other things, the effect of AI on labor rights. Strongly emphasizes the need for skill development, retraining, and a “fair transition” for at-risk employees. States the need for ongoing research on the impact of AI systems on work. |
| European Parliament | (1) Human agency and oversight; (2) robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; (6) societal and environmental well-being; (7) accountability. | Notes concern about impact on labor market and describes workers as one of nine relevant stakeholder groups. |
| President of the United States | (1) Lawful and respectful of our Nation's values; (2) purposeful and performance-driven; (3) accurate, reliable, and effective; (4) safe, secure, and resilient; (5) understandable; (6) responsible and traceable; (7) regularly monitored; (8) transparent; (9) accountable. | None. |
Nine draft principles for the GPAI's “Fair Work for AI” project.
| # | Principle | Description |
|---|---|---|
| 1 | Guarantee decent work | The right to decent work has been extensively established. The introduction of AI to a labor process is no excuse for undermining basic labor standards. Nor can we assume that decent working conditions will be provided by default |
| 2 | Build fair supply chains | AI development is not conducted in isolation. The requirement to pursue fair conditions must apply across the supply chain, and organizations have a responsibility to use their procurement power toward that end and should be held accountable for the practices of the parties to which they subcontract parts of their work |
| 3 | Promote explainability | Workers have a right to understand how the use of AI impacts their work. Organizations must respect this right and provide detailed, understandable resources to allow workers to exercise it |
| 4 | Strive for equity | The way AI is produced means that it is never purely objective. So, the values used to design AI need to be open for discussion and evaluation with the goal of minimizing both algorithmic bias and patterned inequality |
| 5 | Make fair decisions | The automation of decision making can lead to a loss of accountability, but mere human oversight over decision making does not guarantee fair decisions either. By combining a strong right of appeal with a process for implementing lessons learned, organizations can create a robust system that harnesses the power of AI while delivering fairer decisions: one that takes into account limits on resources and socio-economic opportunities but aims to reduce injustice in their allocation as far as possible |
| 6 | Use data fairly | The concentration of data can create risks for both individual persons and groups, so limits must be put on collection (i.e., data minimization) and processes created for subjects to access their personal data in a comprehensive and explainable format. There should be opportunities for individuals to learn about and deepen their understanding of potential data risks, so that they are able to question and, when necessary, reject decisions made about them |
| 7 | Enhance safety | The right to healthy, safe working environments must be protected. Advances in algorithmic management have increased the risks of work intensification and surveillance. Organizations should seek to actively improve health and safety through their technology |
| 8 | Create future-proof jobs | The introduction of workplace AI can cause specific risks such as job destruction and deskilling. These risks can be avoided by treating the introduction of AI as an opportunity to engage in a participatory and evolutionary redesign of work. This approach should mitigate the risks above and look to use the advantages conferred by the use of AI to increase job quality |
| 9 | Advance collective worker voice | By facilitating collective bargaining, stakeholders can create the conditions for productive negotiation to determine how to turn ethical principles into ethical practice. This also helps ensure that the principles are embraced by a broader section of society, and that the developers and users of AI are held accountable |