Andreas Cebulla, Zygmunt Szpak, Catherine Howell, Genevieve Knight, Sazzad Hussain.
Abstract
Artificial Intelligence (AI) is taking centre stage in economic growth and business operations alike. Public discourse about the practical and ethical implications of AI has mainly focussed on the societal level. There is an emerging knowledge base on AI risks to human rights around data security and privacy concerns. A separate strand of work has highlighted the stresses of working in the gig economy. This prevailing focus on human rights and gig impacts has been at the expense of a closer look at how AI may be reshaping traditional workplace relations and, more specifically, workplace health and safety. To address this gap, we outline a conceptual model for developing an AI Work Health and Safety (WHS) Scorecard as a tool to assess and manage the potential risks and hazards to workers resulting from AI use in a workplace. A qualitative, practice-led research study of AI adopters was used to generate and test a novel list of potential AI risks to worker health and safety. Risks were identified after cross-referencing Australian AI Ethics Principles and Principles of Good Work Design with AI ideation, design and implementation stages captured by the AI Canvas, a framework otherwise used for assessing the commercial potential of AI to a business. The unique contribution of this research is the development of a novel matrix itemising currently known or anticipated risks to the WHS and ethical aspects at each AI adoption stage.
Keywords: AI Canvas; Australia; Ethics principles; Risk assessment; WHS/OHS; Workers
Year: 2022 PMID: 35582329 PMCID: PMC9098376 DOI: 10.1007/s00146-022-01460-9
Source DB: PubMed Journal: AI Soc ISSN: 0951-5666
Fig. 1 Key characteristics of work. Source: Safe Work Australia 2015, p. 9
Higher-level aggregates of the AI ethics principles
| Human condition | Worker safety | Oversight |
|---|---|---|
Source: Authors, based on DISER (undated)
Fig. 2 Conceptual integration of AI Canvas, AI ethics principles and safe work characteristics. Sources: Agrawal et al. (2018a); DISER (undated); Safe Work Australia (2015)
Revised AI WHS Scorecard with examples of AI WHS risks identified in the literature and the workshops
The eight Australian AI Ethics Principles (human, social and environmental wellbeing; human-centred values; fairness; privacy protection and security; reliability and safety; transparency and explainability; contestability; accountability) are grouped under the three higher-level aggregates Human condition, Worker safety and Oversight, which head the risk columns below. Within each main stage of development, each row corresponds to an AI Canvas item.

| Main stage of development | Human condition | Worker safety | Oversight | Examples* | Characteristics of work & hazards/risks (Safe Work Australia) |
|---|---|---|---|---|---|
| Ideation | • Using AI when an alternative solution may be more appropriate or humane. [5,12]<br>• The system displacing rather than augmenting human decisions. [3]<br>• Augmenting or displacing human decisions with differential impact on workers who are directly or indirectly affected. [7,9,13]<br>• The resolution of uncertainty affecting ethical, moral or social principles. [9,11,14] | • Overconfidence in or overreliance on the AI system, resulting in loss of or diminished due diligence. [3,7] | • Inadequate or no specification and/or communication of the purpose for AI use/an identified AI solution. [2,7,9,15,16] | Predicting a worker's physical or mental exhaustion levels for monitoring purposes without instituting strategies to prevent exhaustion in the future (worker safety) | Psychological: work demands |
| Ideation | • (Insufficient consideration given to) unintended consequences of false negatives and false positives. [2,4,11,12]<br>• AI being used out of scope. [3,4,7]<br>• AI undermining company core values and societal expectations. [5,14]<br>• AI system undermining human capabilities. [5]<br>• Trading off personal flourishing (intrinsic value) in favour of organisational gain (instrumental good). [14] | • Technical failure, human error, financial failure, security breach, data loss, injury, industrial accident/disaster. [1,7,16]<br>• Impacting on other processes or essential services affecting workflow or working conditions. [1,13] | • Insufficient/ineffective transparency, contestability and accountability at the design stage and throughout the development process. [12,16] | False negatives or false positives disadvantage or victimise a worker, causing stress, overwork, ergonomic risks, anxiety, boredom, fatigue and burnout, potentially building barriers between people, facilitating harassment or bullying (human condition) | Psychological: work demands |
| Ideation | • Inequitable or burdensome treatment of workers. [1,10]<br>• Gaming (reward hacking) of the AI system undermining workplace relations. [4,16]<br>• Worker attributing greater intelligence or empathy to the AI system than appropriate. [3]<br>• Context stripping from communication between employees. [3]<br>• Worker manipulation or exploitation. [5,7]<br>• Undue reliance on AI decisions. [3,7] | • Adversely affecting worker or general rights (a safe workplace/physical integrity, pay at the right rate/EA, adherence to the National Employment Standards, privacy). [1,7]<br>• Unnecessary harm, avoidable death or disabling injury/ergonomics. [1,7,8,16]<br>• Physical and psychosocial hazards. [3,16] | • Inadequate or closed chain of accountability, reporting and governance structure for AI ethics within the organisation, with limited or no scope for review. [7,10,14]<br>• (Lack of a process) for triggering human oversight or checks and balances, so that algorithmic decisions cannot be challenged, contested or improved. [3,9]<br>• AI shifting responsibility outside existing managerial or company protocols and channels of internal accountability (via out- or sub-contracting). [13] | A workflow management system disproportionately, repeatedly or persistently assigns some workers to challenging tasks that others with principally identical roles can thus avoid (human condition) | Cognitive: complexity and duration |
| Development | • Chosen outcome measure not aligning with healthy/collegial workplace dynamics. [1,7]<br>• Outcome measure resulting in a worker–AI interface adversely affecting the status of a worker or workers in the workplace. [3] | • Performance measures differentially and/or adversely affecting work tasks and processes. [2,6,10] | • Workers (not) able to access and/or modify factors driving the outcomes of decisions. [2,3,9,16] | Efficiency improvements have differential effects across the workforce, improving conditions for some but not others, or creating or promoting competitive behaviours, undermining collaboration or collegial relations (human condition, worker safety) | Psychological: organisational justice |
| Development | • Training data not representing the target domain in the workplace. [7,15]<br>• Acquisition, collection and analysis of data revealing (confidential) information out of scope of the project. [7]<br>• Data not being fit for purpose. [5,8,11,16] | • Cyber security vulnerability. [1,11]<br>• (In)sufficient consideration given to interconnectivity/interoperability of AI systems. [2,9] | • Inadequate data logs (inputs/outputs of the AI) or data narratives (mapping the origins and lineage of data), adversely affecting the ability to conduct data audits or routine monitoring and evaluation. [7,9,10,12]<br>• (Rapid AI introduction resulting in) inadequate testing of AI in a production environment and/or for impact on different (target) populations. [2,4] | Training data for a new system of leave and sick leave projections include only more recent workplace recruits with shorter tenure, for whom better contextual data are available (human condition) | Psychological: organisational justice |
| Development | • Discontinuity of service. [1,13]<br>• Worker unable or unwilling to provide or permit data to be used as input to the AI. [9,15] | • Impacting on the physical workplace (layout, design, environmental conditions: temperature, humidity). [10,15]<br>• (In)secure data storage and cyber security vulnerability. [1,2,7,10,16]<br>• Worker competences and skills (not) meeting AI requirements. [13]<br>• Boundary creep: data collection (not) ceasing outside the workplace. [8,15] | • Insufficient worker understanding of safety culture and safe behaviours applied to data and data processes within AI. [8,13]<br>• Partial disclosure or audit of data uses (e.g., due to commercial considerations, proprietary knowledge). [14,15] | A workforce planning tool omits timely correction for seasonal factors, trends or shocks, leading to a shortage of staff or produce at key times (human condition) | Cognitive: complexity and duration |
| Application | • Assessment processes requiring review due to a new approach or tool. [9]<br>• Identifiable personal data retained longer than necessary for the purpose for which it was collected and/or processed. [10] | • Inadequate integration of AI operational management into routine maintenance to ensure the AI continues to work as initially specified. [3,4,8,16]<br>• No offline systems or processes in place to test and review the veracity of AI predictions/decisions. [9] | | A new HR recruitment process using AI achieves a more gender-balanced intake of new staff. Do the data input or algorithm require review to maintain this outcome? (worker safety) | Cognitive/psychological: information processing load, complexity and duration, organisational justice |
Legend: Numbered citations refer to the following sources
*Examples pertain to the human condition ethics principle (first column) and the AI Canvas item in the same row. For the AI Canvas ‘Feedback’ item, the example relates to the worker safety ethics principle
[1] ADAPT Centre et al. (2017); [2] AiGlobal (undated); [3] Amodei et al. (2016); [4] Beard and Longstaff (2018); [5] IEEE (undated); [6] Matsumoto and Ema (2020); [7] ODI (2019); [8] TNO (undated); [9] UK Cabinet Office (2020); [10] van de Poel (2016); [11] Walmsley (2020); [12] WEF (2020); [13] Wikipedia (2020); [14] Public online workshop
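The scorecard above is, in effect, a matrix indexed by AI adoption stage and ethics-principle aggregate. A minimal sketch of how an adopter might hold and query such a matrix in software is shown below; all names (`STAGES`, `AGGREGATES`, `risks_for`) and the sample entries are illustrative assumptions, not part of the published Scorecard.

```python
# Hypothetical sketch: the AI WHS Scorecard as a lookup structure,
# keyed by (adoption stage, ethics-principle aggregate).
# Entries below are abbreviated examples drawn from the matrix above.

STAGES = ("ideation", "development", "application")
AGGREGATES = ("human condition", "worker safety", "oversight")

scorecard = {
    ("ideation", "worker safety"): [
        "Overconfidence in or overreliance on the AI system",
    ],
    ("development", "oversight"): [
        "Inadequate data logs or data narratives",
    ],
}


def risks_for(stage: str, aggregate: str) -> list:
    """Return the recorded risks for one cell of the matrix (empty if none)."""
    if stage not in STAGES or aggregate not in AGGREGATES:
        raise ValueError("unknown stage or aggregate")
    return scorecard.get((stage, aggregate), [])
```

A risk assessment could then iterate over all stage/aggregate pairs and flag empty cells as areas not yet reviewed, mirroring how the Scorecard is meant to surface gaps in an organisation's AI risk coverage.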