Jakob Mökander, Luciano Floridi
Abstract
Ethics-based auditing (EBA) is a structured process whereby an entity's past or present behaviour is assessed for consistency with moral principles or norms. Recently, EBA has attracted much attention as a governance mechanism that may help to bridge the gap between principles and practice in AI ethics. However, important aspects of EBA, such as the feasibility and effectiveness of different auditing procedures, have yet to be substantiated by empirical research. In this article, we address this knowledge gap by providing insights from a longitudinal industry case study. Over 12 months, we observed and analysed the internal activities of AstraZeneca, a biopharmaceutical company, as it prepared for and underwent an ethics-based AI audit. While previous literature concerning EBA has focussed on proposing or analysing evaluation metrics or visualisation techniques, our findings suggest that the main difficulties large multinational organisations face when conducting EBA mirror classical governance challenges. These include ensuring harmonised standards across decentralised organisations, demarcating the scope of the audit, driving internal communication and change management, and measuring actual outcomes. The case study presented in this article contributes to the existing literature by providing a detailed description of the organisational context in which EBA procedures must be integrated to be feasible and effective.
Keywords: Artificial intelligence; Auditing; Case study; Ethics; Governance; Industry; Practice
Year: 2022 PMID: 35669570 PMCID: PMC9152664 DOI: 10.1007/s43681-022-00171-7
Source DB: PubMed Journal: AI Ethics ISSN: 2730-5953
AstraZeneca’s principles for ethical data and AI usage
| Principle | Operationalisation |
|---|---|
| Private and secure | We respect privacy and act in a manner compatible with intended data use |
| | We employ Data & AI systems that are designed to be secure |
| Explainable and transparent | We are open about the use, strengths, and limitations of our Data & AI systems |
| | We ensure assumptions are clear, algorithms are appropriately documented, decisions are explainable, and processes are in place to manage unanticipated consequences |
| Fair | We endeavour to use robust, inclusive datasets in our Data & AI systems |
| | We treat people and communities fairly and equitably in the design, process, and outcome distribution of our AI systems |
| Accountable | We apply governance proportional to the impact and risk of Data & AI systems |
| | We anticipate and mitigate the impact of potential unfavourable consequences of AI through testing, governance, and procedures |
| Human-centric and socially beneficial | Where Data & AI is involved, humans oversee the system and are accountable for driving clear, expected benefits to people and society |
| | We employ human-led governance over our AI systems. We respect human dignity and autonomy and strive to reflect this in our AI systems |