Abstract
This study examines employee perceptions of the effective adoption of artificial intelligence (AI) principles in their organizations. Forty-nine interviews were conducted with employees of 24 organizations across 11 countries. Participants worked directly with AI in a range of positions, from junior data scientist to Chief Analytics Officer. The study found eleven components that could impact the effective adoption of AI principles in organizations: communication, management support, training, an ethics office(r), a reporting mechanism, enforcement, measurement, accompanying technical processes, a sufficient technical infrastructure, organizational structure, and an interdisciplinary approach. The components are discussed in the context of business code adoption theory. The findings offer a first step in understanding potential methods for the effective adoption of AI principles in organizations.
Keywords: AI ethics; AI principles; Adoption; Artificial intelligence
Year: 2022 PMID: 35818389 PMCID: PMC9259894 DOI: 10.1007/s10551-022-05051-y
Source DB: PubMed Journal: J Bus Ethics ISSN: 0167-4544
Fig. 1 An integrated research model for the effectiveness of AI principles, adapted from Kaptein and Schwartz (2008)
Summary of components that impact the effective adoption of business codes
| Effective adoption components | References |
|---|---|
| Reach | Weaver et al. ( |
| Distribution channel | Adam and Rachman-Moore ( |
| Sign-off process | Schwartz ( |
| Reinforcement | Kaptein ( |
| Communication quality | Kaptein ( |
| External communication | Singh ( |
| Local management support | Kaptein ( |
| Senior management support | Kaptein ( |
| Existence of training | Adam and Rachman-Moore ( |
| Preferred trainers | Schwartz (; Kaptein ( |
| Existence of a reporting mechanism | Kaptein ( |
| Existence of a standardized procedure | Weaver et al. ( |
| Audits | Kaptein ( |
| Penalties | Adam and Rachman-Moore ( |
| Communicating violations | Schwartz ( |
| Incentive policies | Kaptein (; Schwartz ( |
Summary of participant gender, tenure, and job position
| Participant sample breakdown | |
|---|---|
| Women | 31% |
| Men | 69% |
| Other | 0% |
| Tenure | 4.5 years [1–7 years] |
| Executives/Vice presidents | 39% |
| Managers | 55% |
| Non-managers | 6% |
Participant involvement in technical team, development, and adoption of AIPs
| | Worked directly with technical teams on AI (21/49) | Did not work directly with technical teams on AI (28/49) |
|---|---|---|
| Non-managers | 1 | 2 |
| Managers | 16 | 11 |
| Executives | 4 | 15 |

| | Involved in the development and adoption of AIPs (28/49) | Only involved in the adoption of AIPs (21/49) |
|---|---|---|
| Non-managers | 2 | 1 |
| Managers | 11 | 16 |
| Executives | 15 | 4 |
Note that while the counts appear similar across groups, the groups are not composed of the same individuals
Fig. 2 A summary of the conceptual development of the interview guide
Summary of components that could impact the effective adoption of AI principles
| Effective adoption components | Preliminary perceptions of employees | Summary of preliminary perceptions of employees |
|---|---|---|
| Reach | Potentially important | Two groups: “all AI employees should see them,” and “only managers need to worry about them” |
| Distribution channel | Potentially important | No preference between formal and informal channels; participatory design may help |
| Sign-off process | Potentially important | Not common practice today, but some organizations may look to implement in future |
| Reinforcement | Important | Multiple channels are used to increase communication frequency, starting with reinforcement as early as hiring stage |
| Communication quality | Important | Quality means clear definitions, aligning with marketing, and cultural relevance |
| External communication | Potentially important | Four groups: not shared, white paper/summary shared, shared, already publicly available; could be considered whitewashing |
| Local management support | Important | Support shown by speaking about the AIP, understanding the AIP, and general advocacy |
| Senior management support | Important | Support shown through talking about, knowing AIP; not expected to model behaviour |
| Existence of training | Important | Training important to educate non-AI practitioners on AI, and technical team on ethics; mandatory training may get pushback |
| Preferred trainers | Potentially important | Internal training important; unclear whether external training has a different impact; no clear preference between in-person or online training, or between direct and senior managers as trainers |
| Ethics office(r) | Important | Specific AI ethics officer not necessarily important, but responsibility assigned to an individual or ethics panel is vital |
| Existence of a reporting mechanism | Important | Malicious AI principles breaches use existing ethics reporting mechanism; non-malicious acts may not need a reporting mechanism but could benefit junior employees |
| Existence of a standardized procedure | Important | Formal operating procedures or board-approved policies used; high priority to develop if not currently in place |
| Audits | Potentially important | Internal audits used for policy adherence and external audits used for technical adherence |
| Penalties | Potentially important | Existing penalties used for malicious AIP breaches, but none for non-malicious breaches |
| Communicating violations | Potentially important | Not important for malicious breaches; important to share non-malicious breaches via post-mortem |
| Incentive policies | Potentially important | No policies specific to AIPs yet, general ethics incentives cover AIPs in some instances, highly dependent on operating country |
| Measurement | Potentially important | Future priority for some organizations; however, only a handful are measuring today |
| Accompanying technical processes | Important | Helps translate the AIPs into technical guidelines |
| Complete AI inventory | Important | Aids in the distribution and tracking of principles |
| Data and system compatibility | Important | Data issues and legacy systems can prevent technical adoption |
| Organizational structure | Important | Centralized AI teams make effective adoption easier |
| Interdisciplinary teams | Important | Increased diversity of thought, especially from outside the AI team is important |
| Combining AI ethics with data ethics | Important | Integration of AI ethics and data ethics and/or privacy given the importance of data to AI |
| Hiring the right people | Potentially important | Important if AI ethics talent is not available internally |
| Engaging with third party experts | Important | Technology companies, AI vendors, academia, and AI ethics experts |
| Engaging with regulators | Potentially important | Dependent on willingness of regulator to engage |
Components are identified as “Important” if > 50% of participants discussed the topic, and “Potentially important” if 15–50% of participants discussed it
| Final component (Stage 3) | Grouped component (Stage 2) | Codes (Stage 1) |
|---|---|---|
| Communication | Reach | All AI employees have read AIP |
| | Distribution channel | Location of AIPs known |
| | | Participatory design |
| | | First awareness of AIPs |
| | | AIP discussed in hiring |
| | Sign-off process | Sign-off |
| | Reinforcement | Communication channel |
| | | Lunch & learns on AI ethics |
| | | Internal conference |
| | | Employee community on AI ethics |
| | Communication quality | Internal marketing campaign |
| | | Clear definitions |
| | | Healthy AI dialogue |
| | | Cultural relevance of AIP |
| | External communication | Principles shared externally |
| Management support | Local management support | Direct manager prioritizes AIPs |
| | | Direct manager is trained on AI ethics |
| | | Direct manager is aware of AIP |
| | Senior management support | Top-down communication |
| | | Top management prioritizes AIPs |
| | | Top management is trained on AI ethics |
| | | Top management is aware of AIP |
| | | Executive engagement as barrier |
| | | Funding for more staff as a barrier |
| Training | Existence of training | Access to training |
| | | Required training |
| | | Basic knowledge on AI |
| | | Onboarding training |
| | | Certification program |
| | | Data scientists aren't trained in ethics |
| | Preferred trainer | Train the trainer |
| Ethics office(r) | Ethics office(r) | AI ethics contact |
| | | AI ethics panel |
| | | Clear responsibility for AI ethics |
| Reporting mechanism | Existence of a reporting mechanism | Reporting AI ethics concerns |
| | Existence of a standardized procedure | Reporting standardization |
| | | Board of directors approved policy |
| Enforcement | Audits | Consequences: auditing |
| | Penalties | Consequences: penalty |
| | Communicating violations | Consequences: communication |
| | Incentive policies | Reward for ethical behaviour |
| Measurement | Measurement | Measure AIP effectiveness |
| | | No measurement mechanism |
| Accompanying technical processes | Accompanying technical processes | Consistent data science tool |
| | | Adapting existing processes |
| | | Piloting process |
| | | Automated process |
| | | Integration in product/service development |
| Sufficient technical infrastructure | Complete AI inventory | AI project inventory |
| | Data and system compatibility | Legacy systems and data |
| Organizational structure | Organizational structure | Organizational structure as barrier |
| Interdisciplinary approach | Interdisciplinary teams | Interdisciplinary teams |
| | Combining AI ethics with data ethics | Data ethics combined with AI ethics |
| | Hiring the right people | Hiring the right people |
| | Engaging with third party experts | Leading engagement with regulators |
| | | Leading engagement with academia |