| Literature DB >> 34934006 |
Michael S Bernstein1, Margaret Levi2,3, David Magnus4, Betsy A Rajala5, Debra Satz6, Charla Waeiss5.
Abstract
Researchers in areas as diverse as computer science and political science must increasingly navigate the possible risks of their research to society. However, the history of medical experiments on vulnerable individuals influenced many research ethics reviews to focus exclusively on risks to human subjects rather than risks to human society. We describe an Ethics and Society Review board (ESR), which fills this moral gap by facilitating ethical and societal reflection as a requirement to access grant funding: Researchers cannot receive grant funding from participating programs until the researchers complete the ESR process for their proposal. Researchers author an initial statement describing their proposed research's risks to society, to subgroups within society, and globally, and commit to mitigation strategies for these risks. An interdisciplinary faculty panel iterates with the researchers to refine these risks and mitigation strategies. We describe a mixed-method evaluation of the ESR over 1 y, in partnership with a large artificial intelligence grant program at our university. Surveys and interviews of researchers who interacted with the ESR found 100% (95% CI: 87 to 100%) were willing to continue submitting future projects to the ESR, and 58% (95% CI: 37 to 77%) felt that it had influenced the design of their research project. The ESR panel most commonly identified issues of harms to minority groups, inclusion of diverse stakeholders in the research plan, dual use, and representation in datasets. These principles, paired with possible mitigation strategies, offer scaffolding for future research designs.
Keywords: computer science; ethics; machine learning; societal consequences
Year: 2021 PMID: 34934006 PMCID: PMC8719852 DOI: 10.1073/pnas.2117261118
Source DB: PubMed Journal: Proc Natl Acad Sci U S A ISSN: 0027-8424 Impact factor: 11.205
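The abstract reports exact-looking binomial confidence intervals (100%, 95% CI 87 to 100%; 58%, 95% CI 37 to 77%). The record does not state the interval method or the number of respondents, but the figures are consistent with exact Clopper-Pearson intervals on roughly 26 survey respondents; the following stdlib-only sketch (a hypothetical reconstruction, not the authors' stated method) reproduces them by bisecting the binomial CDF:

```python
import math

def binom_cdf(k: int, n: int, p: float) -> float:
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def clopper_pearson(k: int, n: int, alpha: float = 0.05) -> tuple:
    """Exact (Clopper-Pearson) two-sided CI for k successes in n trials,
    found by sign-change bisection on the binomial CDF."""
    def solve(f, lo=0.0, hi=1.0):
        # f is monotone on [0, 1] with a sign change; bisect to the root.
        for _ in range(60):
            mid = (lo + hi) / 2
            if f(lo) * f(mid) <= 0:
                hi = mid
            else:
                lo = mid
        return (lo + hi) / 2

    lower = 0.0 if k == 0 else solve(lambda p: 1 - binom_cdf(k - 1, n, p) - alpha / 2)
    upper = 1.0 if k == n else solve(lambda p: binom_cdf(k, n, p) - alpha / 2)
    return lower, upper

# Hypothetical n = 26, inferred only from the rounding of the reported CIs.
lo, hi = clopper_pearson(26, 26)   # all respondents willing to return
# lo ≈ 0.868, hi = 1.0 → rounds to the reported "87 to 100%"
```

With `clopper_pearson(15, 26)` (15/26 ≈ 58%), the same routine yields roughly 0.37 to 0.77, matching the second reported interval.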
Fig. 1. The ESR process accepts initial statements from researchers when they submit the grant, then iterates with them prior to releasing funding.
The IRB is focused on risks to human subjects, whereas the ESR is focused on risks to society, to groups within society, and to the world.
| | IRB | ESR |
| --- | --- | --- |
| Focus | Significant risks to human subjects | Significant risks to societies, to groups within those societies, and to the world |
| Requirement | Data collection cannot begin, and funds cannot be spent to pay research participants, until IRB protocol is approved | Grant funding cannot be released by the funding program until the ESR process has completed |
| Submission | Specifics of research design, including any procedure followed by research participants and any materials shown to research participants | Goal of research, direct and indirect stakeholders, and higher-level research design. Articulation of principles to mitigate negative outcomes and description of how those principles are instantiated in the research design |
| Timing | Regular (e.g., monthly) deadline | Synchronized with grant funding programs |
| Possible outcomes | Approval, return with comments, or (rarely) rejection | Approval, return with comments, request for a synchronous conversation, or (rarely) rejection |
| Amendment and renewal | Protocols must be amended if the research design changes, and expire after a fixed period (e.g., 3 y) | Protocols will be examined annually as part of the researcher’s annual grant report to the funding institution |
To enable engagement with the ESR early in the research lifecycle, researchers work with the ESR before funding is released.
Fig. 2. All participants were willing to engage in the ESR process again.
Fig. 3. Sixty-seven percent of researchers who iterated with the ESR, and 58% of all researchers, felt that the ESR process had influenced the design of their project.
Risk themes raised in the ESR process
| Theme | Researcher statement frequency | Panelist response frequency | Refers to issues that pertain to… |
| --- | --- | --- | --- |
| Representativeness | 18 | 6 | Any risks or concerns that arise from insufficient or unequal representation of data, participants, or intended user population (e.g., excluding international or low-income students in a study of student well-being) |
| IRB purview | 14 | 6 | Any risks or concerns regarding the research that fall under IRB purview (e.g., participant consent, data security, etc.) |
| Diverse design and deployment | 13 | 8 | Incorporating relevant stakeholders and diverse perspectives in the project design and deployment processes (e.g., consulting with parents who have been historically disadvantaged to develop fairer school choice mechanisms) |
| Dual use | 10 | 8 | Any risks or concerns that arise due to the technology being coopted for nefarious purposes or by motivated actors (e.g., an authoritarian government employing mass surveillance methods) |
| Harms to society | 10 | 5 | Potential harms to any population that could arise following from the research (e.g., job loss due to automation) |
| Harms to subgroups | 7 | 11 | Potential harms to specific subgroup populations that could arise following from the research (e.g., technical barriers to using an AI that are prohibitive for poorer populations) |
| Privacy | 4 | 1 | Any risks or concerns related to general expectations of privacy or control over personally identifiable information (e.g., consequences of mass surveillance systems for individuals’ control over their information) |
| Research transparency | 3 | 0 | Sufficiently and accessibly providing information such that others can understand and effectively employ the research, where appropriate (e.g., training modules for interpreting an AI model) |
| Accountability | 2 | 2 | Questions of assigning responsibility or holding actors accountable for potential harms that may arise (e.g., how to assign responsibility for a mistake when AI is involved) |
| Other | 2 | 3 | Other issues not covered above (e.g., intellectual property concerns) |
| Tool or user error | 2 | 4 | Any risks or concerns that arise from tool/model malfunction or user error (e.g., human misinterpretation of an AI model in decision-making) |
| Collaborator | 1 | 1 | Any risks or concerns that specifically relate to a collaborator on the research project (e.g., whether a collaborator could credibly commit to a project on inclusivity when their platform was notorious for exclusive and harmful behavior) |
| Methods and merit | 1 | 2 | Any risks or concerns reserved for methods and merit reviews of the grant proposal (e.g., whether model specifications are appropriate for the goals of the research) |
| Publicness | 0 | 2 | Questions of using publicly available data for research when those that generated the data are unaware of researchers’ intended use of their data (e.g., use of Twitter data without obtaining express consent from affected Twitter users) |
The researchers, in their ESR statements, were most likely to raise issues of representativeness. The panelists, in their feedback, were most likely to raise issues regarding harms to subgroups. Both researchers and panelists also commonly focused on diverse design and deployment, dual-use concerns, harms to society, and issues pertaining to IRB purview.
Fig. 4. Researchers were generally in favor of the ESR’s being empowered to reject proposals if necessary.