| Literature DB >> 34790946 |
Christoph Ebell, Ricardo Baeza-Yates, Richard Benjamins, Hengjin Cai, Mark Coeckelbergh, Tania Duarte, Merve Hickok, Aurelie Jacquet, Angela Kim, Joris Krijger, John MacIntyre, Piyush Madhamshettiwar, Lauren Maffeo, Jeanna Matthews, Larry Medsker, Peter Smith, Savannah Thais.
Abstract
The recent incidents involving Dr. Timnit Gebru, Dr. Margaret Mitchell, and Google have triggered an important discussion emblematic of issues arising from the practice of AI Ethics research. We offer this paper and its bibliography as a resource to the global community of AI Ethics researchers who argue for the protection and freedom of this research community. Corporate as well as academic research settings involve responsibility, duties, dissent, and conflicts of interest. This article is meant to provide a reference point at the beginning of this decade regarding matters of consensus and disagreement on how to enact AI Ethics for the good of our institutions, society, and individuals. We identify issues that arise at the intersection of information technology, socially encoded behaviors and biases, and individual researchers' work and responsibilities. We revisit some of the most pressing problems with AI decision-making and examine the difficult relationships between corporate interests and the early years of AI Ethics research. We propose several actions we can take collectively to support researchers throughout the field of AI Ethics, especially those from marginalized groups who may face even greater barriers to speaking out and having their research amplified. We promote the global community of AI Ethics researchers and the evolution of standards accepted in our profession, guiding a technological future that makes life better for all.
Year: 2021 PMID: 34790946 PMCID: PMC8043756 DOI: 10.1007/s43681-021-00052-5
Source DB: PubMed Journal: AI Ethics ISSN: 2730-5953