Edward Dieterle, Chris Dede, Michael Walker
Abstract
Our synthetic review of the relevant and related literatures on the ethics and effects of using AI in education reveals five qualitatively distinct and interrelated divides associated with access, representation, algorithms, interpretations, and citizenship. We open our analysis by probing the ethical effects of algorithms and how teams of humans can plan for and mitigate bias when using AI tools and techniques to model and inform instructional decisions and predict learning outcomes. We then analyze the upstream divides that feed into and fuel the algorithmic divide, first investigating access (who does and does not have access to the hardware, software, and connectivity necessary to engage with AI-enhanced digital learning tools and platforms) and then representation (the factors making data either representative of the total population or over-representative of a subpopulation's preferences, thereby preventing objectivity and biasing understandings and outcomes). After that, we analyze the divides that are downstream of the algorithmic divide associated with interpretation (how learners, educators, and others understand the outputs of algorithms and use them to make decisions) and citizenship (how the other divides accumulate to impact interpretations of data by learners, educators, and others, in turn influencing behaviors and, over time, skills, culture, economic, health, and civic outcomes). At present, lacking ongoing reflection and action by learners, educators, educational leaders, designers, scholars, and policymakers, the five divides collectively create a vicious cycle and perpetuate structural biases in teaching and learning. However, increasing human responsibility and control over these divides can create a virtuous cycle that improves diversity, equity, and inclusion in education. 
We conclude the article by looking forward and discussing ways to increase educational opportunity and effectiveness for all by mitigating bias through a cycle of progressive improvement.
© Educational Testing Service, under exclusive license to Springer-Verlag London Ltd., part of Springer Nature 2022.
Keywords: Artificial intelligence; Education; Equity; Ethics
Year: 2022 PMID: 36185064 PMCID: PMC9513289 DOI: 10.1007/s00146-022-01497-w
Source DB: PubMed Journal: AI Soc ISSN: 0951-5666
Fig. 1 The cyclical effects of using artificial intelligence in education
Fig. 2 The upstream and downstream effects of ethical decisions involving education and artificial intelligence
Fig. 3 Daily K–12 student usage of educational technologies from February through December 2020 by more affluent and less affluent U.S. school districts. Note: LearnPlatform (2021), EdTech Engagement & Digital Learning Equity Gaps. Reprinted with permission from K. Rectanus, September 7, 2021