| Literature DB >> 35431820 |
Leslie M Blaha, Mitchell Abrams, Sarah A Bibyk, Claire Bonial, Beth M Hartzler, Christopher D Hsu, Sangeet Khemlani, Jayde King, Robert St Amant, J Gregory Trafton, Rachel Wong.
Abstract
How do we gauge understanding? Tests of understanding, such as Turing's imitation game, are numerous; yet attempts to achieve a state of understanding are not satisfactory assessments. Intelligent agents designed to pass one test of understanding often fall short of others. Rather than treating understanding as a system state, in this paper we argue that understanding is a process that changes over time and with experience. The only window into that process is through the lens of natural language. Usefully, failures of understanding reveal breakdowns in the process. We propose a set of natural language-based probes that can be used to map the degree of understanding a human or intelligent system has achieved through combinations of successes and failures.
Keywords: behavioral measurement; common ground; explainable AI; human-machine teaming; human-robot interaction; mental models; mutual understanding; natural language processing
Year: 2022 PMID: 35431820 PMCID: PMC9008134 DOI: 10.3389/fnsys.2022.800280
Source DB: PubMed Journal: Front Syst Neurosci ISSN: 1662-5137