Benchmarking Aided Decision Making in a Signal Detection Task.

Megan L Bartlett, Jason S McCarley.

Abstract

OBJECTIVE: A series of experiments examined human operators' strategies for interacting with highly (93%) reliable automated decision aids in a binary signal detection task.
BACKGROUND: Operators often interact with automated decision aids in a suboptimal way, achieving performance levels lower than predicted by a statistically ideal model of information integration. To better understand operators' inefficient use of decision aids, we compared participants' automation-aided performance levels with the predictions of seven statistical models of collaborative decision making.
METHOD: Participants performed a binary signal detection task that asked them to classify random dot images as either blue or orange dominant. They made their judgments either unaided or with assistance from a 93% reliable automated decision aid that provided either graded (Experiments 1 and 3) or binary (Experiment 2) cues. We compared automation-aided performance with the predictions of seven statistical models of collaborative decision making, including a statistically optimal model and Robinson and Sorkin's contingent criterion model.
RESULTS AND CONCLUSION: Automation-aided sensitivity hewed closest to the predictions of the two least efficient collaborative models, well short of statistically ideal levels. Performance was similar whether the aid provided graded or binary judgments. Model comparisons identified potential strategies by which participants integrated their judgments with the aid's.
APPLICATION: Results lend insight into participants' automation-aided decision strategies and provide benchmarks for predicting automation-aided performance levels.
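The "statistically ideal" benchmark referenced in the abstract can be illustrated numerically. In standard signal detection theory, an unbiased binary classifier with accuracy p has sensitivity d' = 2·z(p), and the optimal integration of two independent Gaussian observers yields a team sensitivity equal to the root sum of squares of the individual sensitivities. A minimal sketch (the unaided operator d' of 1.5 is a hypothetical value for illustration, not a figure from the paper):

```python
from statistics import NormalDist

def d_prime_from_accuracy(acc: float) -> float:
    # For an unbiased observer with equal hit and correct-rejection
    # rates, sensitivity is d' = 2 * z(accuracy).
    return 2 * NormalDist().inv_cdf(acc)

def ideal_team_d_prime(d_operator: float, d_aid: float) -> float:
    # Statistically optimal integration of two independent Gaussian
    # observers: d'_team = sqrt(d'_operator^2 + d'_aid^2).
    return (d_operator**2 + d_aid**2) ** 0.5

d_aid = d_prime_from_accuracy(0.93)          # ~2.95 for a 93%-reliable aid
d_team = ideal_team_d_prime(1.5, d_aid)      # ideal aided sensitivity
print(round(d_aid, 2), round(d_team, 2))
```

The abstract's finding is that observed aided sensitivity fell well short of this `d_team` ceiling, closer to the least efficient of the seven collaborative models compared.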

Keywords:  contingent criterion model; decision-making strategies; human–automation interaction; signal detection theory

Year:  2017        PMID: 28796974     DOI: 10.1177/0018720817700258

Source DB:  PubMed          Journal:  Hum Factors        ISSN: 0018-7208            Impact factor:   2.888


Related articles (4 in total):

1.  Visual search behavior and performance in luggage screening: effects of time pressure, automation aid, and target expectancy.

Authors:  Tobias Rieger; Lydia Heilmann; Dietrich Manzey
Journal:  Cogn Res Princ Implic       Date:  2021-02-25

2.  Adapting to the algorithm: how accuracy comparisons promote the use of a decision aid.

Authors:  Garston Liang; Jennifer F Sloane; Christopher Donkin; Ben R Newell
Journal:  Cogn Res Princ Implic       Date:  2022-02-08

3.  Judging One's Own or Another Person's Responsibility in Interactions With Automation.

Authors:  Nir Douer; Joachim Meyer
Journal:  Hum Factors       Date:  2020-08-04       Impact factor: 2.888

4.  Challenging presumed technological superiority when working with (artificial) colleagues.

Authors:  Tobias Rieger; Eileen Roesler; Dietrich Manzey
Journal:  Sci Rep       Date:  2022-03-08       Impact factor: 4.379

北京卡尤迪生物科技股份有限公司 (Beijing Coyote Bioscience Co., Ltd.) © 2022-2023.