Poornima Madhavan¹, Douglas A. Wiegmann, Frank C. Lacson
¹ Department of Social and Decision Sciences, Porter Hall 208-J, Carnegie Mellon University, Pittsburgh, PA 15213, USA. madhavan@andrew.cmu.edu
Abstract
OBJECTIVE: We tested the hypothesis that automation errors on tasks easily performed by humans undermine trust in automation.
BACKGROUND: Research has revealed that the reliability of imperfect automation is frequently misperceived. We examined the manner in which the ease and type of imperfect automation errors affect trust and dependence.
METHOD: Participants performed a target detection task using an automated aid. In Study 1, the aid missed targets either on easy trials (easy miss group) or on difficult trials (difficult miss group). In Study 2, we manipulated both the ease and the type of error (miss vs. false alarm). The aid erred either on difficult trials alone (difficult errors group) or on both difficult and easy trials (easy miss group; easy false alarm group).
RESULTS: In both experiments, easy errors led participants to mistrust and disagree more with the aid on difficult trials, as compared with participants using aids that generated only difficult errors. This produced a downward shift in decision criterion for the former, leading to poorer overall performance. Misses and false alarms led to similar effects.
CONCLUSION: Automation errors on tasks that appear "easy" to the operator severely degrade trust and reliance.
APPLICATION: Potential applications include system design solutions that circumvent the negative effects of easy automation errors.
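The "downward shift in decision criterion" reported in the abstract refers to the standard signal detection theory (SDT) measures: sensitivity (d′) and criterion (c), both computed from hit and false-alarm rates. The sketch below shows the conventional formulas; the specific rates used are hypothetical illustrations, not data from the study.

```python
from statistics import NormalDist

def sdt_measures(hit_rate: float, fa_rate: float) -> tuple[float, float]:
    """Compute sensitivity (d') and decision criterion (c) from
    hit and false-alarm rates, using the standard SDT formulas:
        d' = z(H) - z(F)
        c  = -(z(H) + z(F)) / 2
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical rates: an operator who distrusts the aid and reports
# "target present" more often shows a lower (more liberal) criterion.
d1, c1 = sdt_measures(0.80, 0.20)  # difficult-errors group (illustrative)
d2, c2 = sdt_measures(0.85, 0.40)  # easy-errors group (illustrative)
print(f"c1={c1:.3f}, c2={c2:.3f}")  # c2 < c1: a downward criterion shift
```

A more negative c means a more liberal response bias, i.e., a greater tendency to declare a target present regardless of the evidence, which is one way disagreement with an apparently unreliable aid can manifest.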