
Intelligent Agent Transparency in Human-Agent Teaming for Multi-UxV Management.

Joseph E Mercado, Michael A Rupp, Jessie Y C Chen, Michael J Barnes, Daniel Barber, Katelyn Procci.

Abstract

OBJECTIVE: We investigated the effects of level of agent transparency on operator performance, trust, and workload in the context of human-agent teaming for multirobot management.
BACKGROUND: Participants played the role of a heterogeneous unmanned vehicle (UxV) operator and were instructed to complete various missions by giving orders to UxVs through a computer interface. An intelligent agent (IA) assisted the participant by recommending two plans (a top recommendation and a secondary recommendation) for every mission.
METHOD: A within-subjects design with three levels of agent transparency was employed. There were eight missions in each of three experimental blocks, grouped by level of transparency. During each experimental block, the IA was incorrect on three of the eight missions due to external information (e.g., commander's intent and intelligence). Operator performance, trust, workload, and usability data were collected.
RESULTS: Results indicate that operator performance, trust, and perceived usability increased as a function of transparency level. Subjective and objective workload data indicate that participants' workload did not increase as a function of transparency. Furthermore, response time did not increase as a function of transparency.
CONCLUSION: Unlike previous research, which showed that increased transparency improved performance and trust calibration at the cost of greater workload and longer response time, our results support the benefits of transparency for performance effectiveness without additional costs.
APPLICATION: The current results will facilitate the implementation of IAs in military settings and will provide useful data for the design of heterogeneous UxV teams.
© 2016, Human Factors and Ergonomics Society.

Keywords:  human–agent teaming; intelligent agent transparency; multi-UxV management

Year:  2016        PMID: 26867556     DOI: 10.1177/0018720815621206

Source DB:  PubMed          Journal:  Hum Factors        ISSN: 0018-7208            Impact factor:   2.888


Related articles: 8 in total

1.  A Methodology for Evaluating Operator Usage of Machine Learning Recommendations for Power Grid Contingency Analysis.

Authors:  John Wenskovitch; Brett Jefferson; Alexander Anderson; Jessica Baweja; Danielle Ciesielski; Corey Fallon
Journal:  Front Big Data       Date:  2022-06-14

2.  Design of Proactive Interaction for In-Vehicle Robots Based on Transparency.

Authors:  Jianmin Wang; Tianyang Yue; Yujia Liu; Yuxi Wang; Chengji Wang; Fei Yan; Fang You
Journal:  Sensors (Basel)       Date:  2022-05-20       Impact factor: 3.847

3.  Human-Autonomy Teaming: A Review and Analysis of the Empirical Literature. [Review]

Authors:  Thomas O'Neill; Nathan McNeese; Amy Barron; Beau Schelble
Journal:  Hum Factors       Date:  2020-10-22       Impact factor: 3.598

4.  Exploring the influence of a user-specific explainable virtual advisor on health behaviour change intentions.

Authors:  Amal Abdulrahman; Deborah Richards; Ayse Aysin Bilgin
Journal:  Auton Agent Multi Agent Syst       Date:  2022-04-04       Impact factor: 2.475

5.  The influence of interdependence and a transparent or explainable communication style on human-robot teamwork.

Authors:  Ruben S Verhagen; Mark A Neerincx; Myrthe L Tielman
Journal:  Front Robot AI       Date:  2022-09-08

6.  Inferring Trust From Users' Behaviours; Agents' Predictability Positively Affects Trust, Task Performance and Cognitive Load in Human-Agent Real-Time Collaboration.

Authors:  Sylvain Daronnat; Leif Azzopardi; Martin Halvey; Mateusz Dubiel
Journal:  Front Robot AI       Date:  2021-07-08

7.  Learning From the Slips of Others: Neural Correlates of Trust in Automated Agents.

Authors:  Ewart J de Visser; Paul J Beatty; Justin R Estepp; Spencer Kohn; Abdulaziz Abubshait; John R Fedota; Craig G McDonald
Journal:  Front Hum Neurosci       Date:  2018-08-10       Impact factor: 3.169

8.  Adaptive trust calibration for human-AI collaboration.

Authors:  Kazuo Okamura; Seiji Yamada
Journal:  PLoS One       Date:  2020-02-21       Impact factor: 3.240
