Summer Rebensky1, Kendall Carmody2, Cherrise Ficke2, Meredith Carroll2, Winston Bennett3.
Abstract
Human-agent teaming (HAT) is becoming more commonplace across industry, military, and consumer settings. Agents are becoming more advanced, more integrated, and more responsible for tasks previously assigned to humans. In addition, the dyadic human-agent teaming nature is evolving from a one-one pair to one-many, in which the human is working with numerous agents to accomplish a task. As capabilities become more advanced and humanlike, the best method for humans and agents to effectively coordinate is still unknown. Therefore, current research must start diverting focus from how many agents can a human manage to how can agents and humans work together effectively. Levels of autonomy (LOAs), or varying levels of responsibility given to the agents, implemented specifically in the decision-making process could potentially address some of the issues related to workload, stress, performance, and trust. This study sought to explore the effects of different LOAs on human-machine team coordination, performance, trust, and decision making in hand with assessments of operator workload and stress in a simulated multi-unmanned aircraft vehicle (UAV) intelligence surveillance and reconnaissance (ISR) task. The results of the study can be used to identify human factor roadblocks to effective HAT and provide guidance for future designs of HAT. Additionally, the unique impacts of LOA and autonomous decision making by agents on trust are explored.Entities:
Keywords: autonomous decision making; distributed teams; human agent teaming; level of autonomy; multi-agent teaming
Year: 2022 PMID: 35669290 PMCID: PMC9164219 DOI: 10.3389/frobt.2022.782134
Source DB: PubMed Journal: Front Robot AI ISSN: 2296-9144
Sheridan and Verplank's (1978) structured LOA model.
| Level of automation | Definition |
|---|---|
| 1 | Automation offers no assistance, humans must do it all |
| 2 | The computer offers a complete set of decision/action alternatives, or |
| 3 | Narrows the selection down to a few, or |
| 4 | Suggests one, and |
| 5 | Executes that suggestion if the human approves, or |
| 6 | Allows the human a restricted time to veto before automatic execution, or |
| 7 | Executes automatically, then necessarily informs the human, or |
| 8 | Informs the human after execution only if asked, or |
| 9 | Informs the human after execution if the automation decides to |
| 10 | The automation decides everything and acts autonomously, ignoring the human |
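The ten levels above can be sketched as a small enum with a helper that captures the key boundary in the model: through level 5 nothing executes without the human, level 6 grants only a veto window, and levels 7 and above execute on their own. This is an illustrative sketch only; the names and the helper function are not from the paper.

```python
from enum import IntEnum

class LOA(IntEnum):
    """Sheridan and Verplank's (1978) ten levels of automation (labels are illustrative)."""
    NO_ASSISTANCE = 1          # human must do it all
    OFFERS_ALTERNATIVES = 2    # complete set of decision/action alternatives
    NARROWS_ALTERNATIVES = 3   # narrows the selection down to a few
    SUGGESTS_ONE = 4           # suggests one alternative
    EXECUTES_IF_APPROVED = 5   # executes the suggestion if the human approves
    EXECUTES_UNLESS_VETOED = 6 # restricted time to veto before automatic execution
    EXECUTES_THEN_INFORMS = 7  # executes, then necessarily informs the human
    INFORMS_IF_ASKED = 8       # informs after execution only if asked
    INFORMS_IF_IT_DECIDES = 9  # informs after execution if the automation decides to
    FULLY_AUTONOMOUS = 10      # decides everything and acts, ignoring the human

def human_must_act_before_execution(level: LOA) -> bool:
    """True when no action is taken without an explicit human decision (levels 1-5)."""
    return level <= LOA.EXECUTES_IF_APPROVED
```

For example, the study's Consent condition (level 5) requires human approval before execution, while the Veto condition (level 7) does not.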
Level of autonomy conditions.
| Level of autonomy | Level from Sheridan and Verplank (1978) | Agent responsibilities | Human involvement |
|---|---|---|---|
| Manual | 1 (Computer offers no assistance) | Detects objects but does not offer any assistance with identification | Determines if the object is a friendly target, neutral target, or enemy target to update the mission map |
| Advice | 4 (Suggests one) | Detects objects and offers a suggestion on potential target type | Reviews agent suggestion and determines if the object is a friendly target, neutral target, or enemy target to update the mission map |
| Consent | 5 (Executes automatically if human approves) | Detects objects and marks target type | Reviews agent mark and either confirms or changes the agent's decision |
| Veto | 7 (Executes and then informs human) | Detects objects and marks target type | Can review the agent's decision and change it if needed |
FIGURE 1 Experimental conditions.
FIGURE 2 Experimental testbed.
Mission performance scores by condition.
| Construct | Manual | Advice | Consent | Veto |
|---|---|---|---|---|
| Performance | 84.27% | 85.00% | 89.10% | 88.54% |
| Stress | 10.12 | 10.58 | 7.43 | 8.04 |
| Workload | 61.17 | 60.78 | 54.04 | 52.56 |
FIGURE 3 Mislabels corrected and missing targets found by condition.
FIGURE 4 Average trust and distrust ratings by condition.
FIGURE 5 Average team coordination and mission effectiveness scores by condition.