| Literature DB >> 36097023 |
Arsha Ali, Hebert Azevedo-Sa, Dawn M. Tilbury, Lionel P. Robert.
Abstract
Effective human-robot collaboration requires the appropriate allocation of indivisible tasks between humans and robots. A task allocation method that appropriately makes use of the unique capabilities of each agent (either a human or a robot) can improve team performance. This paper presents a novel task allocation method for heterogeneous human-robot teams based on artificial trust from a robot that can learn agent capabilities over time and allocate both existing and novel tasks. Tasks are allocated to the agent that maximizes the expected total reward. The expected total reward incorporates trust in the agent to successfully execute the task as well as the task reward and cost associated with using that agent for that task. Trust in an agent is computed from an artificial trust model, where trust is assessed along a capability dimension by comparing the belief in agent capabilities with the task requirements. An agent's capabilities are represented by a belief distribution and learned using stochastic task outcomes. Our task allocation method was simulated for a human-robot dyad. The team total reward of our artificial trust-based task allocation method outperforms other methods both when the human's capabilities are initially unknown and when the human's capabilities belief distribution has converged to the human's actual capabilities. Our task allocation method enables human-robot teams to maximize their joint performance.
Year: 2022 PMID: 36097023 PMCID: PMC9468009 DOI: 10.1038/s41598-022-19140-5
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1: Overview of our artificial trust-based task allocation method. In task allocation, each incoming indivisible task must be allocated to and executed by one agent on the human–robot team. An artificial trust-based task allocation method can be used to allocate tasks by considering trust in each agent from the robot’s perspective, the cost of each agent, and the task reward.
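Concretely, the allocation criterion described in this caption and the abstract can be written in an assumed form (the symbols below are illustrative, not the paper's notation):

```latex
i^{*} \;=\; \arg\max_{i \in \{h,\, r\}} \;
\underbrace{\tau_i(\bar{\lambda})}_{\text{trust}} \cdot
\underbrace{r(\bar{\lambda})}_{\text{task reward}} \;-\;
\underbrace{c_i(\bar{\lambda})}_{\text{agent cost}}
```

where $\bar{\lambda}$ is the task's capability-requirement vector, $\tau_i$ is the robot's trust that agent $i$ (human $h$ or robot $r$) can meet those requirements, and $i^{*}$ is the agent that receives the task.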
Figure 2: Flowchart with the main ideas of our artificial trust-based task allocation method for a team consisting of one human and one robotic agent. The process starts with an incoming task (black dot) defined by a set of task capability requirements. In this case, the incoming task is defined by two capability dimensions. The trust in each agent is computed using the capabilities belief distribution of that agent. The task reward and agent costs are computed using the task requirements. The expected total reward for each agent is computed using trust in the agent, task reward, and agent cost. The agent that maximizes the expected total reward is allocated the task. The outcome of the task is observed as a success or a failure, which is used to update the capabilities belief distribution of the agent that executed the task. The process continues for each incoming task.
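The loop in this flowchart can be sketched in code. The sketch below is a minimal illustration, not the authors' implementation: the Beta belief per capability dimension, the Monte Carlo trust estimate, the simple success/failure update, and the cost and reward values are all assumptions layered on the process the caption describes.

```python
import random

random.seed(0)

class Agent:
    """One teammate (human or robot) with a per-dimension capability belief."""

    def __init__(self, name, true_caps, cost):
        self.name = name
        self.true_caps = true_caps                     # actual capabilities (hidden)
        self.cost = cost                               # cost of tasking this agent
        self.belief = [[1.0, 1.0] for _ in true_caps]  # Beta(a, b) per dimension

    def trust(self, reqs, samples=500):
        """Estimated probability the agent meets every task requirement."""
        hits = sum(
            all(random.betavariate(a, b) >= r
                for (a, b), r in zip(self.belief, reqs))
            for _ in range(samples)
        )
        return hits / samples

    def execute(self, reqs):
        """Outcome: success iff each capability covers its requirement."""
        return all(c >= r for c, r in zip(self.true_caps, reqs))

    def update(self, reqs, success):
        """Crude belief update from the outcome (all dimensions jointly)."""
        for ab in self.belief:
            ab[0 if success else 1] += 1.0


def allocate(agents, reqs, reward):
    """Pick the agent maximizing expected total reward = trust * reward - cost."""
    return max(agents, key=lambda ag: ag.trust(reqs) * reward - ag.cost)


# Case I capabilities from the table below; costs and rewards are made up.
human = Agent("human", (0.55, 0.75), cost=0.3)
robot = Agent("robot", (0.70, 0.40), cost=0.1)

total = 0.0
for _ in range(50):                                    # stream of incoming tasks
    reqs = (random.random(), random.random())          # task requirements
    reward = 1.0
    agent = allocate([human, robot], reqs, reward)
    success = agent.execute(reqs)
    agent.update(reqs, success)                        # learn from the outcome
    total += (reward if success else 0.0) - agent.cost
```

Each iteration mirrors one pass through Figure 2: compute trust from the belief, score both agents, allocate, observe the outcome, and update the executing agent's belief.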
Human and robot capabilities for case I (converged or accurate human capabilities) and case II (unconverged or inaccurate human capabilities).
| Case | Agent | Capability dim. 1 | Capability dim. 2 |
|---|---|---|---|
| I | Human | 0.55 | 0.75 |
| | Robot | 0.7 | 0.4 |
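For intuition, once the capability belief has converged to point values like those in the table, trust toward a task reduces to a near-binary check of whether each capability meets the corresponding requirement (cf. Figure 6). A minimal sketch, where the task requirement vector is a made-up example:

```python
def binary_trust(capabilities, requirements):
    # Converged belief: trust is ~1 if every capability dimension meets the
    # task's requirement on that dimension, and ~0 otherwise.
    return 1.0 if all(c >= r for c, r in zip(capabilities, requirements)) else 0.0

human = (0.55, 0.75)   # Case I human capabilities from the table
robot = (0.70, 0.40)   # Case I robot capabilities from the table
task = (0.50, 0.60)    # hypothetical task requirements

print(binary_trust(human, task))  # 1.0: the human meets both requirements
print(binary_trust(robot, task))  # 0.0: the robot falls short on dimension 2
```

This is why heterogeneous capabilities matter for allocation: each agent dominates a different region of the task-requirement space.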
Figure 3: Allocations and outcomes for one sample set of tasks for case I (converged or accurate human capabilities). The outcome, either a success (filled circle) or a failure (unfilled circle), for each task from one sample of tasks as executed by the human (blue) or robot (red) is shown for the ATTA, random, and Tsarouchi et al.[9] methods under case I (converged or accurate human capabilities). Discarded tasks (black unfilled circle) are failures in Tsarouchi et al.[9]. The human’s actual capabilities (blue asterisk) and the robot’s capabilities (red asterisk) are also shown.
Median and average performance and team total reward for case I (converged or accurate human capabilities) and case II (unconverged or inaccurate human capabilities) (perf. = performance).
| Case | Method | | | | |
|---|---|---|---|---|---|
| I | ATTA | 80 (80, 1.7) | 96 (96, 1.3) | 74 (74, 2.4) | 47 (46, 3.1) |
| | Random | | | | |
| | Tsarouchi | 79 (79, 1.5) | | | |
| II | ATTA | 77 (77, 2.6) | 95 (94, 2.3) | 72 (72, 2.9) | 41 (42, 4.3) |
| | Tsarouchi | | | | |

Simulations: Median (Average, Standard Deviation).
Figure 4: Allocations and outcomes for one sample set of tasks for case II (unconverged or inaccurate human capabilities). The outcome, either a success (filled circle) or a failure (unfilled circle), for each task from one sample of tasks as executed by the human (blue) or robot (red) is shown for the ATTA and Tsarouchi et al.[9] methods under case II (unconverged or inaccurate human capabilities). Discarded tasks (black unfilled circle) are failures in Tsarouchi et al.[9]. Unconverged human capabilities apply to ATTA, while inaccurate human capabilities (blue cross) apply to Tsarouchi et al.[9]. The human’s actual capabilities (blue asterisk) and the robot’s capabilities (red asterisk) are also shown.
Figure 5: Progression of the human’s capabilities belief distribution. For one sample, the human’s capabilities belief distribution for each capability dimension (blue solid and dashed, green solid and dashed) converged near the human’s actual capabilities (blue and green asterisks) as task outcomes were observed.
Median and average convergence offset for each capability dimension after task execution.
| Capability dimension | | | | |
|---|---|---|---|---|
| 1 | 0.57 (0.60, 0.1) | 0.25 (0.35, 0.2) | 0.14 (0.16, 0.1) | 0.12 (0.12, 0.0) |
| 2 | 0.42 (0.41, 0.1) | 0.25 (0.26, 0.1) | 0.13 (0.13, 0.0) | 0.08 (0.08, 0.0) |
Simulations: Median (Average, Standard Deviation).
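The shrinking offsets in the table can be reproduced qualitatively with a toy belief model. The Beta prior, the random scalar requirement, and the threshold outcome model below are assumptions for illustration, not the paper's estimator; the point is only that the belief-mean offset from the true capability shrinks as more task outcomes are observed.

```python
import random

random.seed(1)

def simulate_offsets(true_cap, checkpoints):
    """Track |belief mean - true capability| after selected task counts."""
    a, b = 1.0, 1.0                      # uniform Beta(1, 1) prior over [0, 1]
    offsets = {}
    for n in range(1, max(checkpoints) + 1):
        req = random.random()            # random scalar task requirement
        success = req <= true_cap        # success iff the capability covers it
        if success:                      # Bernoulli evidence: P(success) = cap
            a += 1.0
        else:
            b += 1.0
        if n in checkpoints:
            offsets[n] = abs(a / (a + b) - true_cap)
    return offsets

offsets = simulate_offsets(true_cap=0.55, checkpoints={10, 100, 500})
```

With this outcome model the success rate equals the true capability, so the Beta mean is a consistent estimate and the offset contracts roughly as the inverse square root of the number of observed tasks.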
Figure 6: Evolution in trust toward the human. The evolution in trust toward the human across the capability hypercube for one sample, from the initial trust distribution, when no task outcomes had been observed, to the updated trust distribution after the capabilities belief converged, which approached a binary value for trust.