Max Pascher, Kirill Kronhardt, Til Franzen, Uwe Gruenefeld, Stefan Schneegass, Jens Gerken.
Abstract
Nowadays, robots are found in a growing number of areas where they collaborate closely with humans. Enabled by lightweight materials and safety sensors, these cobots are gaining increasing popularity in domestic care, where they support people with physical impairments in their everyday lives. However, when cobots perform actions autonomously, it remains challenging for human collaborators to understand and predict their behavior, which is crucial for achieving trust and user acceptance. One significant aspect of predicting cobot behavior is understanding their perception and comprehending how they "see" the world. To tackle this challenge, we compared three different visualization techniques for Spatial Augmented Reality. All of these communicate cobot perception by visually indicating which objects in the cobot's surroundings have been identified by its sensors. We compared the well-established visualizations Wedge and Halo against our proposed visualization Line in a remote user experiment with participants with physical impairments. In a second remote experiment, we validated these findings with a broader, non-specific user base. Our findings show that Line, a lower-complexity visualization, results in significantly faster reaction times compared to Halo, and lower task load compared to both Wedge and Halo. Overall, users prefer Line as the more straightforward visualization. In Spatial Augmented Reality, with its known disadvantage of limited projection area size, established off-screen visualizations are not effective in communicating cobot perception, and Line presents an easy-to-understand alternative.
Keywords: cobot; human–robot collaboration; projection; virtual reality; visualization techniques
Year: 2022 PMID: 35161503 PMCID: PMC8838221 DOI: 10.3390/s22030755
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1The compared visualization techniques to communicate the cobot’s perception are (a) Halo; (b) Wedge; and (c) Line.
Figure 2 Screenshots of the 3D testbed environment. (a) The complete setup with the five items placed on the table; (b) the Line visualization with one object on the table not perceived by the cobot.
Figure 3 For our work, we compared three different mounting settings of a projector in the cobot's workspace to communicate cobot perception. The compared settings are (a) top-mounted; (b) side-mounted; and (c) cobot flange-mounted projection. The direction of the projection is indicated by an arrow.
Figure 4A detailed overview of (a) the setup highlights the different parts as (1) on-screen object; (2) off-screen object; and (3) the projection area. The (4) main features are highlighted for the selected visualization techniques (b) Halo, (c) Wedge, and (d) Line.
Pairwise comparisons of accuracy for the visualization techniques: Wedge, Halo, and Line.
| Comparison | W | Z | p | r |
|---|---|---|---|---|
| Wedge vs. Halo | 33 | 1.39 | 0.563 | 0.30 |
| Wedge vs. Line | 2 | −2.25 | 0.070 | 0.48 |
| Halo vs. Line | 1 | −2.55 | 0.023 * | 0.54 |
* p ≤ 0.05.
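The W, Z, and r columns above are characteristic of paired Wilcoxon signed-rank tests with the effect size computed as r = |Z|/√n. As an illustration of how such values arise, here is a minimal sketch of the test under its normal approximation; the paired reaction-time data are hypothetical, not the study's measurements:

```python
import math

def wilcoxon_signed_rank(x, y):
    """Paired Wilcoxon signed-rank test (normal approximation).

    Returns (W, Z, p, r), analogous to the columns reported in the
    pairwise-comparison tables.
    """
    # Round differences to avoid floating-point artifacts in tie handling.
    diffs = [round(a - b, 10) for a, b in zip(x, y)]
    diffs = [d for d in diffs if d != 0]          # drop zero differences
    n = len(diffs)

    # Rank the absolute differences, averaging ranks for ties.
    abs_sorted = sorted(abs(d) for d in diffs)
    def rank(v):
        lo = abs_sorted.index(v) + 1
        hi = lo + abs_sorted.count(v) - 1
        return (lo + hi) / 2

    w_plus = sum(rank(abs(d)) for d in diffs if d > 0)
    w_minus = sum(rank(abs(d)) for d in diffs if d < 0)
    W = min(w_plus, w_minus)

    # Normal approximation of the null distribution of W.
    mu = n * (n + 1) / 4
    sigma = math.sqrt(n * (n + 1) * (2 * n + 1) / 24)
    Z = (W - mu) / sigma

    # Two-sided p-value from the standard normal CDF.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(Z) / math.sqrt(2))))
    r = abs(Z) / math.sqrt(n)                     # effect size
    return W, Z, p, r

# Hypothetical paired reaction times (seconds) for two visualizations.
halo = [2.1, 2.4, 2.0, 2.6, 2.3, 2.5, 2.2, 2.7]
line = [1.8, 1.9, 2.1, 2.0, 1.7, 2.2, 1.9, 2.1]
W, Z, p, r = wilcoxon_signed_rank(halo, line)
print(f"W={W:.0f}  Z={Z:.2f}  p={p:.3f}  r={r:.2f}")
```

Note that published results typically use exact or tie-corrected p-values, so a full analysis would rely on a statistics package rather than this plain approximation.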
Figure 5Comparison of the reaction times for the three different visualization techniques: Wedge; Halo; and Line.
Pairwise comparisons of reaction times for the visualization techniques: Wedge, Halo, and Line.
| Comparison | W | Z | p | r |
|---|---|---|---|---|
| Wedge vs. Halo | 8 | −2.22 | 0.073 | 0.47 |
| Wedge vs. Line | 59 | 2.31 | 0.056 | 0.49 |
| Halo vs. Line | 64 | 2.76 | 0.009 ** | 0.59 |
** p ≤ 0.01.
Figure 6Comparison of the task load dimensions for the three different visualization techniques: Wedge; Halo; and Line.
Figure 7Comparison of reaction times for the two different visualization techniques: Wedge and Line.
Figure 8Comparison of task load dimensions for the two different visualization techniques: Wedge and Line.