Georges Hattab1, Adamantini Hatzipanayioti2,3, Anna Klimova4,5, Micha Pfeiffer1, Peter Klausing1, Michael Breucha1,6, Felix von Bechtolsheim2,6, Jens R Helmert2,7, Jürgen Weitz2,6, Sebastian Pannasch2,7, Stefanie Speidel8,9.
Abstract
Recent technological advances have made Virtual Reality (VR) attractive in both research and real world applications such as training, rehabilitation, and gaming. Although these other fields benefited from VR technology, it remains unclear whether VR contributes to better spatial understanding and training in the context of surgical planning. In this study, we evaluated the use of VR by comparing the recall of spatial information in two learning conditions: a head-mounted display (HMD) and a desktop screen (DT). Specifically, we explored (a) a scene understanding and then (b) a direction estimation task using two 3D models (i.e., a liver and a pyramid). In the scene understanding task, participants had to navigate the rendered the 3D models by means of rotation, zoom and transparency in order to substantially identify the spatial relationships among its internal objects. In the subsequent direction estimation task, participants had to point at a previously identified target object, i.e., internal sphere, on a materialized 3D-printed version of the model using a tracked pointing tool. Results showed that the learning condition (HMD or DT) did not influence participants' memory and confidence ratings of the models. In contrast, the model type, that is, whether the model to be recalled was a liver or a pyramid significantly affected participants' memory about the internal structure of the model. Furthermore, localizing the internal position of the target sphere was also unaffected by participants' previous experience of the model via HMD or DT. Overall, results provide novel insights on the use of VR in a surgical planning scenario and have paramount implications in medical learning by shedding light on the mental model we make to recall spatial structures.Entities:
Year: 2021 PMID: 34188080 PMCID: PMC8241863 DOI: 10.1038/s41598-021-92536-x
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. 3D visualizations. Each row represents a model and each column its visualization along an axis, from left to right: +x, +y, −x, −z. (Top) Liver: portal vein in blue (#4e79a7), hepatic artery in red (#e15759), hepatic vein or venous system in teal (#76b7b2). (Bottom) Pyramid: pyramid tip or pyramidion in teal, internal chambers in red, surface features (i.e., two cracks and the entrance) in blue. Common encodings: surface in light gray (#bab0ac), sphere in yellow (#edc948). To better convey internal structures, surface opacity is set to 0.5 and the sphere is highlighted by a white circle.
Figure 2. Example procedure for the HMD-Liv group. A participant P is assigned to view the Liver model in the HMD condition. (a) P learns how to interact with a demo model. (b) When viewing the assigned model, P is told that the sphere is the target. During Task 1 (Scene Understanding Task), (c) a Multiple Choice Test (MCT) is used for confidence training: correct answers to training questions are given, and P is asked to reconsider the strategy for the Confidence Rating (CR). (d) Testing then follows without feedback. During Task 2 (Direction Estimation Task), a Polaris tracking system is used to track P and the model of interest. (e) A tracked pointing tool is used to point at a point of entry (PoE) on the printed Liver model. P approaches the printed model as it appears in the initial view of the model visualization in (b). Upon occlusion, P is asked to review the strategy (training only). (f) In testing, P restarts from the initial view to record all PoEs with occlusion feedback only. (g) A demographics questionnaire follows (gender, age, etc.). (h) Two System Usability Scale (SUS) questionnaires are administered: one for the interaction in the HMD condition, in which P interacted with the Liver model visualization, and another for the pointing/tracking system, in which P interacted with the printed Liver model. (i) The short Big Five Inventory (BFI-K) concludes the study. To standardize the procedure for each P, an internal checklist was used; it can be found at https://github.com/ghattab/user-study/blob/master/materials/study-related/study_checklist_en.pdf.
Linear mixed-effects models for Accuracy and Confidence in Recall of the Scene Understanding Task.
| Predictor | Accuracy Estimate | SE | p | Confidence Estimate | SE | p |
|---|---|---|---|---|---|---|
| Intercept | 0.02 | 0.24 | 0.91 | **7.88** | 0.44 | |
| HMD | 0.01 | 0.21 | 0.93 | 0.17 | 0.48 | 0.72 |
| Pyramid | **1.22** | 0.32 | | −0.55 | 0.56 | 0.32 |
| Question 2 | **0.58** | 0.27 | | **−1.68** | 0.34 | |
| Question 3 | **0.86** | 0.27 | | **−3.06** | 0.34 | |
| Question 4 | **−0.79** | 0.27 | | **−2.72** | 0.34 | |
| Pyramid : Question 2 | **−1.40** | 0.40 | | **1.72** | 0.48 | |
| Pyramid : Question 3 | −0.60 | 0.40 | 0.13 | **2.88** | 0.48 | |
| Pyramid : Question 4 | 0.45 | 0.40 | 0.25 | **1.87** | 0.48 | |
Each dependent measure is modelled as a function of the predictors: Learning Condition (with DT as the reference category), Model Type (with Liver as the reference category), Question (with Question 1 as the reference category), and the Model Type × Question interaction. For each fixed effect and interaction, we report the coefficient and its standard error, along with the associated p-value. Statistically significant predictors are in bold.
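This model specification can be sketched with `statsmodels`. This is not the authors' code or data: the data frame below is entirely synthetic and its column names (`condition`, `model`, `question`, `confidence`, `participant`) are illustrative stand-ins for the predictors named above, with the reference categories (DT, Liver, Question 1) set by alphabetical ordering. A linear mixed-effects model with a per-participant random intercept then yields the same fixed-effects structure as the table:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

# Hypothetical between-subjects design: each participant sees one model
# in one learning condition and answers four questions.
rows = []
for p in range(16):
    condition = "HMD" if p % 2 else "DT"        # DT is the reference category
    model = "Pyramid" if p < 8 else "Liver"     # Liver is the reference category
    for q in range(1, 5):                       # Q1 is the reference category
        rows.append({"participant": p,
                     "condition": condition,
                     "model": model,
                     "question": f"Q{q}",
                     "confidence": rng.normal(7.0, 1.0)})
df = pd.DataFrame(rows)

# Confidence ~ Learning Condition + Model Type * Question,
# with a random intercept per participant.
fit = smf.mixedlm("confidence ~ condition + model * question",
                  df, groups=df["participant"]).fit()
print(fit.fe_params.round(2))
```

The same formula with the accuracy column as the dependent measure would reproduce the left half of the table; with synthetic noise data the coefficients themselves are of course meaningless.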
Multiple linear regression model for Angular Accuracy of the Direction Estimation Task.
| Predictor | Estimate | SE | p |
|---|---|---|---|
| Intercept | **38.63** | 2.16 | |
| Target distance | **0.01** | 0.00 | |
| HMD | 2.36 | 2.22 | 0.29 |
| Pyramid | **14.78** | 2.25 | |
| HMD × Pyramid | −0.92 | 3.20 | 0.77 |
The dependent measure is modelled as a function of the predictors: Target Distance, Learning Condition (with DT as the reference category), Model Type (with Liver as the reference category), and the Learning Condition × Model Type interaction. For each fixed effect and interaction, we report the coefficient and its standard error, along with the associated p-value. Statistically significant predictors are in bold.
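As with the mixed model above, this ordinary least-squares specification can be sketched with `statsmodels`. The data are fully synthetic and the column names are hypothetical stand-ins for the predictors in the table; the design matrix (intercept, target distance, two dummy-coded factors, and their interaction) matches the five rows reported:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 64

# Synthetic stand-in data; reference categories DT and Liver arise
# from alphabetical ordering of the string levels.
df = pd.DataFrame({
    "target_distance": rng.uniform(50.0, 200.0, n),
    "condition": rng.choice(["DT", "HMD"], n),
    "model": rng.choice(["Liver", "Pyramid"], n),
})
df["angular_error"] = (38.0 + 0.01 * df["target_distance"]
                       + 14.8 * (df["model"] == "Pyramid")
                       + rng.normal(0.0, 5.0, n))

# Angular accuracy ~ Target Distance + Learning Condition * Model Type
fit = smf.ols("angular_error ~ target_distance + condition * model", df).fit()
print(fit.params.round(2))
```

The `condition * model` term expands to the two main effects plus their interaction, so `fit.params` contains exactly the five coefficients listed in the table.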