Mitchell Doughty, Nilesh R. Ghugre, Graham A. Wright.
Abstract
We conducted a systematic review of recent literature to understand the current challenges in the use of optical see-through head-mounted displays (OST-HMDs) for augmented reality (AR) assisted surgery. Using Google Scholar, 57 relevant articles from 1 January 2021 through 18 March 2022 were identified. Selected articles were then categorized based on a taxonomy that described the required components of an effective AR-based navigation system: data, processing, overlay, view, and validation. Our findings indicated a focus on orthopedic (n=20) and maxillofacial (n=8) surgeries. For preoperative input data, computed tomography (CT) (n=34) and surface-rendered models (n=39) were most commonly used to represent image information. Virtual content was commonly superimposed directly on the target site (n=47); this was achieved by surface tracking of fiducials (n=30), external tracking (n=16), or manual placement (n=11). Microsoft HoloLens devices (n=24 in 2021, n=7 in 2022) were the most frequently used OST-HMDs; gestures and/or voice (n=32) served as the preferred interaction paradigm. Though promising system accuracy on the order of 2-5 mm has been demonstrated in phantom models, several human factors and technical challenges (perception, ease of use, context, interaction, and occlusion) remain to be addressed prior to widespread adoption of OST-HMD-led surgical navigation.
Keywords: augmented reality; head-mounted displays; human factors; medical imaging; surgical navigation
Year: 2022 PMID: 35877647 PMCID: PMC9318659 DOI: 10.3390/jimaging8070203
Source DB: PubMed Journal: J Imaging ISSN: 2313-433X
Figure 1. The continuum proposed by Milgram and Kishino [1] describing the interactions between reality and virtuality in creating augmented reality experiences. Reproduced without modification from Wikimedia Commons (source: https://commons.wikimedia.org/wiki/File:Virtuality_continuum_2-en.svg, accessed on 23 March 2022), licensed under CC BY-SA 4.0 (https://creativecommons.org/licenses/by-sa/4.0/deed.en, accessed on 23 March 2022).
Figure 2. A simplified image representation of video see-through (a) and optical see-through (b) head-mounted displays (HMDs). Video see-through HMDs use an opaque video display to present virtual content combined with video of the real world; the real-world video is typically captured by front-facing red-green-blue (RGB) cameras on the HMD. Optical see-through HMDs make use of a transparent optical combiner to merge virtual content, projected into the field of view of the wearer, with a view of the real world.
Figure 3. Images of prominent commercially available optical see-through head-mounted displays. (a) The Google Glass 2 Enterprise Edition with on-board computing (Google, Mountain View, CA, USA). (b) The Microsoft HoloLens 2 headset with on-board computing (Microsoft, Redmond, WA, USA). (c) The Magic Leap 1 headset with a separate computing pad and controllers (Magic Leap, Plantation, FL, USA).
Summary of technical specifications for commercially available optical see-through head-mounted displays.
| Specifications | Google Glass 2 | HoloLens 1 | HoloLens 2 | Magic Leap 1 | Magic Leap 2 |
|---|---|---|---|---|---|
| Optics | Beam Splitter | Waveguide | Waveguide | Waveguide | Waveguide |
| Resolution | | | | | |
| Field of View | | | | | |
| Focal Planes | Single Fixed | Single Fixed | Single Fixed | Two Fixed | Single Fixed |
| Computing | On-board | On-board | On-board | External pad | External pad |
| SLAM | 6DoF | 6DoF | 6DoF | 6DoF | 6DoF |
| Eye Tracking | No | No | Yes | Yes | Yes |
| Weight | 46 g | 579 g | 566 g | 345 g | 260 g |
| Design | Glasses-like | Hat-like | Hat-like | Glasses-like | Glasses-like |
| Interaction | Touchpad | Head, hand, voice | Hand, eye, voice | Controller | Eye, controller |
| Release Date | 2019 | 2016 | 2019 | 2018 | 2022 |
| Price | $999 | $3000 | $3500 | $2295 | $3299 |
| Status | Available | Discontinued | Available | Available | Upcoming |
Figure 4. Depiction of the fundamentals of marker-based tracking. The intrinsic camera parameters serve to project three-dimensional (3D) content in the camera coordinate system to its two-dimensional (2D) representation on the camera image plane by perspective projection. The extrinsic parameters relate the position and orientation of the world coordinate frame to the camera coordinate frame. Combined, the intrinsic and extrinsic camera parameters allow the relation of 3D points in the world to 2D points on the camera image plane and enable marker-based tracking and precise augmentation of virtual content.
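The intrinsic/extrinsic relationship described in Figure 4 can be sketched in a few lines of code. This is an illustrative example, not the article's implementation; the camera matrix, pose, and point values below are hypothetical.

```python
import numpy as np

# Intrinsic matrix K: focal lengths (fx, fy) and principal point (cx, cy),
# all values hypothetical.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])

# Extrinsics [R | t]: map the world coordinate frame into the camera frame.
R = np.eye(3)                   # identity rotation, for simplicity
t = np.array([0.0, 0.0, 2.0])   # world origin sits 2 m in front of the camera

def project(point_world):
    """Map a 3D world point to 2D pixel coordinates by perspective projection."""
    p_cam = R @ point_world + t  # world -> camera coordinates (extrinsics)
    uvw = K @ p_cam              # camera -> homogeneous image coordinates (intrinsics)
    return uvw[:2] / uvw[2]      # perspective divide

# A marker corner at the world origin lands at the principal point.
print(project(np.array([0.0, 0.0, 0.0])))  # -> [320. 240.]
# A corner offset 0.1 m in x appears shifted right in the image.
print(project(np.array([0.1, 0.0, 0.0])))  # -> [360. 240.]
```

Marker-based tracking inverts this mapping: given detected 2D marker corners and known 3D marker geometry, the extrinsics [R | t] are estimated, which is what lets virtual content be anchored to the marker.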
Figure 5. Search strategy of our systematic review.
Figure 6. Article distribution per year of the conducted literature review from 2014 to 2022 (Q1). We have included the counts provided by Birlo et al. for the years 2014 to 2020 [6] in blue, with our contributions in green. Because the review was conducted in the first quarter of 2022, the full-year publication count for 2022 was estimated by multiplying the first-quarter count by four.
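The full-year extrapolation in Figure 6 is simple scaling; a minimal sketch, where the Q1 count used here is a made-up placeholder (the figure's actual counts are not reproduced in this text):

```python
# Hypothetical illustration of the Figure 6 extrapolation: the review
# covered only Q1 of 2022, so the 2022 count is scaled by four.
q1_count_2022 = 14  # placeholder value, not taken from the article
estimated_full_year_2022 = q1_count_2022 * 4
print(estimated_full_year_2022)  # -> 56
```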
Figure 7. A taxonomy of the required components of an effective augmented reality-based navigation solution for image-guided surgery. Preoperative (preop.) data, such as magnetic resonance imaging (MRI) and computed tomography (CT), along with intraoperative (intraop.) data, such as ultrasound (US) and fluoroscopy (FL), are indicated. The view category includes the display device used to visualize virtual content; these could be, for example, optical see-through head-mounted displays (OST-HMDs) or video see-through head-mounted displays (VST-HMDs). Our taxonomy is modified from the description provided by Kersten-Oertel et al. [44].
Figure 8. Distribution of the 57 articles included in the review based on the surgical specialty discussed.
Papers categorized by the data type they employed. Some articles included a combination of preoperative and intraoperative data.
| Data: Preoperative or Intraoperative | Number of Articles |
|---|---|
| Preoperative | |
| Computed Tomography (CT) | 34 |
| Magnetic Resonance Imaging (MRI) | 7 |
| CT and/or MRI | 6 |
| Prerecorded Videos | 1 |
| Intraoperative | |
| Fluoroscopy | 4 |
| Ultrasound | 3 |
| Telestrations/Virtual Arrows and Annotations | 3 |
| Cone Beam CT | 3 |
| Endoscope Video | 2 |
| Patient Sensors/Monitoring Equipment | 1 |
| Simulated Intraoperative Data | 1 |
Papers categorized by the data processing type used. Some articles included a combination of processing types, such as volume and surface rendered models.
| Processing | Number of Articles |
|---|---|
| Preoperative | |
| Surface Models | 39 |
| Planning Information | 8 |
| Raw Data | 5 |
| Volume Models | 4 |
| Printed Models | 3 |
| Intraoperative | |
| Telestrations | 4 |
| Raw Data | 3 |
Papers categorized by the overlay type used. External trackers (surgical navigation suites) were used in conjunction with an optical see-through head-mounted display to co-locate the headset with relevant tracked surgical tools in frame. We indicate the frequency of commercial tracking system usage and the type of tracking marker used.
| Overlay | Count | Tracking Marker | Count |
|---|---|---|---|
| External Tracking | | | |
| Northern Digital Inc. Polaris | 7 | Retroreflective Spheres | 11 |
| Northern Digital Inc. EM/Aurora | 3 | Electromagnetic | 3 |
| ClaroNav MicronTracker | 2 | Visible | 2 |
| Optitrack | 1 | | |
| Medtronic StealthStation | 1 | | |
| Surface Tracking | | | |
| HoloLens 1 | 19 | Vuforia | 10 |
| HoloLens 2 | 10 | ArUco | 9 |
| Custom | 3 | Custom | 4 |
| Magic Leap 1 | 1 | Retroreflective Spheres | 2 |
| | | QR-Code | 2 |
| SPAAM/similar | 2 | Marker-Less | 2 |
| | | AprilTag | 1 |
| Manual Placement | | | |
| Surgeon | 8 | | |
| Other | 3 | | |
Papers categorized by the view type used. Interaction types included voice (VO), gesture (GE), gaze (GA), keyboard (KB), head pose (HP), pointer (PO), and controller (CNT). Display devices included the HoloLens 1 (HL1), HoloLens 2 (HL2), and Magic Leap One (ML1). Perception location included direct overlay (DO) or adjacent overlay (AO). Not applicable (N/A) methods are indicated.
| View | Interaction | Display Device | Perception Location |
|---|---|---|---|
| Ackermann et al., 2021 [ | N/A | HL1 | DO |
| Cattari et al., 2021 [ | N/A | Custom | DO |
| Condino et al., 2021 [ | N/A | Custom | DO |
| Condino et al., 2021 [ | VO, GE | HL1 | DO |
| Dennler et al., 2021 [ | VO, GE | HL1 | AO |
| Dennler et al., 2021 [ | N/A | HL1 | DO |
| Farshad et al., 2021 [ | VO, GE | HL2 | DO |
| Fick et al., 2021 [ | VO, GE | HL1 | DO |
| Gao et al., 2021 [ | VO | HL1 | DO |
| Gasques et al., 2021 [ | VO, GE, PO | HL1 | DO |
| Gsaxner et al., 2021 [ | N/A | HL2 | DO |
| Gsaxner et al., 2021 [ | GA, GE | HL2 | DO |
| Gu et al., 2021 [ | GA, GE | HL2 | DO |
| Gu et al., 2021 [ | GE | HL1 | DO |
| Heinrich et al., 2021 [ | VO, GE | HL1 | DO |
| Iqbal et al., 2021 [ | N/A | HL1 | AO |
| Ivan et al., 2021 [ | GE | HL1 | DO |
| Ivanov et al., 2021 [ | GE | HL2 | DO |
| Johnson et al., 2021 [ | N/A | ODG R-6 | AO |
| Kimmel et al., 2021 [ | VO, GE | HL1 | AO |
| Kitagawa et al., 2021 [ | N/A | HL2 | AO |
| Kriechling et al., 2021 [ | VO, GE | HL1 | DO |
| Kriechling et al., 2021 [ | VO, GE | HL1 | DO |
| Kunz et al., 2021 [ | GE | HL1 | DO |
| Lee et al., 2021 [ | N/A | HL2 | DO |
| Li et al., 2021 [ | N/A | HL1 | DO |
| Lim et al., 2021 [ | GE | HL2 | DO |
| Lin et al., 2021 [ | GE | ML1 | DO |
| Liu et al., 2021 [ | VO | HL1 | AO |
| Liu et al., 2021 [ | N/A | HL2 | DO |
| Liu et al., 2021 [ | N/A | HL1 | DO |
| Majak et al., 2021 [ | N/A | Moverio BT-200 | DO |
| Qi et al., 2021 [ | GE | HL2 | DO |
| Rai et al., 2021 [ | CNT | ML1 | DO |
| Schlueter-Brust et al., 2021 [ | GE | HL2 | DO |
| Spirig et al., 2021 [ | VO, GE | HL1 | DO |
| Stewart et al., 2021 [ | VO, GE | HL1 | AO |
| Tang et al., 2021 [ | GE | HL1 | AO |
| Tarutani et al., 2021 [ | GE | HL2 | AO |
| Teatini et al., 2021 [ | GE | HL1 | DO |
| Tu et al., 2021 [ | VO, GE | HL2 | DO |
| Velazco-Garcia et al., 2021 [ | VO, GE | HL1 | AO |
| Yanni et al., 2021 [ | CNT | ML1 | DO |
| Zhou et al., 2021 [ | VO, GE | HL1 | DO |
| Carbone et al., 2022 [ | N/A | Custom | DO |
| Doughty et al., 2022 [ | GE | HL2 | DO |
| Frisk et al., 2022 [ | CNT | ML1 | DO |
| Hu et al., 2022 [ | KB | HL1 | DO |
| Johnson et al., 2022 [ | VO | HL2 | DO |
| Ma et al., 2022 [ | HP | HL1 | DO |
| Nguyen et al., 2022 [ | VO | HL1 | DO |
| Puladi et al., 2022 [ | GE | HL1 | DO |
| Tu et al., 2022 [ | GE | HL2 | DO |
| Uhl et al., 2022 [ | CNT | ML1 | DO |
| Von Atzigen et al., 2022 [ | VO | HL1 | DO |
| Yang et al., 2022 [ | VO, GE | HL1 | DO |
| Zhang et al., 2022 [ | N/A | HL1 | DO |
Figure 9. Distribution of the 57 articles included in the review based on the display device used.
Figure 10. Demonstration of the difference between direct and adjacent overlay of virtual content. In direct overlay, virtual content is superimposed directly on the patient anatomy (a). In adjacent overlay, virtual content is placed next to the patient to improve data accessibility (b).
Papers categorized by the validation type employed. Models for evaluation included phantom models (PHA), cadaver models (CAD), animals (ANI), and patients (PAT). Human factors considerations included risk of error (ROE), spatial awareness (SPWR), ease of use (EOU), perception (PER), ergonomics (ERGO), attention shift (ATS), interaction challenges (INT), mental mapping (MM), context (CTXT), hand-eye coordination (HE), and occlusion (OCCL). Not applicable (N/A) methods are indicated.
| Validation | Evaluation | Accuracy | Human Factors |
|---|---|---|---|
| Ackermann et al., 2021 [ | CAD | | ROE, SPWR, EOU |
| Cattari et al., 2021 [ | PHA | | PER, ERGO, ATS, MM, CTXT |
| Condino et al., 2021 [ | PHA | | ATS, PER, SPWR, CTXT, INT |
| Condino et al., 2021 [ | PHA, PAT | N/A | SPWR, MM, PER |
| Dennler et al., 2021 [ | PAT | N/A | ERGO, SPWR, ATS, PER |
| Dennler et al., 2021 [ | PHA | | ATS, ROE |
| Farshad et al., 2021 [ | PAT | | ATS, ROE, ERGO, PER |
| Fick et al., 2021 [ | PAT | | MM, ATS, SPWR |
| Gao et al., 2021 [ | PHA | | SPWR, MM, PER |
| Gasques et al., 2021 [ | PHA, CAD | N/A | ROE, PER, CTXT |
| Gsaxner et al., 2021 [ | PHA | | PER, INT, CTXT, SPWR |
| Gsaxner et al., 2021 [ | PHA | N/A | EOU, PER, MM |
| Gu et al., 2021 [ | PHA | | PER, MM, SPWR, OCCL, CTXT |
| Gu et al., 2021 [ | PHA | | OCCL, PER |
| Heinrich et al., 2021 [ | PHA | N/A | PER, HE |
| Iqbal et al., 2021 [ | PAT | Surface Roughness | EOU, ERGO, PER, MM, SPWR |
| Ivan et al., 2021 [ | PAT | Trace Overlap | ERGO, SPWR |
| Ivanov et al., 2021 [ | PAT | | MM, PER |
| Johnson et al., 2021 [ | PHA | N/A | ERGO, EOU |
| Kimmel et al., 2021 [ | PAT | N/A | CTXT |
| Kitagawa et al., 2021 [ | PAT | N/A | SPWR, EOU |
| Kriechling et al., 2021 [ | CAD | | N/A |
| Kriechling et al., 2021 [ | CAD | | N/A |
| Kunz et al., 2021 [ | PHA | | CTXT, PER, ERGO |
| Lee et al., 2021 [ | PHA | N/A | PER, INT, SPWR |
| Li et al., 2021 [ | PHA, ANI | | PER, INT, SPWR |
| Lim et al., 2021 [ | PHA | N/A | N/A |
| Lin et al., 2021 [ | PHA | | CTXT |
| Liu et al., 2021 [ | PAT | | SPWR, MM |
| Liu et al., 2021 [ | PAT | Radiation Exposure | ERGO, EOU |
| Liu et al., 2021 [ | PHA | | CTXT, PER |
| Majak et al., 2021 [ | PHA | | MM, ATS, SPWR |
| Qi et al., 2021 [ | PAT | | MM, SPWR, ROE |
| Rai et al., 2021 [ | PAT | N/A | SPWR, EOU |
| Schlueter-Brust et al., 2021 [ | PHA | 3 mm | OCCL, PER |
| Spirig et al., 2021 [ | CAD | | MM, ATS, SPWR |
| Stewart et al., 2021 [ | PHA | N/A | ATS, ERGO |
| Tang et al., 2021 [ | PAT | N/A | HE, SPWR, MM, PER |
| Tarutani et al., 2021 [ | PHA | | ROE, SPWR |
| Teatini et al., 2021 [ | PHA | | SPWR, MM, PER, HE, ROE |
| Tu et al., 2021 [ | PHA, CAD | | MM, OCCL, HE, PER, ERGO |
| Velazco-Garcia et al., 2021 [ | PHA | N/A | MM, CTXT, SPWR |
| Yanni et al., 2021 [ | PHA | N/A | ERGO, MM, PER |
| Zhou et al., 2021 [ | PHA, ANI | | OCCL, INT, ROE |
| Carbone et al., 2022 [ | PHA, PAT | | ROE, OCCL, PER, ERGO |
| Doughty et al., 2022 [ | PHA, ANI | | PER, MM, OCCL, CTXT, ATS |
| Frisk et al., 2022 [ | PHA | | MM, ATS |
| Hu et al., 2022 [ | PHA | | OCCL, PER |
| Johnson et al., 2022 [ | PHA | | PER, MM, ERGO |
| Ma et al., 2022 [ | PHA | N/A | OCCL, ERGO, EOU |
| Nguyen et al., 2022 [ | PHA | N/A | HE, MM |
| Puladi et al., 2022 [ | CAD | | MM, SPWR, PER, OCCL |
| Tu et al., 2022 [ | PHA | | MM, HE, PER, ROE |
| Uhl et al., 2022 [ | PHA | | MM, ATS |
| Von Atzigen et al., 2022 [ | PHA | | ATS |
| Yang et al., 2022 [ | PAT | | SPWR, MM |
| Zhang et al., 2022 [ | CAD | | MM, ROE |
Figure 11. Distribution of the 57 articles included in the review based on the reported human factors related considerations. Human factors considerations included risk of error (ROE), spatial awareness (SPWR), ease of use (EOU), perception (PER), ergonomics (ERGO), attention shift (ATS), interaction challenges (INT), mental mapping (MM), context (CTXT), hand-eye coordination (HE), and occlusion (OCCL).