
Visualization, navigation, augmentation. The ever-changing perspective of the neurosurgeon.

A Boaro1, F Moscolo1, A Feletti1, G M V Polizzi1, S Nunes1, F Siddi2, M L D Broekman2,3, F Sala1.   

Abstract

Introduction: The evolution of neurosurgery coincides with the evolution of visualization and navigation. Augmented reality technologies, with their ability to bring digital information into the real environment, have the potential to provide a new, revolutionary perspective to the neurosurgeon. Research question: To provide an overview of the historical and technical aspects of visualization and navigation in neurosurgery, and to provide a systematic review of augmented reality (AR) applications in neurosurgery. Material and methods: We provided an overview of the main historical milestones and technical features of visualization and navigation tools in neurosurgery. We systematically searched the PubMed and Scopus databases for AR applications in neurosurgery and specifically discussed their relationship with current visualization and navigation systems, as well as their main limitations.
Results: The evolution of visualization in neurosurgery is embodied by four magnification systems: surgical loupes, the endoscope, the surgical microscope and, more recently, the exoscope, each presenting independent features in terms of magnification capabilities, eye-hand coordination and the possibility to implement additional functions. With regard to navigation, two independent systems have been developed: frame-based and frameless systems. The most frequent application setting for AR is brain surgery (71.6%), specifically neuro-oncology (36.2%), and the most frequent integration is microscope-based (29.2%), even though in the majority of cases AR applications presented their own visualization supports (66%). Discussion and conclusions: The evolution of visualization and navigation in neurosurgery has allowed the development of more precise instruments; the development and clinical validation of AR applications have the potential to be the next breakthrough, making surgeries safer, as well as improving the surgical experience and reducing costs.
© 2022 The Authors.


Keywords:  Augmented neurosurgery; Augmented reality; Augmented surgery; History of neurosurgery; Innovation in neurosurgery

Year:  2022        PMID: 36248169      PMCID: PMC9560703          DOI: 10.1016/j.bas.2022.100926

Source DB:  PubMed          Journal:  Brain Spine        ISSN: 2772-5294


Introduction

Surgical visualization and navigation are essential to neurosurgery, given the need to safely access small, deep surgical fields and to manipulate delicate neurovascular structures. Over the course of continuous technological progress, two types of tools have been developed: some to improve visualization, such as the microscope, endoscope, surgical loupes and, most recently, the exoscope (Di Ieva et al., 2014; Ricciardi et al., 2019; Uluç et al., 2009); and others to allow better orientation, such as imaging techniques, navigation and monitoring systems (Enchev, 2009; Sala, 2010). Augmented Reality (AR) – the technology that enables the projection of digital content into the physical environment – can be seen as a combination of both types of tools. As such, it has the potential to provide an ideal surgical environment centered on the surgeon and the patient, improving safety and efficacy, empowering training and reducing costs (Cho et al., 2020; Meola et al., 2017). To better understand how AR may further advance current neurosurgical practice, it is useful to map the evolution of visualization and navigation in neurosurgery and to provide a systematic overview of current AR applications in neurosurgery.

The evolution of surgical visualization

The study of light and optics began centuries ago, involving various cultures across many geographical areas: from the ancient Egyptians and Hebrews, to the Arab scientists who completed the first rational studies of vision and light, to the development of the first lenses in Europe between the 13th and 16th centuries (Kalderon, 1983). The application of this knowledge to visualization tools for surgery is not much older than modern neurosurgery itself, with the first prototypes of surgical microscopes, loupes and endoscopes only materializing during the second half of the 19th century (Di Ieva et al., 2014; Uluç et al., 2009).

Operative microscope

The first use of the operative microscope by a neurosurgeon was reported in 1957 at the University of Southern California, when Theodore Kurze used it to remove a neurilemmoma of the VII cranial nerve from a 5-year-old child (Uluç et al., 2009). At that time, the microscope was not conceptually different from today's, with two oculars, a slit lamp and a multi-axial system for movement (Uluç et al., 2009). The main limits of those initial systems were that only one surgeon could view the surgical field, that all movements were manually operated, and that sterilization was difficult (Uluç et al., 2009). In the following 20 years these technical challenges were overcome, significantly improving visualization and advancing the neurosurgical field (Fig. 1). More recently, sophisticated imaging capabilities have been developed: light filters applied to the microscope lenses allow the intraoperative visualization of blood flow in the vessels (indocyanine green) or the presence of tumor tissue (sodium fluorescein or 5-aminolevulinic acid for malignant glioma tissue). The use of the microscope requires a certain level of eye-hand coordination, because while the surgeon looks into the oculars, his or her hands work in a different direction (Fig. 2A) (Table 1). The most recent technological advancement in terms of maneuverability is the robotic operative microscope, in which a robotized navigation system assists the surgeon by memorizing specific targets in space to which he or she may want to return during the operation, or by locking the focus on a target while still allowing surgeon-directed movement of the microscope around it (Belykh et al., 2018). Microscopic vision has also been successfully combined with the endoscopic view in the setting of endoscope-assisted microneurosurgery (EAM), which has found interesting applications in vascular and oncological surgery.
Specifically, the 0° endoscope proved useful in better visualizing the regional anatomy thanks to its superior magnification and illumination, whereas the 30° endoscope allowed better exploration of the surrounding vascular anatomy (López et al., 2021; Shao et al., 2022).
Fig. 1

Visualization and navigation in neurosurgery timeline. Visualization. The blue boxes represent the timepoints of the first development of each visualization tool or underlying technology. In gray are represented early and current examples of each visualization modality. Specifically, for surgical loupes, an example of loupes with 2.5x magnifying lenses; for the endoscope, on the left Nitze's operative endoscope (1891) (Boyaci et al., 2020), at the center an example of a modern rigid endoscope, on the right an example of a modern flexible endoscope; for the microscope, on the left an example of the Zeiss diploscope (1964) (Edström et al., 2020c), on the right an example of a modern robotic microscope; for the exoscope, a picture of the exoscopic camera and its base. Navigation. The two main, currently used, navigation systems (frame-based and frameless) with their first development timepoints (yellow boxes) and visual examples are represented. The development of Augmented Reality (AR) applications has the potential to bring together the strengths of both visualization and navigation technologies. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

Fig. 2

Currently available intraoperative visualization settings. In each image the red dashed line represents the surgeon's line of sight, the blue lines represent the working directions, the crossed square represents the surgical field, and the green diamond represents the location for potential visualization of augmented reality information. A. Microscope. The surgeon looks into the oculars and obtains a stereoptic view of the surgical field. Line of sight and working directions do not directly converge on the field, requiring eye-hand coordination adjustments. B. Endoscope. The surgeon looks at a two-dimensional screen to perform the surgery. Line of sight and working directions do not directly converge on the field, requiring eye-hand coordination adjustments. In this case one hand (short blue arrow) might be needed to hold the optic. Stereoptic view is not implemented by default. Shifting between endoscopic and macroscopic view is impossible. C. Surgical loupes. The surgeon looks directly at the surgical field through the lenses of the loupes. Line of sight and working directions directly converge on the field. Stereoptic view is implemented by default. Shifting between magnified view and macroscopic view is easy. No default connection to a screen. D. Exoscope. The surgeon looks at a two-dimensional screen to perform the surgery. Line of sight and working directions do not directly converge on the field, requiring eye-hand coordination adjustments. Shifting between exoscopic and macroscopic view is easy. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

Table 1

Visualization tools in neurosurgery and key features.

Tool | Stereopsis | Magnification | Need to move focus from surgical field for navigation purposes | Possibility to provide view for multiple surgeons | Difficulty of shifting between magnified and normal view | Possibility to cause direct damage
Microscope | By default | Up to ∼50x | Yes | Yes | Difficult | Mild, heat-related
Endoscope | Possible | Depth of focus 2–50 mm | Yes | Yes | Impossible | Yes
Surgical loupes | By default | 2.5–6x | Yes | No | Easy | No
Exoscope | Possible | Up to ∼50x | No | Yes | Easy | No

Endoscope

The first endoscopes used light sources such as candles or kerosene lamps. These increased the risk of burns and produced smoke, even though some rudimentary cooling systems were put in place (Editorial. Urology, 1998; Zada et al., 2013). The situation changed with the introduction of Thomas Edison's carbon filament (1879), which provided better illumination with no need for a cooling system (Di Ieva et al., 2014; Zada et al., 2013). Initially, the visualization system was based on direct visualization of structures, but the introduction of multiple-lens systems brought a significant improvement in image quality (Hounsfield, 1995; Zada et al., 2013). The beginning of the 20th century saw the birth of other ingenious innovations, from electrocautery and coagulation to irrigation systems that could be applied to the narrow working channel (Hounsfield, 1995; Zada et al., 2013). Nonetheless, even with milestones such as Dandy's successful choroid plexus resection, the endoscope did not reach widespread adoption in the neurosurgical community, due to its heavy weight, still-suboptimal lighting and optics, and the concomitant introduction of alternative surgical techniques (Di Ieva et al., 2014; Hsu et al., 2009). Interest was rekindled in the mid-20th century by Harold Hopkins, who introduced two outstanding developments that would change neuroendoscopy forever (Cockett and Cockett, 1998; Hsu et al., 2009). He introduced the rod-lens system, allowing for a higher refractive index, and developed the flexible fiberscope, with an image-transmission system based on glass fibers and an external light source (Fig. 1) (Cockett and Cockett, 1998; Kapany, 2006). An important advantage of endoscopes over microscopes in terms of visualization capabilities lies in the ability to 'look around corners', thanks to the possibility to advance and retract the tube and to use angled or flexible optics.
On the other hand, the endoscope can be blinded by blood, and it is the only visualization system that can directly injure the surrounding structures. Technological advances brought better light sources, dedicated light filters and dyes, video cameras on the tip of the scope (chip-on-tip), as well as connections to screens and archiving options (Di Ieva et al., 2014; Fiorindi et al., 2017; Longatti et al., 2020; Zada et al., 2013). It is interesting to consider that maximizing the surgical experience requires the endoscopist's focus to move away from the surgical field towards a screen, at the cost of an additional adjustment of eye-hand coordination (Fig. 2B) (Table 1). The most significant drawback compared to the microscope and loupes resides in the two-dimensional image provided, an issue recently addressed through the development of three-dimensional endoscopic systems (Barkhoudarian et al., 2013).

Surgical loupes

Loupes are the surgical evolution of the original idea of spectacles, which were initially developed in Italy between the 13th and 14th centuries and later evolved into corrective eyeglasses as well as magnifying lenses (Uluç et al., 2009). It was only in the second half of the 19th century that the combination of these two elements gave birth to the first prototype of surgical loupes (Uluç et al., 2009). That first prototype was fixed to a headband, but it was quite heavy and did not achieve wide success. Later, at the beginning of the 20th century, lighter versions equipped with a headlight were developed and proved very popular (Uluç et al., 2009). Surgical loupes have continued to evolve, becoming lighter, improving image quality, and gaining attachable cameras for recording (Fig. 1). Surgical loupes present some clear advantages: they are light, cheap and easily transported, and each surgeon may customize his or her own pair; on the other hand, illumination and magnification options are significantly limited compared to the microscope. What is peculiar and noteworthy about surgical loupes is that they are the only surgical visualization tool that keeps the surgeon's eyes on the actual surgical field, without the need for eye-hand coordination adjustment (Fig. 2C) (Table 1).

Exoscope

The most recent evolution in visualization tools is the exoscope (Ricciardi et al., 2019). It was designed and developed as a hybrid visualization solution between the endoscope and the microscope, in an effort to overcome the shortcomings of both (Abramovic et al., 2022; Calloni et al., 2022; Piquer et al., 2014; Ricciardi et al., 2019; Rösler et al., 2022). While the first prototypes and clinical applications date back more than a decade, the first spark of interest was not followed by widespread adoption (Fig. 3B), mainly due to the need for frequent repositioning and the lack of stereopsis, as discussed below (Mamelak et al., 2008, 2010). The exoscope consists of a rigid lens telescope equipped with a camera for image acquisition and a light source; it is held in place by a multi-axial arm system. It is similar to an endoscope in terms of optics and in the fact that the surgeon views the images on a screen, which can be either 2D or 3D; however, the image-acquisition system is located outside the surgical cavity, while still allowing high-quality lighting and magnification, as with the microscope (Figs. 1 and 2D) (Table 1). The lack of stereopsis is still considered one of the main limitations of this solution, although it has reportedly been overcome through practice or the application of 3D visualization systems (Mamelak et al., 2008, 2010). The main advantages compared to the microscope include the improved depth of field and the ability to integrate additional digital information, along with a perceived improvement in the surgeon's comfort and the reduced space occupied in the operating room. The possibility to apply semi-robotized arms has also helped overcome the difficulties related to frequent repositioning, bringing the exoscope to the microscope's standard of maneuverability.
One of the biggest advantages is the possibility of shifting between magnified and normal view, from the screen to the surgical field and vice versa, which is very uncomfortable with the microscope and simply impossible with the endoscope. A recent work by Rösler et al. reported some difficulties in completing surgical operations without switching to the microscope, mainly due to difficulties in eye-hand coordination and depth perception, but these issues seemed related more to the need to adjust to a new system than to intrinsic flaws of the exoscope technology. In a recent review on the use of the exoscope in neurosurgery, intraoperative complications due to visualization difficulties were analyzed, supporting the acceptable safety profile of this solution (Abramovic et al., 2022; Calloni et al., 2022; Piquer et al., 2014; Rösler et al., 2022). An additional value of the exoscope in terms of education and training is that its reduced dimensions compared to the microscope make it easier for surgeons, residents, students and OR staff to follow the surgery on a screen equipped with 3D-4K technology.
Fig. 3

PubMed publication frequency chart. (A) PubMed results from the search of combination of words related to traditional visualization tools in neurosurgery as microscope (blue bars), endoscope (orange bars) and loupes (gray bars); (B) PubMed results from the search of exoscope in neurosurgery; the red square highlights how the publication rate remained low and only started to raise in 2017; (C) PubMed results from the search of augmented reality in neurosurgery. (For interpretation of the references to colour in this figure legend, the reader is referred to the Web version of this article.)

Endoscopy and microscopy have undergone a great deal of innovation, and consistent research is still ongoing, as depicted by the publication rates over the years (Fig. 3A), whereas further innovation in loupes does not appear to have raised comparable research interest. The exoscope, on the other hand, is a younger technology that is beginning to attract regular interest from the research community (Fig. 3B).
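Publication-frequency charts like those in Fig. 3 boil down to a per-year tally of search results. A minimal sketch of that tally (the year values below are invented placeholders, not the actual PubMed counts):

```python
from collections import Counter

# Hypothetical publication years, e.g. parsed from exported PubMed records.
years = [2015, 2017, 2017, 2018, 2018, 2018, 2020, 2021, 2021, 2021, 2021]

per_year = Counter(years)

# Include years with zero publications so the bar chart has no gaps.
for year in range(min(years), max(years) + 1):
    print(f"{year}: {'#' * per_year.get(year, 0)}")
```

The same tally, fed to any plotting library, produces the bar charts shown in the figure.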

The evolution of surgical navigation

Besides requiring the best possible visualization of the surgical field, neurosurgeons increasingly rely on tools and methods to maximize their orientation abilities. From the pre-operative to the intra-operative phase, navigation dictates the location and shape of the incision and the best surgical approach, and provides a better understanding of which structures can be sacrificed to maximize access and resection and which must be protected.

Imaging

At the dawn of neurosurgery, the clinical picture was the only element that could guide the surgeon in deciding where and how to operate on a patient. Things changed towards the end of the 19th century, when X-rays began to be used to observe the inside of the body and techniques such as ventriculography and pneumo-encephalography were born (Dandy, 1918; Enchev, 2009). The real revolution was brought about by the development of modern imaging techniques such as CT (Hounsfield, 1995), MRI (Mansfield and Maudsley, 1977), PET and SPECT (Beyer et al., 2000), and ultrasound (Stevenson and Roelandt, 1986). Still, however accurate the information provided, no direct spatial connection existed with the patient that could be used for real-time orientation in the operating room (OR); the information could only be used to build a mental map that the surgeon had to project onto the actual patient's head or spine (Table 2).
Table 2

Current orientation and navigation systems and key features.

System | View of the surgical field | Direct connection to patient | Intra-operative setting | Coupling with visualization tools
Imaging techniques alone | / | No | Possible | No
Frame-based stereotaxy | No | Yes | No (a) | No
Optic-based frameless navigation | Yes | Yes | Yes | Yes
Magnetic frameless navigation | Yes | Yes | Yes | Yes

(a) Calculations are made on the pre-operative exam, outside the operating room.


Frame-based stereotaxis

In parallel, at the beginning of the 20th century, the first real operative navigation system was developed: frame-based stereotaxy. This system focused on the localization of intracranial structures based on a coordinate system, calculated from a series of superficial landmarks and a neuroanatomy atlas (Horsley and Clarke, 1908). The subsequent adaptation to CT and MRI scans constituted an incredible breakthrough, which for the first time physically connected patient-specific imaging information with the actual patient, providing a series of numbers that guided the surgeon in the OR (Heilbrun et al., 1983). As incredible as it might seem, this first and still-used navigation system does not rely at all on the visualization of a surgical field, which the surgeon never sees, but only on the coordinates provided by the imaging and the related calculations (Fig. 1). Such a simple surgical plan, while allowing only a limited range of procedures, such as electrode placement or biopsies, made it the ideal candidate for the application of a robot-based system, which has been validated and is currently used worldwide (Marcus et al., 2018). The main drawbacks of frame-based stereotaxy, apart from the obvious blindness for the whole surgical procedure, are the difficulty of integrating anatomical or functional information and the significant discomfort for the patient (Table 2).

Frameless navigation systems

It was only in the second half of the 1980s that the concept of navigation as we understand it today, in the form of frameless stereotaxy, was developed. Roberts et al. presented the first prototype, a system composed of a microscope, a solenoid and ultrasound-emitting sources, whose data were digitized, combined with a pre-selected target on a CT or MR study and projected into the microscope oculars (Enchev, 2009; Roberts et al., 1986). In the early 1990s, along with ultrasound-based systems, the well-known optic-based and magnetic-based navigation systems were born (Fig. 1) (Barnett et al., 1993; Kato et al., 1991). The former uses a stereo-camera to localize a tracker in three-dimensional space. The tracker is secured to the patient in order to allow co-registration of the CT or MRI scan to the actual patient anatomy. This method has the drawback that the line of sight between the camera and the tracker must be kept free of obstacles. The electromagnetic-based system, on the other hand, recognizes the position of the patient and tools thanks to the interference introduced in an artificially generated magnetic field. In this case, the presence of metals and additional sources of magnetic fields can potentially interfere (Table 2). A limitation which is still present, and which specifically relates to brain surgery, is the occurrence of brain shift, the natural deformation of the brain parenchyma once the skull and the meninges are opened (Gerard et al., 2021). To tackle this issue, there has been a great effort to bring image acquisition systems inside the OR, allowing intra-operative acquisition of anatomical information that can be used to update the navigational map and hence account for brain shift as well as the ongoing tissue manipulation (Grönemeyer et al., 1995; Matula et al., 1998).
Multiple solutions involving CT and MRI scanners and hybrid ORs with moving scanners have been developed, but without widespread adoption, due to high costs and the need for dedicated structures and staff. A different but clever option relies on the use of navigated ultrasound or automated adjustment algorithms, which present clear advantages in terms of both surgeon and patient comfort, simplicity of use and costs (Gerard et al., 2021; Iversen et al., 2018). Beyond anatomical orientation, the beauty of navigation systems is that they can effectively include functional information. Functional MRI provides information regarding the increase in oxygen demand related to the performance of specific tasks such as speaking or moving. This information can be overlaid on the navigated imaging, allowing the surgeon to avoid eloquent areas. More recent and innovative functional imaging techniques, such as tractography and navigated transcranial magnetic stimulation, provide even more precise and accurate information for both cortical and subcortical mapping (Coenen et al., 2005; Julkunen et al., 2009). The most relevant limitation of current navigation systems is that they provide information on a two-dimensional screen, far from the surgical field, still forcing the surgeon to shift his/her attention to the screen and back. For example, the microscope often needs to be moved out of the way to evaluate the position of a navigated probe, whereas in the case of the endoscope the surgeon only has to look at a different screen; less impairment is caused with surgical loupes or the exoscope, thanks to their higher degree of maneuverability. While beyond the scope of this work, a special mention is necessary for intraoperative neuromonitoring (IOM) techniques.
What is most interesting about IOM techniques is that they guide the surgical procedure without any type of image and, in case of discordance between anatomical and functional data, the functional data take priority over the anatomical information provided by imaging (Keeble et al., 2021; Sala, 2010).
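The co-registration step performed by frameless systems — aligning fiducials identified on the CT/MRI scan with the same points localized on the tracked patient — is, at its core, a paired-point rigid registration problem. A minimal sketch using the classic SVD-based (Kabsch/Horn) solution; the function name and fiducial coordinates are illustrative, not taken from any commercial navigation system:

```python
import numpy as np

def rigid_register(image_pts, patient_pts):
    """Paired-point rigid registration (Kabsch/Horn, SVD-based).

    image_pts, patient_pts: (N, 3) arrays of corresponding fiducials.
    Returns (R, t) such that patient_pts ~= image_pts @ R.T + t.
    """
    ci, cp = image_pts.mean(axis=0), patient_pts.mean(axis=0)
    H = (image_pts - ci).T @ (patient_pts - cp)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cp - R @ ci
    return R, t

# Illustrative fiducials (mm) and a known pose, to verify recovery.
fiducials = np.array([[0.0, 0, 0], [100, 0, 0], [0, 80, 0], [0, 0, 60]])
theta = np.deg2rad(30.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([10.0, -5.0, 20.0])
patient = fiducials @ R_true.T + t_true

R_est, t_est = rigid_register(fiducials, patient)
# Fiducial registration error (FRE): RMS residual after alignment.
fre = np.sqrt(np.mean(np.sum((fiducials @ R_est.T + t_est - patient) ** 2, axis=1)))
```

With noise-free points the recovered pose is exact and the FRE is essentially zero; real navigation systems report an analogous residual after patient registration, and brain shift is precisely the error this rigid model cannot capture.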

Augmented reality as the next step

Augmented reality (AR) refers to the technology which enables the generation and projection of digital content into the physical environment, usually visualized by means of transparent screens or special spectacles. It differs from Virtual Reality (VR) in that the viewer still perceives the physical world around them and is not immersed in a completely digital, artificially generated environment (Brigham, 2017; Milgram and Charbel, 1321). The very first prototype was designed in the 1960s (Schmalstieg, Hollerer), but it was only in the last two decades that interest in using such technology has grown, particularly in the medical world. Augmented reality can have a variety of applications in neurosurgery, from planning to rehearsal, to intra-operative use and training (Table 3). We systematically reviewed the available literature to explore current AR applications in neurosurgery and their relationship with different visualization and navigation tools. We applied the query ‘augmented reality’ AND ‘neurosurgery’ to the PubMed and Scopus databases and followed the PRISMA guidelines for systematic reviews (Fig. 4). We included a total of 113 original articles, of which 103 consisted of clinical applications at various stages of readiness and 10 were dedicated to AR for training (Table 4).
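The PubMed arm of this search can be reproduced programmatically. A minimal sketch against the public NCBI E-utilities `esearch` endpoint (the `retmax` value is an assumption, and the review also searched Scopus, which has a separate API):

```python
from urllib.parse import urlencode

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_esearch_url(term, retmax=200):
    """Build a PubMed esearch URL for a boolean query string."""
    params = {"db": "pubmed", "term": term, "retmax": retmax, "retmode": "json"}
    return f"{EUTILS}?{urlencode(params)}"

url = build_esearch_url('"augmented reality" AND "neurosurgery"')
# Fetching `url` (e.g. with urllib.request.urlopen) returns JSON whose
# esearchresult.idlist field holds the matching PMIDs.
print(url)
```

Deduplicating the PMIDs against the Scopus results and screening them is the step the PRISMA flow diagram (Fig. 4) documents.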
81 (71.6%) articles focused on brain surgery setting, (Abhari et al., 2015; Alaraj et al., 2013; Alfonso-Garcia et al., 2020; Bernard et al., 2021; Besharati Tabrizi and Mahvash, 2015; Cabrilo et al., 2014a, 2014b, 2014c, 2015; Carl et al., 2019d, 2020a; Coelho et al., 2020; Creighton et al., 2020; Cutolo et al., 2017; Davidovic et al., 2021; Deng et al., 2014), (Dho et al., 2021; van Doormaal et al., 2019, 2021; Drouin et al., 2017; Eftekhar, 2016a, 2016b; Fick et al., 2021; Finger et al., 2017; Gerard et al., 2018; Greuter et al., 2021a, 2021b; Haemmerli et al., 2021; Haouchine et al., 2020a, 2020b, 2022; Henssen et al., 2020), (Hou et al., 2016a; Hou et al., 2016b; Ille et al., 2021; Incekara et al., 2018; Inoue et al., 2013; Ivan et al., 2021; Karmonik et al., 2018; Kersten-Oertel et al., 2015; Koike et al., 2021; Lai et al., 2020; ÉDrouin et al., 2017; ÉReyes et al., 2020; ÉReyes et al., 2018; Li et al., 2016; Li et al., 2021; Louis et al., 2021), (Mahvash and Besharati Tabrizi, 2013; Martirosyan et al., 2015; Maruyama et al., 2018; Mascitelli et al., 2018; Montemurro et al., 2021; Moon et al., 2022; Morales Mojica et al., 2021; Nguyen et al., 2020; Pennacchietti et al., 2021; Petrone et al., 2022; Pojskić et al., 2022; Qi et al., 2021; Rau et al., 2021; Rios-Vicil et al., 2022; Rychen et al., 2020), (Satoh et al., 2019; Satoh et al., 2021; Scherschinski et al., 2022; Schneider et al., 2021; Schneider et al., 2021; Schwam et al., 2021; Shu et al., 2022; Si et al., 2019; Skyrman et al., 2021; Steiert et al., 2022; chenWang lei et al., 2016; Thabit et al., 2022; Van Gestel et al., 2021a; Van Gestel et al., 2021b; Vassallo et al., 2018; Watanabe et al., 2016; Wu et al., 2022; Yavas et al., 2021; Yudkowsky et al., 2013; Zeng et al., 2017) 32 (28.3%) focused on spine neurosurgery setting, (Alaraj et al., 2013; Boyaci et al., 2020; Buch et al., 2021; Burström et al., 2019, 2020; Carl et al., 2019a, 2019b, 2019c, 2020b; Dennler et al., 2020; Edström et al., 2020a, 
2020b, 2020c; Elmi-Terander et al., 2016, 2018, 2019, 2020; Felix et al., 2022; Frisk et al., 2022; Harel et al., 2022; Kosterhon et al., 2017; Liebmann et al., 2019; Liu et al., 2022; Manni et al., 2020; Molina et al., 2019, 2021a, 2021b, 2021c; Nguyen et al., 2020; Pojskić et al., 2021; Umebayashi et al., 2018; Urakov et al., 2019; Yoon et al., 2021). Unsurprisingly, the neurosurgical area that attracted the highest interest in AR applications was oncology (36.2%) (Abhari et al., 2015; Alfonso-Garcia et al., 2020; Bernard and Bijlenga, 2022; Besharati Tabrizi and Mahvash, 2015; Cabrilo et al., 2014c; Carl et al., 2019b; Carl et al., 2019d; Cutolo et al., 2017; Dho et al., 2021; van Doormaal et al., 2019; Eftekhar, 2016a; Fick et al., 2021; Finger et al., 2017; Gerard et al., 2018; Gerard et al., 2021; Hou et al., 2016b; Incekara et al., 2018; Inoue et al., 2013; Ivan et al., 2021; Koike et al., 2021; Lai et al., 2020; Léger et al., 2017; Léger et al., 2020; Li et al., 2016; Louis et al., 2021; Mahvash and Besharati Tabrizi, 2013; Mascitelli et al., 2018; Montemurro et al., 2021; Pennacchietti et al., 2021; Pojskić et al., 2022; Qi et al., 2021; Rios-Vicil et al., 2022; Satoh et al., 2019; Satoh et al., 2021; Schwam et al., 2021; Shu et al., 2022; Sun et al., 2016; Watanabe et al., 2016; Yavas et al., 2021), followed by degenerative spine (25.6%) (Boyaci et al., 2020; Buch et al., 2021; Burström et al., 2020; Carl et al., 2019a, 2020b; Edström et al., 2020a, 2020b; Elmi-Terander et al., 2016, 2018, 2019, 2020; Felix et al., 2022; Frisk et al., 2022; Harel et al., 2022; Kosterhon et al., 2017; Liebmann et al., 2019; Manni et al., 2020; Molina et al., 2021c; Pojskić et al., 2021; Umebayashi et al., 2018), vascular (13.3%) (Cabrilo et al., 2014a, 2014b, 2015; Carl et al., 2020a; Greuter et al., 2021a, 2021b; Haouchine et al., 2020b; Karmonik et al., 2018; Kersten-Oertel et al., 2015; Louis et al., 2021; Martirosyan et al., 2015; Moon et al., 2022; Rychen 
et al., 2020; Scherschinski et al., 2022; Vassallo et al., 2018), hydrocephalus and functional/trauma (both 6.2%) (van Doormaal et al., 2021; Eftekhar, 2016b; Hou et al., 2016a; Li et al., 2021; Rau et al., 2021; Schneider et al., 2021; Skyrman et al., 2021; Van Gestel et al., 2021a, 2021b; Wu et al., 2022; Yudkowsky et al., 2013; Zeng et al., 2017) and reconstructive surgery (2.6%) (Coelho et al., 2020; Steiert et al., 2022; Thabit et al., 2022). 14.1% of the articles presented general-purpose applications in neurosurgery with no focus on a specific disease (Alaraj et al., 2013; Creighton et al., 2020; Davidovic et al., 2021; Deng et al., 2014; Drouin et al., 2017; Haemmerli et al., 2021; Haouchine et al., 2020a; Haouchine et al., 2022; Henssen et al., 2020; Ille et al., 2021; Léger et al., 2018; Maruyama et al., 2018; Morales Mojica et al., 2021; Nguyen et al., 2020; Petrone et al., 2022; Si et al., 2019). With regard to integration with current visualization systems, the microscope is by far the most frequently utilized (29.2%) (Alfonso-Garcia et al., 2020; Bernard et al., 2021; Cabrilo et al., 2014a; Cabrilo et al., 2014b; Cabrilo et al., 2014c; Cabrilo et al., 2015; Carl et al., 2020a; Carl et al., 2019a; Carl et al., 2019b; Carl et al., 2020b; Carl et al., 2019c; Carl et al., 2019d; Drouin et al., 2017; Gerard et al., 2018; Greuter et al., 2021a; Haemmerli et al., 2021; Haouchine et al., 2020a; Haouchine et al., 2022; Haouchine et al., 2020b; Koike et al., 2021; Kosterhon et al., 2017; Louis et al., 2021; Martirosyan et al., 2015; Mascitelli et al., 2018; Pojskić et al., 2021; Pojskić et al., 2022; Rios-Vicil et al., 2022; Rychen et al., 2020; Scherschinski et al., 2022; Schwam et al., 2021; Sun et al., 2016; Umebayashi et al., 2018; Vassallo et al., 2018), followed by the endoscope (3.5%) (Finger et al., 2017; Lai et al., 2020; Li et al., 2016; Pennacchietti et al., 2021) and the exoscope (0.9%) (Kersten-Oertel et al., 2015). 
In general, we found that AR applications tend to be employed independently of current visualization systems, relying on their own supports such as head-up displays, tablets and smartphones (66%) (Abhari et al., 2015; Alaraj et al., 2013; Besharati Tabrizi and Mahvash, 2015; Boyaci et al., 2020; Buch et al., 2021; Burström et al., 2019, 2020; Coelho et al., 2020; Creighton et al., 2020; Cutolo et al., 2017; Davidovic et al., 2021; Deng et al., 2014; Dennler et al., 2020; Dho et al., 2021; van Doormaal et al., 2019, 2021; Edström et al., 2020a, 2020b, 2020c; Eftekhar, 2016a, 2016b; Elmi-Terander et al., 2016, 2018, 2019, 2020; Felix et al., 2022; Fick et al., 2021; Frisk et al., 2022; Greuter et al., 2021b; Harel et al., 2022; Henssen et al., 2020; Hou et al., 2016a), (Hou et al., 2016b; Ille et al., 2021; Incekara et al., 2018; Inoue et al., 2013; Ivan et al., 2021; Karmonik et al., 2018; Léger et al., 2017; Léger et al., 2020; Léger et al., 2018; Li et al., 2021; Liebmann et al., 2019; Liu et al., 2022; Mahvash and Besharati Tabrizi, 2013; Manni et al., 2020; Maruyama et al., 2018; Molina et al., 2021a; Molina et al., 2021b; Molina et al., 2021c; Molina et al., 2019; Montemurro et al., 2021; Moon et al., 2022; Morales Mojica et al., 2021; Nguyen et al., 2020; Petrone et al., 2022; Qi et al., 2021; Rau et al., 2021; Satoh et al., 2019), (Satoh et al., 2021; Schneider et al., 2021; Shu et al., 2022; Si et al., 2019; Skyrman et al., 2021; Steiert et al., 2022; Thabit et al., 2022; Urakov et al., 2019; Van Gestel et al., 2021a, 2021b; Watanabe et al., 2016; Wu et al., 2022; Yavas et al., 2021; Yoon et al., 2021; Yudkowsky et al., 2013; Zeng et al., 2017). On the other hand, the navigation setting most frequently employed was the frameless one (62.8%) (Bernard et al., 2021; Besharati Tabrizi and Mahvash, 2015; Burström et al., 2019, 2020; Cabrilo et al., 2014a, 2014b, 2014c, 2015; Carl et al., 2019a, 2019b, 2019c, 2019d, 2020a, 2020b; Cutolo et al., 2017; Davidovic et al., 
2021; Deng et al., 2014; van Doormaal et al., 2019; Drouin et al., 2017; Edström et al., 2020a, 2020b, 2020c), (Elmi-Terander et al., 2020; Elmi-Terander et al., 2019; Elmi-Terander et al., 2018; Elmi-Terander et al., 2016; Felix et al., 2022; Fick et al., 2021; Frisk et al., 2022; Gerard et al., 2018; Greuter et al., 2021a; Haemmerli et al., 2021; Haouchine et al., 2020b; Harel et al., 2022; Inoue et al., 2013; Kersten-Oertel et al., 2015; Kosterhon et al., 2017; Lai et al., 2020; Léger et al., 2020; Léger et al., 2018; Li et al., 2016), (Liu et al., 2022; Maruyama et al., 2018; Mascitelli et al., 2018; Molina et al., 2019, 2021a, 2021b, 2021c; Montemurro et al., 2021; Moon et al., 2022; Pennacchietti et al., 2021; Petrone et al., 2022; Pojskić et al., 2021, 2022; Qi et al., 2021; Rios-Vicil et al., 2022; Rychen et al., 2020; Satoh et al., 2019, 2021; Scherschinski et al., 2022), (Schneider et al., 2021; Schwam et al., 2021; Si et al., 2019; Skyrman et al., 2021; Sun et al., 2016; Umebayashi et al., 2018; Van Gestel et al., 2021b; Watanabe et al., 2016; Yavas et al., 2021; Zeng et al., 2017), followed by independent navigation/registration techniques that involved either manual registration or customized surface recognition algorithms (27.4%) (Abhari et al., 2015; Alfonso-Garcia et al., 2020; Boyaci et al., 2020; Buch et al., 2021; Creighton et al., 2020; Dennler et al., 2020; Dho et al., 2021; Eftekhar, 2016a; Eftekhar, 2016b; Haouchine et al., 2020a; Hou et al., 2016a; Hou et al., 2016b; Incekara et al., 2018; Ivan et al., 2021; Koike et al., 2021; Léger et al., 2017; Li et al., 2021; Liebmann et al., 2019; Louis et al., 2021; Mahvash and Besharati Tabrizi, 2013; Manni et al., 2020; Martirosyan et al., 2015; Nguyen et al., 2020; Rau et al., 2021; Shu et al., 2022; Steiert et al., 2022; Thabit et al., 2022; Urakov et al., 2019; Van Gestel et al., 2021a; Vassallo et al., 2018; Wu et al., 2022). 
9.7% did not employ any navigation system, as they did not require patient co-registration (e.g. anatomy visualization for training or case rehearsal) (Alaraj et al., 2013; Coelho et al., 2020; van Doormaal et al., 2021; Greuter et al., 2021b; Haouchine et al., 2022; Henssen et al., 2020; Ille et al., 2021; Karmonik et al., 2018; Morales Mojica et al., 2021; Yoon et al., 2021; Yudkowsky et al., 2013). Finally, the site of projection of the AR information was most frequently the patient (77%) (Abhari et al., 2015; Alfonso-Garcia et al., 2020; Bernard et al., 2021; Besharati Tabrizi and Mahvash, 2015; Boyaci et al., 2020; Buch et al., 2021; Cabrilo et al., 2014a, 2014b, 2014c, 2015; Carl et al., 2019a, 2019b, 2019c, 2020a; Creighton et al., 2020; Cutolo et al., 2017; Davidovic et al., 2021; Deng et al., 2014; Dennler et al., 2020; Dho et al., 2021; van Doormaal et al., 2019; Drouin et al., 2017), (Eftekhar, 2016a, 2016b; Felix et al., 2022; Fick et al., 2021; Frisk et al., 2022; Greuter et al., 2021a; Haemmerli et al., 2021; Haouchine et al., 2020a, 2020b; Harel et al., 2022; Hou et al., 2016a, 2016b; Incekara et al., 2018; Inoue et al., 2013; Ivan et al., 2021), (Koike et al., 2021; Kosterhon et al., 2017; Léger et al., 2017; Léger et al., 2020; Léger et al., 2018; Li et al., 2021; Liebmann et al., 2019; Liu et al., 2022; Louis et al., 2021; Mahvash and Besharati Tabrizi, 2013; Manni et al., 2020; Martirosyan et al., 2015; Maruyama et al., 2018; Mascitelli et al., 2018; Molina et al., 2021a; Molina et al., 2021b; Molina et al., 2021c; Molina et al., 2019; Montemurro et al., 2021; Moon et al., 2022; Nguyen et al., 2020; Petrone et al., 2022; Pojskić et al., 2021; Pojskić et al., 2022; Qi et al., 2021; Rau et al., 2021), (Rios-Vicil et al., 2022; Rychen et al., 2020; Satoh et al., 2019, 2021; Scherschinski et al., 2022; Schneider et al., 2021; Schwam et al., 2021; Shu et al., 2022; Si et al., 2019; Skyrman et al., 2021; Steiert et al., 2022), (Sun et al., 2016; Thabit et al., 2022; Umebayashi et al., 2018; Urakov et al., 2019; Van Gestel et al., 2021a; Van Gestel et al., 2021b; Vassallo et al., 2018; Watanabe et al., 2016; Wu et al., 2022; Yavas et al., 2021; Zeng et al., 2017), followed by the navigation screen (15%) (Burström et al., 2019, 2020; Edström et al., 2020a, 2020b, 2020c; Elmi-Terander et al., 2016, 2018, 2019, 2020; Finger et al., 2017; Gerard et al., 2018; Haouchine et al., 2022; Kersten-Oertel et al., 2015; Lai et al., 2020; Li et al., 2021; Pennacchietti et al., 2021); 8.8% of the papers proposed systems projecting information in front of the surgeon with no specific relationship to the patient or a screen (Alaraj et al., 2013; Coelho et al., 2020; van Doormaal et al., 2021; Greuter et al., 2021b; Henssen et al., 2020; Ille et al., 2021; Karmonik et al., 2018; Morales Mojica et al., 2021; Yoon et al., 2021; Yudkowsky et al., 2013).
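Whether frame-based, frameless or based on a customized surface-matching algorithm, the registration techniques discussed above ultimately estimate a rigid transform between preoperative image space and patient space. As a purely illustrative sketch (not the implementation of any cited system, and assuming NumPy is available), point-based registration from paired fiducials can be computed with the classic Kabsch/Umeyama least-squares method:

```python
import numpy as np

def rigid_register(source, target):
    """Least-squares rigid registration (Kabsch/Umeyama, no scaling).

    source, target: (N, 3) arrays of paired fiducial coordinates, e.g.
    skin markers segmented in image space vs. the same markers digitized
    on the patient. Returns (R, t) such that target ~= source @ R.T + t.
    """
    src_c = source.mean(axis=0)
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

def fiducial_registration_error(source, target, R, t):
    """RMS residual over the fiducials, in the same units as the input (e.g. mm)."""
    residual = target - (source @ R.T + t)
    return float(np.sqrt((residual ** 2).mean()))
```

The RMS residual corresponds to the fiducial registration error that navigation and AR systems commonly report in millimeters after patient registration.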
Table 3

Potential augmented reality improvements and limitations of current visualization and navigation methods.

Surgery phase | Visualization or navigation method to which AR is applied | Potential AR improvements | AR limitations
Planning | / | Optimized incision and craniotomy; reduced need for localizing X-ray in spine procedures | Need for additional head-mounted or mobile display
Intra-operative | All | Overlay of anatomical and functional information on the surgical field; possibility to include patient information in the surgeon's field of view | 
Intra-operative | Microscope | No need to remove focus from the surgical field to obtain navigation information or additional useful anatomical and functional information | 
Intra-operative | Endoscope/Exoscope | Navigation, anatomical and functional information available directly on the operative screen | Limitations in stereopsis rendering on two-dimensional screens
Intra-operative | Surgical loupes | No need to remove focus from the surgical field to obtain navigation information or additional useful anatomical and functional information | Need to combine current loupes with additional AR functions; potential fatigue and nausea from prolonged use
Planning/Intra-operative | Stereotactic procedures | Possibility to visualize the anatomical/pathological target; reduced risk of multiple passes | Need for additional head-mounted or mobile display

AR: augmented reality.

Fig. 4

PRISMA flow diagram resulting from the PubMed and Scopus database search and subsequent study screening and selection.

Table 4

List of original articles and related information resulting from the PubMed and Scopus database search for “augmented reality” AND “neurosurgery”.

Author | Year | Type | Anatomical location | Topic | Visualization system | Navigation system | Projection field
Abhari et al. | 2015 | training | brain | oncology | independent | independent | patient
Alaraj et al. | 2013 | clinical | brain and spine | general | independent | no | point-of-view
Alfonso-Garcia et al. | 2020 | clinical | brain | oncology | microscope | independent | patient
Bernard et al. | 2021 | clinical | brain | oncology | microscope | frameless | patient
Besharati et al. | 2015 | clinical | brain | oncology | independent | frameless | patient
Boyaci et al. | 2020 | training | spine | degenerative | independent | independent | patient
Buch et al. | 2021 | clinical | spine | degenerative | independent | independent | patient
Burström et al. | 2019 | clinical | spine | spine | independent | frameless | screen
Burström et al. | 2020 | clinical | spine | degenerative | independent | frameless | screen
Cabrilo et al. | 2014 | clinical | brain | vascular | microscope | frameless | patient
Cabrilo et al. | 2015 | clinical | brain | vascular | microscope | frameless | patient
Cabrilo et al. | 2014 | clinical | brain | vascular | microscope | frameless | patient
Cabrilo et al. | 2014 | clinical | brain | oncology | microscope | frameless | patient
Carl et al. | 2019 | clinical | brain | oncology | microscope | frameless | patient/screen
Carl et al. | 2020 | clinical | brain | vascular | microscope | frameless | patient
Carl et al. | 2019 | clinical | spine | oncology | microscope | frameless | patient
Carl et al. | 2020 | clinical | spine | degenerative/oncology | microscope | frameless | patient
Carl et al. | 2019 | clinical | spine | oncology | microscope | frameless | patient
Carl et al. | 2019 | clinical | spine | degenerative | microscope | frameless | patient
Coelho et al. | 2020 | clinical | brain | malformation/reconstruction | independent | no | point-of-view
Creighton et al. | 2020 | clinical | brain | general | independent | independent | patient
Cutolo et al. | 2017 | clinical | brain | oncology | independent | frameless | patient
Davidovic et al. | 2021 | clinical/training | brain | general | independent | frameless | patient
Deng et al. | 2014 | clinical | brain | general | independent | frameless | patient
Dennler et al. | 2020 | clinical | spine | spine | independent | independent | patient
Dho et al. | 2021 | clinical | brain | oncology | independent | independent | patient
Drouin et al. | 2017 | clinical | brain | general | microscope | frameless | patient
Edström et al. | 2020 | clinical | spine | spine | independent | frameless | screen
Edström et al. | 2020 | clinical | spine | degenerative | independent | frameless | screen
Edström et al. | 2020 | clinical | spine | degenerative | independent | frameless | screen
Eftekhar | 2016 | clinical | brain | hydrocephalus | independent | independent | patient
Eftekhar | 2016 | clinical | brain | oncology | independent | independent | patient
Elmi-Terander et al. | 2020 | clinical | spine | degenerative | independent | frameless | screen
Elmi-Terander et al. | 2016 | clinical | spine | degenerative | independent | frameless | screen
Elmi-Terander et al. | 2018 | clinical | spine | degenerative | independent | frameless | screen
Elmi-Terander et al. | 2019 | clinical | spine | degenerative | independent | frameless | screen
Felix et al. | 2022 | clinical | spine | degenerative | independent | frameless | patient
Fick et al. | 2021 | clinical | brain | oncology | independent | frameless | patient
Finger et al. | 2017 | clinical | brain | oncology | endoscope | frameless | screen
Frisk et al. | 2022 | clinical | spine | degenerative | independent | frameless | patient
Gerard et al. | 2018 | clinical | brain | oncology | microscope | frameless | screen
Greuter et al. | 2021 | training | brain | vascular | independent | no | point-of-view
Greuter et al. | 2021 | clinical | brain | vascular | microscope | frameless | patient
Haemmerli et al. | 2021 | clinical | brain | general | microscope | frameless | patient
Haouchine et al. | 2022 | clinical | brain | general | microscope | no | screen
Haouchine et al. | 2020 | clinical | brain | vascular | microscope | frameless | patient
Haouchine et al. | 2020 | clinical | brain | general | microscope | independent | patient
Harel et al. | 2022 | clinical | spine | degenerative | independent | frameless | patient
Henssen et al. | 2020 | training | brain | general | independent | no | point-of-view
Hou et al. | 2016 | clinical | brain | oncology | independent | independent | patient
Hou et al. | 2016 | clinical | brain | functional/trauma | independent | independent | patient
Ille et al. | 2021 | training | brain | general | independent | no | point-of-view
Incekara et al. | 2018 | clinical | brain | oncology | independent | independent | patient
Inoue et al. | 2013 | clinical | brain | oncology | independent | frameless | patient
Ivan et al. | 2021 | clinical | brain | oncology | independent | independent | patient
Karmonik et al. | 2018 | clinical | brain | vascular | independent | no | point-of-view
Kersten-Oertel et al. | 2015 | clinical | brain | vascular | exoscope | frameless | screen
Koike et al. | 2021 | clinical | brain | oncology | microscope | independent | patient
Kosterhon et al. | 2017 | clinical | spine | degenerative | microscope | frameless | patient
Lai et al. | 2020 | clinical | brain | oncology | endoscope | frameless | screen
Léger et al. | 2020 | clinical | brain | oncology | independent | frameless | patient
Léger et al. | 2017 | clinical | brain | oncology | independent | independent | patient
Léger et al. | 2018 | clinical | brain | general | independent | frameless | patient
Li et al. | 2016 | clinical | brain | oncology | endoscope | frameless | screen
Li et al. | 2021 | clinical | brain | functional/trauma | independent | independent | patient
Liebmann et al. | 2019 | clinical | spine | degenerative | independent | independent | patient
Liu et al. | 2022 | clinical | spine | spine | independent | frameless | patient
Louis et al. | 2021 | clinical | brain | oncology/vascular | microscope | independent | patient
Mahvash and Besharati Tabrizi | 2013 | clinical | brain | oncology | independent | independent | patient
Manni et al. | 2020 | clinical | spine | degenerative | independent | independent | patient
Martirosyan et al. | 2015 | clinical | brain | vascular | microscope | independent | patient
Maruyama et al. | 2018 | clinical | brain | general | independent | frameless | patient
Mascitelli et al. | 2018 | clinical | brain | oncology | microscope | frameless | patient
Molina et al. | 2021 | clinical | spine | spine | independent | frameless | patient
Molina et al. | 2021 | clinical | spine | spine | independent | frameless | patient
Molina et al. | 2019 | clinical | spine | spine | independent | frameless | patient
Molina et al. | 2021 | clinical | spine | degenerative | independent | frameless | patient
Montemurro et al. | 2021 | clinical | brain | oncology | independent | frameless | patient
Moon et al. | 2022 | clinical | brain | oncology/vascular | independent | frameless | patient
Morales Mojica et al. | 2021 | clinical | brain | general | independent | no | point-of-view
Nguyen et al. | 2020 | clinical | brain and spine | general | independent | independent | patient
Pennacchietti et al. | 2021 | clinical | brain | oncology | endoscope | frameless | screen
Petrone et al. | 2022 | training | brain | general | independent | frameless | patient
Pojskić et al. | 2022 | clinical | brain | oncology | microscope | frameless | patient
Pojskić et al. | 2021 | clinical | spine | degenerative | microscope | frameless | patient
Qi et al. | 2021 | clinical | brain | oncology | independent | frameless | patient
Rau et al. | 2021 | clinical | brain | functional | independent | independent | patient
Rios-Vicil et al. | 2022 | clinical | brain | oncology | microscope | frameless | patient
Rychen et al. | 2020 | clinical | brain | vascular | microscope | frameless | patient
Satoh et al. | 2021 | clinical | brain | oncology | independent | frameless | patient
Satoh et al. | 2019 | clinical | brain | oncology/functional | independent | frameless | patient
Scherschinski et al. | 2022 | clinical | brain | vascular | microscope | frameless | patient
Schneider et al. | 2021 | clinical | brain | hydrocephalus | independent | frameless | patient
Schwam et al. | 2021 | clinical | brain | oncology | microscope | frameless | patient
Shu et al. | 2022 | clinical | brain | oncology | independent | independent | patient
Si et al. | 2019 | training | brain | general | independent | frameless | patient
Skyrman et al. | 2021 | clinical | brain | hydrocephalus/functional | independent | frameless | patient
Steiert et al. | 2022 | clinical | brain | malformation/reconstruction | independent | independent | patient
Sun et al. | 2016 | clinical | brain | oncology | microscope | frameless | patient
Thabit et al. | 2022 | clinical | brain | malformation/reconstruction | independent | independent | patient
Umebayashi et al. | 2018 | clinical | spine | degenerative | microscope | frameless | patient
Urakov et al. | 2019 | clinical | spine | spine | independent | independent | patient
van Doormaal et al. | 2021 | clinical | brain | hydrocephalus | independent | no | point-of-view
van Doormaal et al. | 2019 | clinical | brain | oncology | independent | frameless | patient
Van Gestel et al. | 2021 | clinical | brain | hydrocephalus | independent | independent | patient
Van Gestel et al. | 2021 | training | brain | hydrocephalus | independent | frameless | patient
Vassallo et al. | 2018 | clinical | brain | vascular | microscope | independent | patient
Watanabe et al. | 2016 | clinical | brain | oncology | independent | frameless | patient
Wu et al. | 2022 | clinical | brain | functional | independent | independent | patient
Yavas et al. | 2021 | clinical | brain | oncology | independent | frameless | patient
Yoon et al. | 2021 | clinical | spine | spine | independent | no | point-of-view
Yudkowsky et al. | 2013 | training | brain | hydrocephalus | independent | no | point-of-view
Zeng et al. | 2017 | clinical | brain | functional | independent | frameless | patient
The information provided by an AR device can be unrelated to the position of the patient, for example the visualization of vital signs, MRI images or IOM signals in one corner of the surgeon's visual field. More interestingly, it can be registered to the patient's actual position. In this case, it allows the visualization of digitally rendered anatomical structures, lesions, functional areas, sites for ideal incisions and craniotomies, as well as potential surgical trajectories (Cutolo et al., 2017; Elmi-Terander et al., 2020; Li et al., 2021). There are three main reasons supporting the great potential of augmented reality to improve current visualization and navigation systems. Firstly, the projection of digital information rests on invariant principles that can be applied to all surgical visualization tools, providing equivalent advantages to all of them at the same time. Secondly, while the microscope, endoscope, loupes and exoscope were developed and continuously refined over time to improve the visualization of a surface, AR provides information that goes beyond angles and edges, offering a see-through experience that allows the surgeon to adjust every surgical move in real time. Interestingly, our review shows that while microscopes and endoscopes are the visualization tools most frequently utilized, in the majority of cases AR applications tend to rely on independent supports such as head-mounted displays, tablets and smartphones, demonstrating the versatility of this type of technology. 
Thirdly, AR has the potential to finally bring the focus of the surgeon back into the surgical field, which was indeed the projection site in the vast majority of AR-related work (77% of cases). In the realm of brain surgery, as presented above, the main applications of AR focus on the localization of tumors or vascular malformations, with good reported accuracies (1–4 mm error compared to standard neuro-navigation) (Besharati Tabrizi and Mahvash, 2015; Haouchine et al., 2020b; Kersten-Oertel et al., 2015; Léger et al., 2020; Montemurro et al., 2021; Pojskić et al., 2022; Qi et al., 2021; Rychen et al., 2020). Localization is performed through head-mounted or mobile displays, which are then removed to make room for the microscope, at which point the AR information is generally lost. To address this problem, microscope-based AR systems have been developed that proved to be accurate and useful, particularly in vascular cases where the visualization of hidden vascular structures and malformations allows for safer surgeries (Fig. 2A). Possibly the greatest advancement brought about by AR will relate to those simple yet frequent interventions where the surgical target is not currently visualized. Stereotactic procedures such as biopsies, deep brain stimulation lead placement or simple ventricular catheter placement could finally be performed while looking directly at the target, completely changing the surgical experience, minimizing multiple passes, increasing safety and reducing operating times (Cho et al., 2020; Li et al., 2021; Satoh et al., 2019). A less frequent but most intriguing application concerns the possibility to plan and project anatomical information for skull reconstruction in pediatric patients with cranial deformities (Coelho et al., 2020; Thabit et al., 2022). 
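Once a structure has been registered to the patient, overlaying it in the viewer reduces to a standard camera transform. The following sketch is purely illustrative (the intrinsic parameters fx, fy, cx, cy and extrinsics R, t are assumptions, not values from any cited system); it projects registered 3-D points into the 2-D display of a calibrated viewer using the pinhole camera model:

```python
import numpy as np

def project_to_view(points_world, R, t, fx, fy, cx, cy):
    """Project registered 3-D points (patient/world frame, mm) into a
    calibrated 2-D AR display via the pinhole camera model.

    R, t: extrinsics mapping world -> camera frame (p_cam = R @ p + t).
    fx, fy, cx, cy: intrinsics in pixels. Returns (N, 2) pixel coordinates.
    """
    p_cam = points_world @ R.T + t      # world -> camera frame
    z = p_cam[:, 2]                     # depth along the optical axis
    u = fx * p_cam[:, 0] / z + cx       # perspective divide + principal point
    v = fy * p_cam[:, 1] / z + cy
    return np.stack([u, v], axis=1)
```

In a real system, the extrinsics come from tracking the display relative to the registered patient, and any registration error propagates directly into the overlay offset perceived by the surgeon.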
In endoscopic surgery, where the field of view is very narrow and distorted anatomy can be very challenging for the surgeon's orientation, AR allows the visualization of key structures such as the carotid arteries and optic nerves, or simply provides the ability to find the midline, making surgeries significantly safer (Fig. 2B). With regard to spine surgery, a significant difference can be made here, as the surgeon already uses a head-mounted tool: the surgical loupes. As can be expected, the majority of works focused on improving accuracy in pedicle screw placement. Elmi-Terander et al., along with others, assessed feasibility and accuracy in placing pedicle screws using a head-mounted AR display or screen-based navigation, observing increased accuracy compared to the free-hand technique, reduced intra-operative X-ray use and times comparable to current navigation systems (Fig. 2C) (Burström et al., 2019, 2020; Elmi-Terander et al., 2016, 2019, 2020). In the case of loupes, combining their zooming function with the projection of AR information could be interesting, as it is currently not implemented. Interesting work has also been done in spinal cord oncology by Carl et al., who focused on bringing tumor and structure location information through the lens of the microscope (Carl et al., 2019a, 2019b, 2019c, 2020b). It is easy to see how AR is bringing the attention and focus of the surgeon back into the surgical field, avoiding the need for a continuous shift between field and screen. In this regard, it will be interesting to follow the ongoing debate on the role of AR in relation to robotic neurosurgery, specifically in spine surgery. 
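Screw-placement accuracy in these studies is usually reported with breach grading, but simple geometric metrics are also common, such as the angle between the planned and achieved trajectories and the entry-point offset. A minimal illustrative sketch (the function name and interface are hypothetical, not taken from any cited study):

```python
import numpy as np

def trajectory_deviation(planned_entry, planned_target, actual_entry, actual_target):
    """Return (angle in degrees, entry-point offset in input units, e.g. mm)
    between a planned and an achieved screw trajectory."""
    a = np.asarray(planned_target, float) - np.asarray(planned_entry, float)
    b = np.asarray(actual_target, float) - np.asarray(actual_entry, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    angle = np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))  # clip guards rounding
    offset = float(np.linalg.norm(np.asarray(actual_entry, float)
                                  - np.asarray(planned_entry, float)))
    return float(angle), offset
```

For example, an achieved trajectory parallel to the plan but shifted by 1 mm yields a 0° angular deviation and a 1 mm entry offset.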
While robots can provide an advantage in terms of surgical touch, offering steady trajectories and eliminating physiological human tremor, they again tend to take the surgeon away from the field, reducing the important exposure to intra-operative surgical anatomy and even limiting the ability to deal with sudden complications. AR brings the surgeon closer and improves both in-person visualization and orientation, and it has clear advantages in terms of cost and potential for widespread adoption compared to robots, as well as to some navigation systems, microscopes and exoscopes. Thanks to advancements in computational power and data availability, research interest in AR applications has exploded in recent years and is predicted to continue expanding (Fig. 3C). Another area in which AR applications are proving useful is training, as almost one in ten of the papers we found focused on it (Abhari et al., 2015; Boyaci et al., 2020; Davidovic et al., 2021; Greuter et al., 2021b; Henssen et al., 2020; Ille et al., 2021; Petrone et al., 2022; Si et al., 2019; Van Gestel et al., 2021b; Yudkowsky et al., 2013). Almost all areas of neurosurgery are actively assessing the possibility of using AR as a training tool, from spine surgery to ventricular drain placement, white matter tract dissection and the study of neurovascular anatomy. In terms of the basic study of surgical anatomy, AR applications proved to be welcomed by students and residents, with a potential reduction in perceived cognitive load compared to studying the same structures in textbooks and atlases (Greuter et al., 2021b). In addition, multiple efforts are being made to develop AR-based neurosurgical training simulators, with either haptic feedback or synthetic materials, in order for trainees to experience realistic surgical operations comparable to real neurosurgery (Yudkowsky et al., 2013). 
In terms of training for specific procedures, examples of explored tasks are ventriculostomy and pedicle screw placement, with some interesting works observing an improvement in accuracy for both trained and untrained subjects who underwent AR training (Alaraj et al., 2013; Si et al., 2019). While AR-based training can be a very useful tool, the importance for students and residents of being concurrently exposed to actual surgeries and an adequate amount of surgical time cannot be overstated.

Challenges of AR

Even though AR has great potential in neurosurgery, challenges remain. Firstly, additional studies are needed to validate the safe application of this technology, making sure to understand all the potential sources of error that such systems can introduce, from pre-operative structure segmentation to patient registration and information projection (Di Ieva et al., 2014). Secondly, heavy head-mounted displays can cause fatigue, headache and nausea after long hours of use; hence the need for lighter, more comfortable and less tiring displays. The problem of brain shift is not resolved by the simple implementation of AR systems; the study of surface deformation features could potentially allow automated real-time adjustment of the projected structures, but this solution is still a work in progress (Gerard et al., 2018; Haouchine et al., 2020a). Additional logistic, hardware and software limitations have to be solved before the widespread adoption of these systems. In the struggle for the advancement of the field, the introduction of augmented reality technologies is set to be a real breakthrough. The development and clinical validation of augmented reality applications in neurosurgery have the potential to make surgeries safer, improve the surgical experience, provide better patient outcomes, and reduce health-related costs.

Authors’ contributions

AB and GMVP were responsible for the ideation of the work; AB and SN conducted database search, study selection and data extraction; AB, GMVP, FM, AF, SN and FSi contributed to the writing of the manuscript; AF, MB, FSa contributed to the review of the manuscript; AB and GMVP produced the images and tables; MB and FSa supervised the work.

Declaration of competing interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper. The present work did not receive any funding and was not supported by any grant.