Huoling Luo1, Dalong Yin2, Shugeng Zhang2, Deqiang Xiao1, Baochun He3, Fanzheng Meng4, Yanfang Zhang5, Wei Cai6, Shenghao He3, Wenyu Zhang6, Qingmao Hu1, Hongrui Guo4, Shuhang Liang4, Shuo Zhou4, Shuxun Liu4, Linmao Sun4, Xiao Guo4, Chihua Fang6, Lianxin Liu7, Fucang Jia8. 1. Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China. 2. Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China. 3. Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China. 4. Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China. 5. Department of Interventional Radiology, Shenzhen People's Hospital, Shenzhen, China. 6. Department of Hepatobiliary Surgery, Zhujiang Hospital, Southern Medical University, Guangzhou, China. 7. Department of Hepatobiliary Surgery, First Affiliated Hospital of Harbin Medical University, Harbin, China; Department of Hepatobiliary Surgery, Shengli Hospital Affiliated to University of Science and Technology of China, Hefei, China. Electronic address: liulianxin@ems.hrbmu.edu.cn. 8. Research Lab for Medical Imaging and Digital Surgery, Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; Shenzhen College of Advanced Technology, University of Chinese Academy of Sciences, Shenzhen, China. Electronic address: fc.jia@siat.ac.cn.
Abstract
OBJECTIVE: Understanding the three-dimensional (3D) spatial position and orientation of vessels and tumor(s) is vital in laparoscopic liver resection procedures. Augmented reality (AR) techniques can help surgeons see the patient's internal anatomy in conjunction with laparoscopic video images. METHOD: In this paper, we present an AR-assisted navigation system for liver resection based on a rigid stereoscopic laparoscope. The stereo image pairs from the laparoscope are used by an unsupervised convolutional neural network (CNN) framework to estimate depth and generate an intraoperative 3D liver surface. Meanwhile, 3D models of the patient's surgical field are segmented from preoperative CT images using the V-Net architecture for volumetric image data in an end-to-end predictive style. A globally optimal iterative closest point (Go-ICP) algorithm is adopted to register the pre- and intraoperative models into a unified coordinate space; then, the preoperative 3D models are superimposed on the live laparoscopic images to provide the surgeon with detailed information about the subsurface of the patient's anatomy, including tumors, their resection margins and vessels. RESULTS: The proposed navigation system is tested on four laboratory ex vivo porcine livers and in five operating-theatre in vivo porcine experiments to validate its accuracy. The ex vivo and in vivo reprojection errors (RPE) are 6.04 ± 1.85 mm and 8.73 ± 2.43 mm, respectively. CONCLUSION AND SIGNIFICANCE: Both the qualitative and quantitative results indicate that our AR-assisted navigation system shows promise and has the potential to be highly useful in clinical practice.
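The core of the registration step described above is finding a rigid transform aligning the preoperative model with the intraoperative surface. The abstract names Go-ICP as the authors' method; as a minimal illustrative sketch (not the authors' implementation), the closed-form rigid-alignment subproblem that ICP-family algorithms iterate can be solved with the Kabsch/SVD method. All data and names below (`pre`, `intra`, `kabsch`) are hypothetical, with synthetic noise-free point correspondences.

```python
import numpy as np

def kabsch(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst points."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)          # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t

rng = np.random.default_rng(0)
pre = rng.random((200, 3)) * 100.0               # synthetic "preoperative" surface (mm)
theta = np.deg2rad(20.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([5.0, -3.0, 12.0])
intra = pre @ R_true.T + t_true                  # synthetic "intraoperative" surface

R, t = kabsch(pre, intra)
residual = np.linalg.norm((pre @ R.T + t) - intra, axis=1)
print(residual.mean())                           # ~0 for exact correspondences
```

In practice an ICP loop alternates this step with nearest-neighbor correspondence search, and Go-ICP additionally uses branch-and-bound to guarantee a globally optimal solution; a reported reprojection error such as the 6.04 mm above would be measured against independent landmarks, not the fitting residual.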