Martin Wagner1, Andreas Bihlmaier2, Franziska Mathis-Ullrich2, Beat Peter Müller-Stich3, Hannes Götz Kenngott1, Patrick Mietkowski1, Paul Maria Scheikl2, Sebastian Bodenstedt4, Anja Schiepe-Tiska5, Josephin Vetter1, Felix Nickel1, Stefanie Speidel4, Heinz Wörn2. 1. Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany. 2. Health Robotics and Automation Laboratory, Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology, Karlsruhe, Germany. 3. Department for General, Visceral and Transplantation Surgery, Heidelberg University Hospital, Im Neuenheimer Feld 420, 69120 Heidelberg, Germany (beatpeter.mueller@med.uni-heidelberg.de). 4. Department for Translational Surgical Oncology, National Center for Tumor Diseases, Partner Site Dresden, Dresden, Germany. 5. Centre for International Student Assessment (ZIB) e.V., TUM School of Education, Technical University of Munich, Munich, Germany.
Abstract
BACKGROUND: We demonstrate the first self-learning, context-sensitive, autonomous camera-guiding robot applicable to minimally invasive surgery. Most surgical robots today are telemanipulators without autonomous capabilities. Autonomous systems have been developed for laparoscopic camera guidance; however, they follow simple rules and do not adapt their behavior to specific tasks, procedures, or surgeons. METHODS: The methodology presented here allows different robot kinematics to perceive their environment, interpret it according to a knowledge base, and perform context-aware actions. For training, twenty operations were conducted with human camera guidance by a single surgeon. Subsequently, the cognitive robotic camera control was evaluated experimentally. First, a ViKY EP system and a KUKA LWR 4 robot were trained on data from manual camera guidance recorded after completion of the surgeon's learning curve. Second, only data from the ViKY EP were used to train the LWR, and finally data from training with the LWR were used to re-train the LWR. RESULTS: The duration of each operation decreased with the robot's increasing experience, from 1704 ± 244 s to 1406 ± 112 s and 1197 s. Camera guidance quality (good/neutral/poor) improved from 38.6/53.4/7.9% to 49.4/46.3/4.1% and 56.2/41.0/2.8%. CONCLUSIONS: The cognitive camera robot improved its performance with experience, laying the foundation for a new generation of cognitive surgical robots that adapt to a surgeon's needs.
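The abstract describes the perceive-interpret-act methodology without implementation detail. The following minimal Python sketch illustrates one plausible reading of "self-learning from human camera guidance": camera poses demonstrated by a human assistant are stored against perceived context features and retrieved by similarity at runtime. The class and method names, the feature encoding, and the nearest-neighbor lookup are illustrative assumptions, not the authors' published method.

import numpy as np

class CameraGuide:
    """Sketch of a camera robot that learns poses from human demonstrations."""

    def __init__(self):
        self.contexts = []  # perceived surgical situations (feature vectors)
        self.poses = []     # camera poses chosen by the human assistant

    def record_demonstration(self, context: np.ndarray, camera_pose: np.ndarray):
        """Store one (situation, pose) pair from a human-guided operation."""
        self.contexts.append(context)
        self.poses.append(camera_pose)

    def act(self, context: np.ndarray) -> np.ndarray:
        """Interpret the current situation against the stored knowledge base
        and return the pose used in the most similar known situation."""
        knowledge = np.stack(self.contexts)
        nearest = np.argmin(np.linalg.norm(knowledge - context, axis=1))
        return self.poses[nearest]

# Hypothetical usage: a context vector might encode instrument tip positions
# and an ID for the current surgical phase.
guide = CameraGuide()
guide.record_demonstration(np.array([0.10, 0.25, 1.0]), np.array([0.12, 0.20, 0.30]))
guide.record_demonstration(np.array([0.40, 0.10, 2.0]), np.array([0.35, 0.15, 0.28]))
print(guide.act(np.array([0.38, 0.12, 2.0])))  # returns the pose from the second demo

Because the stored demonstrations form the knowledge base, retraining on data from a different robot (as in the ViKY-to-LWR transfer) amounts to replacing the recorded pairs while keeping the same perceive-interpret-act loop.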