Gurvan Lecuyer1,2, Martin Ragot3, Nicolas Martin3, Laurent Launay3, Pierre Jannin4. 1. IRT b-com, 1219 avenue des Champs Blancs, 35510 Cesson-Sevigne, France. gurvan.lecuyer@b-com.com. 2. INSERM, LTSI-UMR 1099, Univ. Rennes, 35000 Rennes, France. 3. IRT b-com, 1219 avenue des Champs Blancs, 35510 Cesson-Sevigne, France. 4. INSERM, LTSI-UMR 1099, Univ. Rennes, 35000 Rennes, France.
Abstract
PURPOSE: Annotation of surgical videos is a time-consuming task that requires specific knowledge. In this paper, we present and evaluate a deep learning-based method that includes pre-annotation of the phases and steps in surgical videos and user assistance in the annotation process. METHODS: We propose a classification function that automatically detects errors and infers temporal coherence in predictions made by a convolutional neural network. First, we trained three different neural network architectures to assess the method on two surgical procedures: cholecystectomy and cataract surgery. The proposed method was then implemented in annotation software to test its ability to assist surgical video annotation. A user study was conducted to validate our approach, in which participants had to annotate the phases and the steps of a cataract surgery video. Annotation accuracy and completion time were recorded. RESULTS: The participants who used the assistance system were 7% more accurate on the step annotation and 10 min faster than the participants who used the manual system. The results of the questionnaire showed that the assistance system did not disturb the participants and did not complicate the task. CONCLUSION: The annotation process is a difficult and time-consuming task essential to train deep learning algorithms. In this publication, we propose a method to assist the annotation of surgical workflows, validated through a user study. The proposed assistance system significantly improved annotation duration and accuracy.
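To make the temporal-coherence idea concrete: one simple way such a function can flag likely errors in per-frame phase predictions is to mark runs of a predicted label that are too short to be a plausible surgical phase. The sketch below is illustrative only, under the assumption of a minimum-run-length rule; the function names (`runs`, `flag_incoherent`) and the threshold are hypothetical, not the authors' implementation.

```python
def runs(labels):
    """Group a label sequence into (label, start_index, length) runs."""
    out, start = [], 0
    for i in range(1, len(labels) + 1):
        if i == len(labels) or labels[i] != labels[start]:
            out.append((labels[start], start, i - start))
            start = i
    return out

def flag_incoherent(labels, min_run=3):
    """Return indices of frames whose predicted phase run is shorter
    than min_run frames, i.e. temporally incoherent and worth review."""
    flagged = []
    for _, start, length in runs(labels):
        if length < min_run:
            flagged.extend(range(start, start + length))
    return flagged

# A single-frame "rinse" prediction inside a long "incision" phase is suspect.
preds = ["incision"] * 4 + ["rinse"] + ["incision"] * 3 + ["suture"] * 4
print(flag_incoherent(preds))  # → [4]
```

In an assisted-annotation workflow, flagged frames could be highlighted in the interface so the annotator reviews only suspicious segments instead of the whole video.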