
CATARACTS: Challenge on automatic tool annotation for cataRACT surgery.

Hassan Al Hajj1, Mathieu Lamard2, Pierre-Henri Conze3, Soumali Roychowdhury4, Xiaowei Hu5, Gabija Maršalkaitė6, Odysseas Zisimopoulos7, Muneer Ahmad Dedmari8, Fenqiang Zhao9, Jonas Prellberg10, Manish Sahu11, Adrian Galdran12, Teresa Araújo13, Duc My Vo14, Chandan Panda15, Navdeep Dahiya16, Satoshi Kondo17, Zhengbing Bian4, Arash Vahdat4, Jonas Bialopetravičius6, Evangello Flouty7, Chenhui Qiu9, Sabrina Dill11, Anirban Mukhopadhyay18, Pedro Costa12, Guilherme Aresta13, Senthil Ramamurthy16, Sang-Woong Lee14, Aurélio Campilho13, Stefan Zachow11, Shunren Xia9, Sailesh Conjeti19, Danail Stoyanov20, Jogundas Armaitis6, Pheng-Ann Heng5, William G Macready4, Béatrice Cochener21, Gwenolé Quellec22.   

Abstract

Surgical tool detection is attracting increasing attention from the medical image analysis community. The goal generally is not to precisely locate tools in images, but rather to indicate which tools are being used by the surgeon at each instant. The main motivation for annotating tool usage is to design efficient solutions for surgical workflow analysis, with potential applications in report generation, surgical training and even real-time decision support. Most existing tool annotation algorithms focus on laparoscopic surgeries. However, with 19 million interventions per year, the most common surgical procedure in the world is cataract surgery. The CATARACTS challenge was organized in 2017 to evaluate tool annotation algorithms in the specific context of cataract surgery. It relies on more than nine hours of videos, from 50 cataract surgeries, in which the presence of 21 surgical tools was manually annotated by two experts. With 14 participating teams, this challenge can be considered a success. As might be expected, the submitted solutions are based on deep learning. This paper thoroughly evaluates these solutions: in particular, the quality of their annotations is compared to that of human interpretations. Next, lessons learnt from the differential analysis of these solutions are discussed. We expect these lessons to guide the design of efficient surgery monitoring tools in the near future.
Copyright © 2018 Elsevier B.V. All rights reserved.

Keywords:  Cataract surgery; Challenge; Deep learning; Video analysis

Year:  2018        PMID: 30468970     DOI: 10.1016/j.media.2018.11.008

Source DB:  PubMed          Journal:  Med Image Anal        ISSN: 1361-8415            Impact factor:   8.545


Related articles:  4 in total

1.  Assisted phase and step annotation for surgical videos.

Authors:  Gurvan Lecuyer; Martin Ragot; Nicolas Martin; Laurent Launay; Pierre Jannin
Journal:  Int J Comput Assist Radiol Surg       Date:  2020-02-10       Impact factor: 2.924

2.  Evaluation of Artificial Intelligence-Based Intraoperative Guidance Tools for Phacoemulsification Cataract Surgery.

Authors:  Rogerio Garcia Nespolo; Darvin Yi; Emily Cole; Nita Valikodath; Cristian Luciano; Yannek I Leiderman
Journal:  JAMA Ophthalmol       Date:  2022-02-01       Impact factor: 7.389

3.  SERV-CT: A disparity dataset from cone-beam CT for validation of endoscopic 3D reconstruction.

Authors:  P J Eddie Edwards; Dimitris Psychogyios; Stefanie Speidel; Lena Maier-Hein; Danail Stoyanov
Journal:  Med Image Anal       Date:  2021-11-06       Impact factor: 8.545

4.  Analysis of Cataract Surgery Instrument Identification Performance of Convolutional and Recurrent Neural Network Ensembles Leveraging BigCat.

Authors:  Nicholas Matton; Adel Qalieh; Yibing Zhang; Anvesh Annadanam; Alexa Thibodeau; Tingyang Li; Anand Shankar; Stephen Armenti; Shahzad I Mian; Bradford Tannen; Nambi Nallasamy
Journal:  Transl Vis Sci Technol       Date:  2022-04-01       Impact factor: 3.283
