
Visual interpretability in 3D brain tumor segmentation network.

Hira Saleem; Ahmad Raza Shahid; Basit Raza

Abstract

Medical image segmentation is complex, yet it is one of the most essential tasks in diagnostic procedures such as brain tumor detection. Several 3D Convolutional Neural Network (CNN) architectures have achieved remarkable results in brain tumor segmentation. However, owing to the black-box nature of CNNs, integrating such models into decisions about diagnosis and treatment is high-risk in the healthcare domain: without interpretability, it is difficult to explain the rationale behind a model's predictions. Hence, the successful deployment of deep learning models in the medical domain requires predictions that are accurate as well as transparent. In this paper, we generate 3D visual explanations to analyze a 3D brain tumor segmentation model by extending a post-hoc interpretability technique. We explore the advantages of a gradient-free interpretability approach over gradient-based approaches. Moreover, we interpret the behavior of the segmentation model with respect to the input Magnetic Resonance Imaging (MRI) volumes and investigate the model's prediction strategy. We also evaluate the extended methodology quantitatively for the medical image segmentation task, validating that our visual explanations do not convey false information. We find that the information captured by the model is coherent with the domain knowledge of human experts, which makes it more trustworthy. We use the BraTS-2018 dataset to train the 3D brain tumor segmentation network and perform interpretability experiments to generate the visual explanations.
Copyright © 2021 Elsevier Ltd. All rights reserved.
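
Note: the abstract does not spell out which post-hoc technique was extended to 3D, so the following is only a minimal sketch of the general idea it describes: a gradient-free, perturbation-based 3D saliency map (in the spirit of RISE/Score-CAM) for a PyTorch segmentation network. The model, the (1, C, D, H, W) input layout, and every parameter name here are illustrative assumptions, not the authors' implementation.

    import torch
    import torch.nn.functional as F

    def gradient_free_saliency_3d(model, volume, target_class,
                                  n_masks=200, keep_prob=0.5, grid=8):
        """RISE-style gradient-free saliency for a 3D segmentation model.

        volume: (1, C, D, H, W) MRI tensor; target_class: label to explain.
        Returns a (D, H, W) heatmap in which voxels whose occlusion most
        reduces the predicted extent of target_class score highest. Only
        forward passes are used, so the model is treated as a black box.
        """
        model.eval()
        _, _, D, H, W = volume.shape
        saliency = torch.zeros(D, H, W)
        norm = torch.zeros(D, H, W)
        with torch.no_grad():
            for _ in range(n_masks):
                # Coarse random binary mask, upsampled to input resolution
                coarse = (torch.rand(1, 1, grid, grid, grid) < keep_prob).float()
                mask = F.interpolate(coarse, size=(D, H, W),
                                     mode="trilinear", align_corners=False)
                logits = model(volume * mask)             # (1, K, D, H, W)
                probs = torch.softmax(logits, dim=1)
                score = probs[0, target_class].mean()     # class evidence kept
                saliency += score * mask[0, 0]
                norm += mask[0, 0]
        heatmap = saliency / norm.clamp(min=1e-8)
        return (heatmap - heatmap.min()) / (heatmap.max() - heatmap.min() + 1e-8)

Because the importance score is read from masked forward passes alone, this needs no gradients or access to internal layers, which is the practical advantage the abstract attributes to gradient-free approaches over gradient-based ones such as Grad-CAM. A quantitative check along the lines the abstract mentions could then compare the thresholded heatmap against the ground-truth tumor mask (e.g., by overlap).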

Keywords:  Brain tumor segmentation; Explainable artificial intelligence; Medical imaging; Visual explanations; Visual interpretability

Year: 2021    PMID: 33894501    DOI: 10.1016/j.compbiomed.2021.104410

Source DB: PubMed    Journal: Comput Biol Med    ISSN: 0010-4825    Impact factor: 4.589


  3 in total

1.  Explainable medical imaging AI needs human-centered design: guidelines and evidence from a systematic review.

Authors:  Haomin Chen; Catalina Gomez; Chien-Ming Huang; Mathias Unberath
Journal:  NPJ Digit Med       Date:  2022-10-19

2.  Explainable machine learning for precise fatigue crack tip detection.

Authors:  David Melching; Tobias Strohmann; Guillermo Requena; Eric Breitbarth
Journal:  Sci Rep       Date:  2022-06-09       Impact factor: 4.996

3.  Explainability of deep neural networks for MRI analysis of brain tumors.

Authors:  Ramy A Zeineldin; Mohamed E Karar; Ziad Elshaer; Jan Coburger; Christian R Wirtz; Oliver Burgert; Franziska Mathis-Ullrich
Journal:  Int J Comput Assist Radiol Surg       Date:  2022-04-23       Impact factor: 3.421
