Literature DB >> 35484439

Using Occlusion-Based Saliency Maps to Explain an Artificial Intelligence Tool in Lung Cancer Screening: Agreement Between Radiologists, Labels, and Visual Prompts.

Ziba Gandomkar1, Pek Lan Khong2, Amanda Punch1, Sarah Lewis3.   

Abstract

Occlusion-based saliency maps (OBSMs) are one approach for interpreting the decision-making process of an artificial intelligence (AI) system. This study explores the agreement among text responses from a cohort of radiologists describing diagnostically relevant areas on low-dose CT (LDCT) images. It also explores whether radiologists' descriptions of cases misclassified by the AI provide a rationale for ruling out the AI's output. OBSMs indicating the importance of different pixels to the final decision made by an AI tool were generated for 10 benign cases (3 misclassified by the AI tool as malignant) and 10 malignant cases (2 misclassified by the AI tool as benign). Thirty-six radiologists were asked to use radiological vocabulary, typical of reporting LDCT scans, to describe the mapped regions of interest (ROIs). The radiologists' annotations were then grouped using a clustering-based technique. Topics were extracted from the annotations, and for each ROI the percentage of annotations containing each topic was found. Radiologists annotated 17 and 24 unique ROIs on benign and malignant cases, respectively. Agreement on the main label (e.g., "vessel," "nodule") among radiologists was seen in only 12% of all areas (5/41 ROIs). Topic analysis identified six descriptors commonly associated with a lower malignancy likelihood. Eight common topics related to a higher malignancy likelihood were also determined. Occlusion-based saliency maps were used to explain an AI decision-making process to radiologists, who in turn provided insight into the level of agreement between the AI's decisions and the radiological lexicon.
© 2022. The Author(s).
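The abstract does not give implementation details, but the general occlusion technique it names works by sliding a masking patch over the image and measuring how much the model's output score drops when each region is hidden. A minimal sketch, assuming a hypothetical single-channel image and a stand-in `predict` scoring function (not the paper's actual AI tool):

```python
import numpy as np

def occlusion_saliency(image, predict, patch=8, stride=8, fill=0.0):
    """Occlusion-based saliency: hide one patch at a time and record how much
    the model's score drops; a large drop marks the region as important."""
    h, w = image.shape
    base = predict(image)                    # score on the unoccluded image
    sal = np.zeros((h, w))
    counts = np.zeros((h, w))
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = fill
            drop = base - predict(occluded)  # importance of this region
            sal[y:y + patch, x:x + patch] += drop
            counts[y:y + patch, x:x + patch] += 1
    return sal / np.maximum(counts, 1)       # average over overlapping patches

# Toy stand-in "model": scores the mean intensity of a fixed central window.
def toy_predict(img):
    return float(img[12:20, 12:20].mean())

img = np.zeros((32, 32))
img[12:20, 12:20] = 1.0                      # a bright synthetic "nodule"
smap = occlusion_saliency(img, toy_predict)
# the saliency map should peak over the nodule region
```

In the study's setting, such a map would be thresholded into the ROIs that radiologists were then asked to describe.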

Keywords:  Artificial intelligence; Lung computed tomography; Occlusion-based saliency maps; Radiologists

Year:  2022        PMID: 35484439      PMCID: PMC9582174          DOI: 10.1007/s10278-022-00631-w

Source DB:  PubMed          Journal:  J Digit Imaging        ISSN: 0897-1889            Impact factor:   4.903


References:  26 in total

1.  Visually interpretable deep network for diagnosis of breast masses on mammograms.

Authors:  Seong Tae Kim; Jae-Hyeok Lee; Hakmin Lee; Yong Man Ro
Journal:  Phys Med Biol       Date:  2018-12-04       Impact factor: 3.609

2.  Unsupervised lesion detection via image restoration with a normative prior.

Authors:  Xiaoran Chen; Suhang You; Kerem Can Tezcan; Ender Konukoglu
Journal:  Med Image Anal       Date:  2020-05-01       Impact factor: 8.545

3.  Evaluating the Visualization of What a Deep Neural Network Has Learned.

Authors:  Wojciech Samek; Alexander Binder; Gregoire Montavon; Sebastian Lapuschkin; Klaus-Robert Muller
Journal:  IEEE Trans Neural Netw Learn Syst       Date:  2017-11       Impact factor: 10.451

4.  Usefulness of morphological characteristics for the differentiation of benign from malignant solitary pulmonary lesions using HRCT.

Authors:  M D Seemann; A Staebler; T Beinert; H Dienemann; B Obst; M Matzko; C Pistitsch; M F Reiser
Journal:  Eur Radiol       Date:  1999       Impact factor: 5.315

5.  Sparse Autoencoder for Unsupervised Nucleus Detection and Representation in Histopathology Images.

Authors:  Le Hou; Vu Nguyen; Ariel B Kanevsky; Dimitris Samaras; Tahsin M Kurc; Tianhao Zhao; Rajarsi R Gupta; Yi Gao; Wenjin Chen; David Foran; Joel H Saltz
Journal:  Pattern Recognit       Date:  2018-09-13       Impact factor: 7.740

6.  New classification of small pulmonary nodules by margin characteristics on high-resolution CT.

Authors:  K Furuya; S Murayama; H Soeda; J Murakami; Y Ichinose; H Yabuuchi; Y Katsuda; M Koga; K Masuda
Journal:  Acta Radiol       Date:  1999-09       Impact factor: 1.990

7.  The "laboratory" effect: comparing radiologists' performance and variability during prospective clinical and laboratory mammography interpretations.

Authors:  David Gur; Andriy I Bandos; Cathy S Cohen; Christiane M Hakim; Lara A Hardesty; Marie A Ganott; Ronald L Perrin; William R Poller; Ratan Shah; Jules H Sumkin; Luisa P Wallace; Howard E Rockette
Journal:  Radiology       Date:  2008-08-05       Impact factor: 11.105

8.  Identifying Medical Diagnoses and Treatable Diseases by Image-Based Deep Learning.

Authors:  Daniel S Kermany; Michael Goldbaum; Wenjia Cai; Carolina C S Valentim; Huiying Liang; Sally L Baxter; Alex McKeown; Ge Yang; Xiaokang Wu; Fangbing Yan; Justin Dong; Made K Prasadha; Jacqueline Pei; Magdalene Y L Ting; Jie Zhu; Christina Li; Sierra Hewett; Jason Dong; Ian Ziyar; Alexander Shi; Runze Zhang; Lianghong Zheng; Rui Hou; William Shi; Xin Fu; Yaou Duan; Viet A N Huu; Cindy Wen; Edward D Zhang; Charlotte L Zhang; Oulan Li; Xiaobo Wang; Michael A Singer; Xiaodong Sun; Jie Xu; Ali Tafreshi; M Anthony Lewis; Huimin Xia; Kang Zhang
Journal:  Cell       Date:  2018-02-22       Impact factor: 41.582

9.  Statistical considerations for testing an AI algorithm used for prescreening lung CT images.

Authors:  Nancy A Obuchowski; Jennifer A Bullen
Journal:  Contemp Clin Trials Commun       Date:  2019-08-22

10.  Detection and classification of COVID-19 disease from X-ray images using convolutional neural networks and histogram of oriented gradients.

Authors:  Aleka Melese Ayalew; Ayodeji Olalekan Salau; Bekalu Tadele Abeje; Belay Enyew
Journal:  Biomed Signal Process Control       Date:  2022-01-26       Impact factor: 3.880

