Giuseppe Baselli, Marina Codari, Francesco Sardanelli.
Abstract
Machine learning (ML) and deep learning (DL) systems, currently employed in medical image analysis, are data-driven models often considered black boxes. However, improved transparency is needed to translate automated decision-making to clinical practice. To this aim, we propose a strategy to open the black box by presenting to the radiologist the annotated cases (ACs) proximal to the current case (CC), making the decision rationale and its uncertainty more explicit. The ACs, used for training, validation, and testing in supervised methods and for validation and testing in unsupervised ones, could be provided alongside the ML/DL tool. If the CC is localised in a classification space and proximal ACs are selected by a proper metric, the latter could be shown to radiologists in their original image form, enriched with annotations, thus allowing immediate interpretation of the CC classification. Moreover, the density of ACs in the CC neighbourhood, their image saliency maps, classification confidence, demographics, and clinical information would be available to radiologists. Thus, otherwise hidden information could be conveyed to radiologists, who would know the model output (what) and the salient image regions (where), enriched by ACs providing the classification rationale (why). Summarising, if a classifier is data-driven, let us make its interpretation data-driven too.
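The retrieval step the abstract describes, locating the CC in a classification feature space and selecting proximal ACs by a proper metric, can be sketched as a nearest-neighbour query over penultimate-layer embeddings. This is an illustrative sketch only: the embeddings below are synthetic, and the Euclidean metric and the `proximal_cases` helper are assumptions, not details given by the paper.

```python
import numpy as np

def proximal_cases(cc_embedding, ac_embeddings, k=5):
    """Return the indices and distances of the k annotated cases (ACs)
    closest to the current case (CC) in the feature space, using the
    Euclidean metric as one possible choice of proximity measure."""
    dists = np.linalg.norm(ac_embeddings - cc_embedding, axis=1)
    order = np.argsort(dists)[:k]
    return order, dists[order]

# Toy example: 100 ACs with 8-dimensional penultimate-layer features
rng = np.random.default_rng(0)
acs = rng.normal(size=(100, 8))   # stand-in for AC embeddings
cc = rng.normal(size=8)           # stand-in for the CC embedding
idx, d = proximal_cases(cc, acs, k=5)
# idx points back to the AC library, so the original annotated images
# (and their clinical metadata) can be displayed to the radiologist
```

In a real deployment, `idx` would index the AC library so that the corresponding annotated images, confidence values, and clinical information could be retrieved and shown next to the CC.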
Keywords: Artificial intelligence; Decision making (computer-assisted); Diagnosis; Machine learning; Radiology
Year: 2020 PMID: 32372200 PMCID: PMC7200961 DOI: 10.1186/s41747-020-00159-0
Source DB: PubMed Journal: Eur Radiol Exp ISSN: 2509-9280
Fig. 1 Breast arterial calcifications (BAC) detection by a convolutional neural network (CNN). a Original image (positive for BAC). b Detail including the unsegmented BAC (white arrow). c Heat map provided by the CNN. d Annotated image (BAC in yellow). The heat map (c) has the reduced resolution of the images input to the CNN
Fig. 2 The L and L-1 layers of a deep neural network
Fig. 3 The current case (red triangle) is positioned in the output feature space of the L-1 layer. A neighbourhood (red dashed circle) is fixed, and the training/validation annotated cases it includes (red circles) are used to provide reference images, classification confidence, and ancillary information. Cases outside the neighbourhood are shown as grey circles
Fig. 4 Possible locations of the current case (CC) in the feature space. a The CC (red triangle) falls into a region crowded with annotated cases (ACs), all assumed to be equally classified with high confidence (red circles). b The CC falls into an uninhabited region, which would highlight a lack of similar training or validation cases. c The CC falls into a crowded region, yet with differently classified ACs (red and orange circles), most likely yielding relatively low confidence
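The three situations of Fig. 4 amount to inspecting the CC neighbourhood: how many ACs fall within a fixed radius, and how consistently they are labelled. A minimal sketch of such a neighbourhood report follows; the radius, the 0.9 agreement threshold, and the `neighbourhood_report` helper are illustrative assumptions, not values from the paper.

```python
import numpy as np

def neighbourhood_report(cc, ac_embeddings, ac_labels, radius):
    """Summarise the CC neighbourhood in the spirit of Fig. 4:
    (a) crowded and consistently labelled, (b) uninhabited,
    or (c) crowded but with mixed labels."""
    dists = np.linalg.norm(ac_embeddings - cc, axis=1)
    inside = dists <= radius
    n = int(inside.sum())
    if n == 0:
        # Fig. 4b: no similar ACs, possible lack of similar training cases
        return {"n_acs": 0, "situation": "uninhabited"}
    labels = ac_labels[inside]
    agreement = np.bincount(labels).max() / n  # fraction of majority label
    situation = "crowded, consistent" if agreement >= 0.9 else "crowded, mixed"
    return {"n_acs": n, "agreement": float(agreement), "situation": situation}

# Toy AC library: three similar positive cases and one distant negative
acs = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1], [5.0, 5.0]])
labels = np.array([1, 1, 1, 0])
report = neighbourhood_report(np.array([0.05, 0.05]), acs, labels, radius=0.5)
```

Alongside the retrieved AC images, such a report could make explicit to the radiologist whether the classifier's decision rests on many consistent precedents, on none, or on conflicting ones.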
Fig. 5 Schemes of the diagnostic process aided by machine learning tools, showing the process with (a) and without (b) communication barriers. The latter option allows the clinician to retrieve information about classification results (what), object localisation (where), and added information on the decision-making process (why) derived from the annotated library