Colin Arrowsmith, Reza Reiazi, Mattea L Welch, Michal Kazmierski, Tirth Patel, Aria Rezaie, Tony Tadic, Scott Bratman, Benjamin Haibe-Kains.
Abstract
Keywords: Computed tomography; Deep learning; Metal artifact detection; Radiomics
Year: 2021 PMID: 34258406 PMCID: PMC8254196 DOI: 10.1016/j.phro.2021.04.001
Source DB: PubMed Journal: Phys Imaging Radiat Oncol ISSN: 2405-6316
Fig. 1 The study design includes five main steps: (1) retrieval of the head and neck CT imaging volume dataset and labelling of dental artifacts (DA); (2) initial classification of DAs using a sinogram-based detection (SBD) method; (3) secondary classification of the SBD-classified dental artifacts using a previously trained convolutional neural network (CNN); (4) model evaluation; and (5) exploration of the effect of DA magnitude and of its distance from the gross tumour volume (GTV) on radiomic features.
Fig. 2 An illustration of the two binary DA classifiers used in this study. (A) The two steps of sinogram-based detection (SBD). First, a slice from a CT volume is thresholded and blurred, and the result is thresholded again to mask out pixels inside the patient's body. The remaining pixels are then thresholded once more, revealing the streaks outside the patient's body. The image is transformed to the sinogram domain and the mean sinogram pixel intensity is computed. (B) An example of the mean sinogram intensity for each slice in six CT volumes (each volume shown in a different colour). A peak-detection algorithm is applied to this curve for a given patient to identify slices likely to contain DAs. The detected slices are annotated with Xs, showing that the algorithm detected one peak in each of the green and blue curves (both volumes labelled as 'strong DA'). The dashed lines represent the peak-detection threshold for each patient. (C) The CNN architecture used in the study. The network consists of five convolutional layers (conv_1 to conv_5) producing a total of 64 filters. (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
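The SBD steps described in panels (A) and (B) can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the threshold values (`body_thresh`, `streak_thresh`), the Gaussian blur width, the rotation-based sinogram approximation, and the per-patient peak threshold are all assumptions made for demonstration.

```python
import numpy as np
from scipy import ndimage
from scipy.signal import find_peaks

def streak_mask(ct_slice, body_thresh=500.0, streak_thresh=200.0):
    """Isolate streak pixels outside the patient's body (illustrative thresholds)."""
    bright = (ct_slice > body_thresh).astype(float)          # first threshold
    body = ndimage.gaussian_filter(bright, sigma=3) > 0.1    # blur + re-threshold -> body mask
    return (ct_slice > streak_thresh) & ~body                # final threshold, outside the body

def mean_sinogram_intensity(mask, angles=range(0, 180, 15)):
    """Crude sinogram: line-integral projections of the streak mask at several angles."""
    proj = [ndimage.rotate(mask.astype(float), a, reshape=False, order=1).sum(axis=0)
            for a in angles]
    return float(np.mean(proj))

def detect_da_slices(msi_per_slice, rel_height=0.5):
    """Panel (B): peak detection on the per-slice mean sinogram intensity curve."""
    msi = np.asarray(msi_per_slice, dtype=float)
    threshold = msi.min() + rel_height * (msi.max() - msi.min())  # assumed per-patient threshold
    peaks, _ = find_peaks(msi, height=threshold)
    return peaks
```

Slices whose mean sinogram intensity forms a peak above the per-patient threshold would be flagged as likely DA slices.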
Fig. 3 Flowchart of the SBD-CNN hybrid algorithm for dental artifact detection. Images were annotated manually and first binned using sinogram-based detection (SBD) according to the average intensity of the corresponding sinogram. The original images were then classified using the CNN model. Images labelled as artifact-positive by both the SBD and the CNN were categorized as having strong dental artifacts; images labelled as artifact-negative by both methods were labelled as having no artifacts. In this way, the hybrid model can label images according to the strength of artifact presence.
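The decision logic in the flowchart reduces to combining the two binary outputs into three classes. A minimal sketch follows; the function name and the "weak" label for the disagreement case are assumptions based on the three-class description in the captions:

```python
def hybrid_da_label(sbd_positive: bool, cnn_positive: bool) -> str:
    """Combine the two binary DA classifiers into a three-class label."""
    if sbd_positive and cnn_positive:
        return "strong"   # both methods flag an artifact
    if not sbd_positive and not cnn_positive:
        return "none"     # both methods agree there is no artifact
    return "weak"         # the two methods disagree (assumed intermediate class)
```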
Fig. 5 Performance of DA classification. (A) Distributions of how close the predicted slice index is to the labelled index for the threshold-based and sinogram-based detection methods. The difference in slice label between two human annotators for a set of 482 CT volumes is also shown. (B) Performance (MCC) of the DA magnitude classification techniques used in this study. The p-value of the MCC was <0.001 for all classifiers. The sinogram-based detection (SBD) and convolutional neural network (CNN) are both binary classifiers. The SBD was tested on 3211 CT image volumes and the CNN binary classifier was tested on a subset of 2319 image volumes. The SBD-CNN hybrid algorithm is a three-class classifier, so its three-class MCC is displayed here.
Fig. 4 Correlation between GTV-DA distance and feature values, based on partial Spearman correlation. (A) Venn diagram showing the number of features with |r| > 0.55, calculated from patients in each DA class. This diagram only includes significant correlations (p < 0.05). For instance, 36 features had |r| > 0.55 in patients with strong DAs (pink region), but |r| < 0.55 when calculated from weak-DA or no-DA images. Nine features had |r| > 0.55 for all three DA groups (grey region). (B) The number of features with a DA-GTV distance correlation above a given cutoff, grouped by feature type. (C) The same correlations grouped by filter type. (B) and (C) only include significant features (p < 0.05). (For interpretation of the references to colour in this figure legend, the reader is referred to the web version of this article.)
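The feature-screening step behind the Venn diagram can be sketched as follows. Note that the study uses partial Spearman correlation; for a self-contained illustration this sketch substitutes a plain Spearman correlation (`scipy.stats.spearmanr`) with the same |r| > 0.55 and p < 0.05 cutoffs, and the function name is hypothetical:

```python
from scipy.stats import spearmanr

def features_correlated_with_distance(features, distances, r_cut=0.55, p_cut=0.05):
    """Return features whose Spearman correlation with GTV-DA distance
    exceeds |r_cut| at significance p_cut (plain, not partial, correlation)."""
    hits = {}
    for name, values in features.items():
        r, p = spearmanr(values, distances)
        if abs(r) > r_cut and p < p_cut:
            hits[name] = round(r, 3)
    return hits
```

Running this per DA class (strong, weak, none) and intersecting the resulting feature sets would reproduce the kind of grouping shown in the Venn diagram.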