Varun Singh, Varun Danda, Richard Gorniak, Adam Flanders, Paras Lakhani.
Abstract
To assess the efficacy of deep convolutional neural networks (DCNNs) in detecting critical enteric feeding tube malpositions on radiographs, 5475 de-identified, HIPAA-compliant frontal-view chest and abdominal radiographs were obtained, consisting of 174 radiographs of bronchial insertions and 5301 non-critical radiographs (normal course, normal chest, and normal abdomen). Ground-truth classification of enteric feeding tube placement was performed by two board-certified radiologists. Untrained and pretrained models of Inception V3, ResNet50, and DenseNet121 were each employed, implemented in the TensorFlow framework. Images were split into training (4745), validation (630), and test (100) sets. Both real-time and preprocessing image augmentation strategies were used. Models were assessed by receiver operating characteristic (ROC) analysis and area under the curve (AUC) on the test data; statistical differences among the AUCs were computed, with p < 0.05 considered statistically significant. The pretrained Inception V3 (AUC 0.87; 95% CI 0.80–0.94) performed statistically significantly better (p < .001) than the untrained Inception V3 (AUC 0.60; 95% CI 0.52–0.68), and had the highest AUC overall compared with ResNet50 and DenseNet121 (AUCs ranging from 0.82 to 0.85). Each pretrained network outperformed its untrained counterpart (p < 0.05). Deep learning demonstrates promise in differentiating critical from non-critical placement (AUC 0.87), pretrained networks outperformed untrained ones in all cases, and DCNNs may allow more rapid identification and communication of critical feeding tube malpositions.
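The pretrained-versus-untrained comparison the abstract describes could be set up along these lines in TensorFlow/Keras; this is a minimal sketch, not the authors' code. The function name, input size (Inception V3's default 299×299), and the 1-unit sigmoid head for the binary critical/non-critical output are illustrative assumptions:

```python
import tensorflow as tf

def build_feeding_tube_classifier(pretrained: bool = True) -> tf.keras.Model:
    """Binary classifier: critical (bronchial) vs. non-critical tube position.

    pretrained=True loads ImageNet weights (the "pretrained" arm of the study);
    pretrained=False yields a randomly initialised ("untrained") baseline.
    """
    base = tf.keras.applications.InceptionV3(
        weights="imagenet" if pretrained else None,
        include_top=False,              # drop the 1000-class ImageNet head
        input_shape=(299, 299, 3),
        pooling="avg",                  # global average pooling after the last conv block
    )
    out = tf.keras.layers.Dense(1, activation="sigmoid")(base.output)
    model = tf.keras.Model(base.input, out)
    model.compile(
        optimizer="adam",
        loss="binary_crossentropy",
        metrics=[tf.keras.metrics.AUC(name="auc")],
    )
    return model
```

Swapping `InceptionV3` for `tf.keras.applications.ResNet50` or `DenseNet121` (with their respective default input sizes) reproduces the other two arms of the comparison.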
Keywords: Artificial intelligence; Chest radiography; Deep learning; Machine learning
Year: 2019 PMID: 31073816 PMCID: PMC6646608 DOI: 10.1007/s10278-019-00229-9
Source DB: PubMed Journal: J Digit Imaging ISSN: 0897-1889 Impact factor: 4.056
Results
| Network | Naive AUC | Pretrained AUC | Significance | Sensitivity (pretrained), % | Specificity (pretrained), % |
|---|---|---|---|---|---|
| Inception V3 | 0.60 (0.52–0.68) | 0.87 (0.80–0.94) | p < .001 | 88 (76–95) | 76 (62–87) |
| ResNet50 | 0.60 (0.48–0.71) | 0.82 (0.75–0.89) | p < 0.05 | 100 (93–100) | 62 (47–75) |
| DenseNet121 | 0.51 (0.45–0.58) | 0.85 (0.77–0.92) | p < 0.05 | 92 (81–98) | 74 (60–85) |
Numbers in parentheses represent the 95% confidence interval
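The paper does not state how the 95% confidence intervals on AUC were computed; a common approach is bootstrap resampling of the test set. A plain-NumPy sketch, with `auc_with_ci` as a hypothetical helper name:

```python
import numpy as np

def auc_with_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """AUC (Mann-Whitney U estimator) with a bootstrap (1 - alpha) CI."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)

    def auc(t, s):
        # Fraction of (positive, negative) pairs ranked correctly; ties count 0.5.
        pos, neg = s[t == 1], s[t == 0]
        diff = pos[:, None] - neg[None, :]
        return (diff > 0).mean() + 0.5 * (diff == 0).mean()

    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = []
    while len(stats) < n_boot:
        idx = rng.integers(0, n, n)
        if y_true[idx].min() == y_true[idx].max():
            continue  # resample must contain both classes for AUC to be defined
        stats.append(auc(y_true[idx], y_score[idx]))
    lo, hi = np.percentile(stats, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return auc(y_true, y_score), (lo, hi)
```

With only 174 critical cases against 5301 non-critical ones, a percentile bootstrap of this kind is a reasonable way to reflect the class imbalance in the interval width.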
Fig. 1 Left bronchial insertion (left) and right bronchial insertion (right)
Fig. 2 Normal tube courses with tip out of view (left) and in the duodenum (right)
Fig. 3 Tube placement in the stomach (left) and esophagus (right)
Fig. 4 Class activation maps (CAMs) of correct class predictions. a Left bronchus. b Tip out of view. c Duodenum
Fig. 5 Class activation maps (CAMs) of incorrect class predictions. a Right bronchus. b Tip out of view
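Figures 4 and 5 visualise model attention with class activation maps. The record does not specify which CAM variant was used; Grad-CAM is one common way to produce such maps for an arbitrary Keras classifier. A sketch assuming a model ending in a 1-unit sigmoid, with `grad_cam` and its arguments as illustrative names:

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, conv_layer_name):
    """Grad-CAM heat map for a binary classifier.

    `image` is a single preprocessed array of shape (H, W, C). Returns a
    2-D map over the named convolutional layer's grid, normalised to [0, 1].
    """
    conv_layer = model.get_layer(conv_layer_name)
    # Model that exposes both the conv feature maps and the final prediction.
    grad_model = tf.keras.Model(model.input, [conv_layer.output, model.output])
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        conv_out, pred = grad_model(x)
        score = pred[:, 0]  # probability of the "critical" class
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))         # global-average-pool the gradients
    cam = tf.einsum("bhwc,bc->bhw", conv_out, weights)[0]
    cam = tf.nn.relu(cam)                                # keep positive evidence only
    cam = cam / (tf.reduce_max(cam) + 1e-8)
    return cam.numpy()
```

Upsampling the returned map to the radiograph's resolution and overlaying it yields figures like Figs. 4 and 5.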