Mark Ren 1, Paul H Yi 2,3,4. 1. The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA. 2. The Russell H. Morgan Department of Radiology and Radiological Science, Johns Hopkins University School of Medicine, Baltimore, MD, USA. pyi10@jhmi.edu. 3. University of Maryland Intelligent Imaging Center, Department of Radiology, University of Maryland School of Medicine, Baltimore, MD, USA. 4. Malone Center for Engineering in Healthcare, Johns Hopkins University, Baltimore, MD, USA.
Abstract
OBJECTIVE: To develop and evaluate a two-stage deep convolutional neural network system that mimics a radiologist's search pattern for detecting two small fractures: triquetral avulsion fractures and Segond fractures.

MATERIALS AND METHODS: We obtained 231 lateral wrist radiographs and 173 anteroposterior knee radiographs from the Stanford MURA and LERA datasets and the public domain to train and validate a two-stage deep convolutional neural network system: (1) object detectors that crop the dorsal triquetrum or lateral tibial condyle, trained on control images, followed by (2) classifiers for triquetral and Segond fractures, trained on a 1:1 case:control split. A second set of classifiers was trained on uncropped images for comparison. External test sets of 50 lateral wrist radiographs and 24 anteroposterior knee radiographs were used to evaluate generalizability. Gradient-weighted class activation mapping (Grad-CAM) was used to inspect the image regions with the greatest influence on the final classification.

RESULTS: The object detectors accurately cropped the regions of interest in all validation and test images. The two-stage system achieved cross-validated area under the receiver operating characteristic curve values of 0.959 and 0.989 for triquetral and Segond fractures, compared with 0.860 (p = 0.0086) and 0.909 (p = 0.0074), respectively, for a one-stage classifier. Two-stage cross-validation accuracies were 90.8% and 92.5% for triquetral and Segond fractures, respectively.

CONCLUSION: A two-stage pipeline increases accuracy in the detection of subtle fractures on radiographs compared with a one-stage classifier and generalized well to external test data. Focusing attention on specific image regions appears to improve detection of subtle findings that may otherwise be missed.
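The detect-then-classify design described above can be sketched in miniature. This is an illustrative toy, not the authors' code: `detect_roi` stands in for the stage-1 object detector (which in the paper localizes the dorsal triquetrum or lateral tibial condyle), and `classify_fracture` stands in for the stage-2 CNN classifier; both function names and the intensity-based heuristics are invented for the sketch.

```python
# Minimal sketch of a two-stage "search pattern" pipeline:
# stage 1 localizes a region of interest, stage 2 classifies the crop.
# All names and heuristics here are illustrative stand-ins.
from dataclasses import dataclass
from typing import List

@dataclass
class BBox:
    x0: int
    y0: int
    x1: int
    y1: int

def detect_roi(image: List[List[float]]) -> BBox:
    """Stand-in for the stage-1 detector: returns a small box around
    the brightest pixel as the 'region of interest'."""
    h, w = len(image), len(image[0])
    _, r, c = max((image[r][c], r, c) for r in range(h) for c in range(w))
    half = 1
    return BBox(max(0, c - half), max(0, r - half),
                min(w, c + half + 1), min(h, r + half + 1))

def crop(image: List[List[float]], box: BBox) -> List[List[float]]:
    return [row[box.x0:box.x1] for row in image[box.y0:box.y1]]

def classify_fracture(patch: List[List[float]]) -> float:
    """Stand-in for the stage-2 classifier: a 'fracture score' from
    mean patch intensity (a real system would run a trained CNN)."""
    vals = [v for row in patch for v in row]
    return sum(vals) / len(vals)

def two_stage_predict(image: List[List[float]]) -> float:
    """Crop the ROI first, then classify only the crop."""
    return classify_fracture(crop(image, detect_roi(image)))
```

Because the subtle finding occupies a tiny fraction of the radiograph, scoring only the crop concentrates the signal: for a 5x5 image with one bright pixel, `two_stage_predict` yields a higher score than running `classify_fracture` on the uncropped image, mirroring the paper's comparison of two-stage versus one-stage classifiers.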