Hidemasa Takao1, Shiori Amemiya2, Shimpei Kato2, Hiroshi Yamashita3, Naoya Sakamoto2, Osamu Abe2. 1. Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan. takaoh-tky@umin.ac.jp. 2. Department of Radiology, Graduate School of Medicine, University of Tokyo, 7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8655, Japan. 3. Department of Radiology, Teikyo University Hospital, Mizonokuchi, 5-1-1 Futago, Takatsu-ku, Kawasaki, Kanagawa, 213-8507, Japan.
Abstract
PURPOSE: This study aims to develop a 2.5-dimensional (2.5D) deep-learning object detection model for the automated detection of brain metastases, in which three consecutive slices are fed as the input for prediction on the central slice, and to compare its performance with that of an ordinary 2-dimensional (2D) model.
METHODS: We analyzed 696 brain metastases on 127 contrast-enhanced computed tomography (CT) scans from 127 patients with brain metastases. The scans were randomly divided into training (n = 79), validation (n = 18), and test (n = 30) datasets. Single-shot detector (SSD) models with a feature fusion module were constructed, trained, and compared using the lesion-based sensitivity, positive predictive value (PPV), and the number of false positives per patient at a confidence threshold of 50%.
RESULTS: The 2.5D SSD model had a significantly higher PPV (t test, p < 0.001) and a significantly smaller number of false positives (t test, p < 0.001). The sensitivities of the 2D and 2.5D models were 88.1% (95% confidence interval [CI], 86.6-89.6%) and 88.7% (95% CI, 87.3-90.1%), respectively. The corresponding PPVs were 39.0% (95% CI, 36.5-41.4%) and 58.9% (95% CI, 55.2-62.7%), respectively. The numbers of false positives per patient were 11.9 (95% CI, 10.7-13.2) and 4.9 (95% CI, 4.2-5.7), respectively.
CONCLUSION: Our results indicate that 2.5D deep-learning object detection models, which use information about the continuity between adjacent slices, may reduce false positives and improve the performance of automated detection of brain metastases compared with ordinary 2D models.
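The core of the 2.5D approach described above is that each prediction target (the central slice) is presented to the network together with its two neighboring slices, typically stacked along the channel axis. The paper does not publish its preprocessing code; the sketch below is a minimal NumPy illustration of this slice-stacking step, with replication padding at the volume boundaries as one plausible (assumed, not stated) way to handle the first and last slices.

```python
import numpy as np

def make_25d_inputs(volume):
    """Convert a CT volume (n_slices, H, W) into 2.5D inputs (n_slices, 3, H, W).

    For each slice i, channels are [slice i-1, slice i, slice i+1].
    Boundary slices are handled by replicating the first/last slice
    (an assumption for illustration; the paper does not specify this).
    """
    padded = np.concatenate([volume[:1], volume, volume[-1:]], axis=0)
    return np.stack([padded[i:i + 3] for i in range(volume.shape[0])], axis=0)

# Toy example: a 5-slice volume of 4x4 pixels.
vol = np.arange(5 * 4 * 4, dtype=np.float32).reshape(5, 4, 4)
x = make_25d_inputs(vol)
print(x.shape)  # (5, 3, 4, 4): one 3-channel input per slice
```

An ordinary 2D model would instead receive each slice alone (shape `(n_slices, 1, H, W)`); the 2.5D variant changes only the input tensor, so the same SSD backbone can consume either by adjusting the first convolution's input channels.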
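The evaluation metrics reported in the abstract (lesion-based sensitivity, PPV, and false positives per patient) follow standard definitions from detection counts. A small sketch, using hypothetical counts for illustration only:

```python
def detection_metrics(tp, fp, fn, n_patients):
    """Lesion-based detection metrics from raw counts.

    tp/fp/fn are lesion-level true positives, false positives, and
    false negatives; n_patients is the number of evaluated patients.
    """
    sensitivity = tp / (tp + fn)        # fraction of true lesions detected
    ppv = tp / (tp + fp)                # fraction of detections that are real
    fp_per_patient = fp / n_patients    # average false alarms per patient
    return sensitivity, ppv, fp_per_patient

# Hypothetical counts (not from the paper), just to show the formulas:
sens, ppv, fpp = detection_metrics(tp=88, fp=12, fn=12, n_patients=10)
print(sens, ppv, fpp)  # 0.88 0.88 1.2
```

Note how sensitivity is insensitive to false positives: this is why the 2D and 2.5D models can have nearly identical sensitivity (88.1% vs. 88.7%) while differing sharply in PPV (39.0% vs. 58.9%) and false positives per patient (11.9 vs. 4.9).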