Qing-Qing Zhou1, Wen Tang2, Jiashuo Wang3, Zhang-Chun Hu1, Zi-Yi Xia1, Rongguo Zhang2, Xinyi Fan2, Wei Yong4, Xindao Yin4, Bing Zhang5, Hong Zhang6. 1. Department of Radiology, The Affiliated Jiangning Hospital of Nanjing Medical University, No. 168, Gushan Road, Nanjing, 211100, Jiangsu Province, China. 2. Institute of Advanced Research, Beijing Infervision Technology Co., Ltd., Yuanyang International Center, Beijing, 100025, China. 3. Research Center of Biostatistics and Computational Pharmacy, China Pharmaceutical University, No. 639, Long Mian Avenue, Nanjing, 211198, Jiangsu Province, China. 4. Department of Radiology, Nanjing First Hospital, Nanjing Medical University, No. 68, Changle Road, Nanjing, 210006, China. 5. Department of Radiology, the Affiliated Nanjing Drum Tower Hospital of Nanjing University Medical School, Nanjing, 210008, China. 6. Department of Radiology, The Affiliated Jiangning Hospital of Nanjing Medical University, No. 168, Gushan Road, Nanjing, 211100, Jiangsu Province, China. jnyyfsk@126.com.
Abstract
OBJECTIVE: To develop a convolutional neural network (CNN) model for the automatic detection and classification of rib fractures in actual clinical practice based on cross-modal data (clinical information and CT images). MATERIALS AND METHODS: In this retrospective study, CT images and clinical information (age, sex, and medical history) from 1020 participants were collected and divided into a single-centre training set (n = 760; age: 55.8 ± 13.4 years; men: 500), a single-centre testing set (n = 134; age: 53.1 ± 14.3 years; men: 90), and two independent multicentre testing sets from two different hospitals (n = 62; age: 57.97 ± 11.88 years; men: 41; and n = 64; age: 57.40 ± 13.36 years; men: 35). A Faster Region-based CNN (Faster R-CNN) model was applied to integrate CT images and clinical information. A result merging technique was then used to convert 2D inferences into 3D lesion results. The diagnostic performance was assessed on the basis of the receiver operating characteristic (ROC) curve, free-response ROC (fROC) curve, precision, recall (sensitivity), F1-score, and diagnosis time. The classification performance was evaluated in terms of the area under the ROC curve (AUC), sensitivity, and specificity. RESULTS: The CNN model detected fresh, healing, and old fractures better and classified all three categories well when both clinical information and CT images were used rather than CT images alone. Compared with experienced radiologists, the CNN model achieved higher sensitivity (mean sensitivity: 0.95 vs 0.77, 0.89 vs 0.61, and 0.80 vs 0.55), comparable or higher precision (mean precision: 0.91 vs 0.87, 0.84 vs 0.77, and 0.95 vs 0.70), and a shorter diagnosis time (average reduction of 126.15 s). CONCLUSIONS: A CNN model combining CT images and clinical information can automatically detect and classify rib fractures with good performance and feasibility in actual clinical practice.
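The abstract's "result merging technique" for converting per-slice 2D inferences into 3D lesion results is not described in detail. Below is a minimal sketch of one common approach: linking detection boxes on adjacent slices by intersection-over-union (IoU) overlap so that each linked chain forms one 3D lesion. The function names, the IoU threshold, and the one-slice adjacency rule are illustrative assumptions, not the authors' implementation.

```python
def iou(a, b):
    # IoU of two axis-aligned 2D boxes, each given as (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def merge_2d_to_3d(dets, iou_thr=0.3):
    """dets: list of (slice_index, box) pairs from per-slice 2D inference.
    Returns a list of 3D lesions, each a list of (slice_index, box)."""
    lesions = []
    for z, box in sorted(dets):
        for lesion in lesions:
            last_z, last_box = lesion[-1]
            # Attach the box to a lesion whose latest box sits on the
            # immediately preceding slice and overlaps sufficiently.
            if z - last_z == 1 and iou(box, last_box) >= iou_thr:
                lesion.append((z, box))
                break
        else:
            lesions.append([(z, box)])  # start a new 3D lesion
    return lesions
```

For example, overlapping boxes on slices 1 and 2 merge into one lesion, while a distant box on slice 5 starts another, so the detector's slice-wise output can be counted per lesion rather than per slice.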
KEY POINTS: • The developed convolutional neural network (CNN) model detected fresh, healing, and old fractures better and classified all three categories well when both clinical information and CT images were used rather than CT images alone. • In actual clinical practice, the CNN model achieved higher sensitivity and comparable precision in all three categories than experienced radiologists, with a shorter diagnosis time.
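The detection metrics reported above (precision, recall/sensitivity, and F1-score) follow from per-lesion true-positive, false-positive, and false-negative counts in the standard way; a minimal sketch, with the counts themselves assumed for illustration:

```python
def detection_metrics(tp, fp, fn):
    # precision = TP / (TP + FP): fraction of predicted fractures that are real.
    # recall (sensitivity) = TP / (TP + FN): fraction of real fractures found.
    # F1 is the harmonic mean of precision and recall.
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1
```

With 90 true positives, 10 false positives, and 10 missed fractures, precision, recall, and F1 are all 0.90, which is how per-category figures like the 0.95 sensitivity quoted above would be computed from lesion-level counts.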