| Literature DB >> 35746310 |
Yu Jin Seol 1, So Hyun Park 2, Young Jae Kim 3, Young-Taek Park 4, Hee Young Lee 2, Kwang Gi Kim 3,5.
Abstract
This paper proposes an automatic rib sequence labeling system for chest computed tomography (CT) images, comparing two suggested methods combined with three-dimensional (3D) region growing. In clinical practice, radiologists usually define anatomical terms of location by rib number. Labeling the 12 pairs of ribs and counting their sequence is a manual process, so radiologists must refer to these annotations every time they read a chest CT. The process is tedious, repetitive, and time-consuming, and the demand for chest CT-based medical readings has increased. To handle the task efficiently, we proposed an automatic rib sequence labeling system and performed a comparative analysis of two methods. With 50 collected chest CT images, we implemented intensity-based image processing (IIP) and a convolutional neural network (CNN) for rib segmentation in this system. Additionally, 3D region growing was used to classify each rib and assign its sequence label. The IIP-based method achieved a 92.0% success rate and the CNN-based method a 98.0% success rate, defined as the rate of labeling the appropriate rib sequence over all pairs (1st to 12th) for all slices. We hope this efficient automatic rib sequence labeling system will prove applicable in clinical diagnostic environments.
Keywords: artificial intelligence; image processing; ribs; three-dimensional region growing
Year: 2022 PMID: 35746310 PMCID: PMC9230858 DOI: 10.3390/s22124530
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Flowchart of the rib sequence labeling processes using the IIP-based and CNN-based methods (IIP, intensity-based image processing; CNN, convolutional neural network).
Figure 2. Results of image processing in the intensity-based image processing (IIP)-based method.
Figure 3. The architecture of U-Net.
Figure 4. The principle of 3D region growing using a 6-neighborhood.
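The 6-neighborhood region growing in Figure 4 can be sketched as a breadth-first flood fill over a binary volume, where each voxel connects only to its six face-adjacent neighbors (no diagonals). This is a minimal illustration under assumed conventions (a boolean `(z, y, x)` volume and a queue-based traversal), not the paper's implementation:

```python
from collections import deque

import numpy as np

# 6-neighborhood: the six face-adjacent offsets (±z, ±y, ±x), no diagonals.
NEIGHBORS_6 = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]


def region_grow_3d(mask, seed):
    """Grow a region from `seed` through a binary volume using 6-connectivity."""
    region = np.zeros_like(mask, dtype=bool)
    if not mask[seed]:
        return region  # seed is background; nothing to grow
    queue = deque([seed])
    region[seed] = True
    while queue:
        z, y, x = queue.popleft()
        for dz, dy, dx in NEIGHBORS_6:
            nz, ny, nx = z + dz, y + dy, x + dx
            if (0 <= nz < mask.shape[0] and 0 <= ny < mask.shape[1]
                    and 0 <= nx < mask.shape[2]
                    and mask[nz, ny, nx] and not region[nz, ny, nx]):
                region[nz, ny, nx] = True
                queue.append((nz, ny, nx))
    return region
```

Growing from one seed per rib isolates that rib's voxels from the rest of the segmentation mask, which is what allows each rib to receive its own sequence label.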
Table 1. Mean values of U-Net performance in reconstructing the rib regions of interest (DSC, Dice similarity coefficient; CI, confidence interval).

| U-Net (CNN) | Recall (%) | Precision (%) | Specificity (%) | Accuracy (%) | DSC |
|---|---|---|---|---|---|
| Average | 91.99 | 90.61 | 98.33 | 97.91 | 0.89 |
| (95% CI) | (90.83–93.15) | (89.26–91.96) | (97.87–98.79) | (97.42–98.40) | (0.87–0.91) |
| Min | 90.23 | 88.91 | 97.88 | 97.44 | 0.87 |
| Max | 93.31 | 91.87 | 98.91 | 98.82 | 0.92 |
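The recall, precision, and DSC values reported above follow the standard confusion-matrix definitions for binary segmentation masks, with DSC = 2·TP / (2·TP + FP + FN). A minimal sketch (the function name is illustrative, not from the paper):

```python
import numpy as np


def segmentation_metrics(pred, truth):
    """Recall, precision, and Dice similarity coefficient for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()   # true positives
    fp = np.logical_and(pred, ~truth).sum()  # false positives
    fn = np.logical_and(~pred, truth).sum()  # false negatives
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    dsc = 2 * tp / (2 * tp + fp + fn)
    return recall, precision, dsc
```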
Table 2. Comparison of the sequence labeling systems based on IIP and CNN in terms of successful labeling rate (p-value, probability value).

| Labeling | No. successful (out of 50 cases in total) | Successful sequence labeling rate on all ribs (%) |
|---|---|---|
| IIP-based | 46/50 | 92.0 |
| CNN-based | 49/50 | 98.0 |
Figure 5. Results of sequence labeling: annotations for sequence labels with coloring (top) and boxes (bottom), including nearby number labels (1–12) indicating the order of the ribs from the top of the upper body (CNN, convolutional neural network).
Figure 6. Verified results of sequence labeling on 3D-rendered rib models with colored annotations.
Figure 7. Causes of errors in the intensity-based image processing (IIP)-based and convolutional neural network (CNN)-based methods.
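The sequence labels shown in Figures 5 and 6 run from the uppermost rib downward. Assuming each rib has already been separated into its own component (e.g., by 3D region growing), the ordering step can be sketched as sorting components by the topmost axial slice in which they appear. The function name and the `(z, y, x)` integer-label volume layout are assumptions for illustration, not the paper's code:

```python
import numpy as np


def order_ribs_by_height(labeled):
    """Map each rib component label to a sequence number, uppermost rib first.

    `labeled` is an integer volume of shape (z, y, x) in which each rib
    component carries a distinct positive label and background is 0.
    """
    labels = [l for l in np.unique(labeled) if l != 0]
    # Topmost slice index (smallest z) at which each component appears.
    top_slice = {l: int(np.argwhere(labeled == l)[:, 0].min()) for l in labels}
    ordered = sorted(labels, key=lambda l: top_slice[l])
    # Sequence number 1 is the uppermost rib, matching the 1–12 labels.
    return {l: i + 1 for i, l in enumerate(ordered)}
```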