Azrin Khan1,2, Rachael Garner1, Marianna La Rocca1,3, Sana Salehi1, Dominique Duncan1.
Abstract
Since December 2019, the novel coronavirus disease 2019 (COVID-19) has claimed the lives of more than 3.75 million people worldwide. Consequently, methods for accurate COVID-19 diagnosis and classification are necessary to facilitate rapid patient care and curb viral spread. Lung infection segmentations are useful to identify unique infection patterns that may support rapid diagnosis, severity assessment, and patient prognosis prediction, but manual segmentations are time-consuming and depend on radiologic expertise. Deep learning-based methods have been explored to reduce the burden of segmentation; however, their accuracy is limited by the lack of large, publicly available annotated datasets that are required to establish ground truths. For these reasons, we propose a semi-automatic, threshold-based segmentation method to generate region of interest (ROI) segmentations of infection visible on lung computed tomography (CT) scans. Infection masks are then used to calculate the percentage of lung abnormality (PLA) to determine COVID-19 severity and to analyze disease progression in follow-up CTs. Compared with other COVID-19 ROI segmentation methods, on average, the proposed method achieved improved precision (47.49%) and specificity (98.40%) scores. Furthermore, the proposed method generated PLAs with a difference of ±3.89% from the ground-truth PLAs. The improved ROI segmentation results suggest that the proposed method has potential to assist radiologists in assessing infection severity and analyzing disease progression in follow-up CTs.
Year: 2022 PMID: 35371333 PMCID: PMC8958480 DOI: 10.1007/s11760-022-02183-6
Source DB: PubMed Journal: Signal Image Video Process ISSN: 1863-1703 Impact factor: 2.157
Fig. 1 Diagram of the pipeline: first, a preprocessed lung segmentation of a CT image is input, and Yen thresholding is applied to the preprocessed input image. The lung mask of the CT image is then used to compute the percentage of lung abnormality (PLA). If the PLA exceeds the cutoff, IJ Isodata thresholding is applied to the lung segmentation image instead; if the PLA is below the cutoff, the Yen-thresholded binary image is used for the next steps. Finally, the Region Adjacency Graph method and gray-level binary thresholding are applied to generate the ROI segmentation
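The thresholding and fallback logic described in the caption can be sketched with the scikit-image implementations of Yen and Isodata thresholding. This is a minimal illustration, not the authors' code: the function names, the `pla_cutoff` value, and the exact switching criterion are assumptions, since the paper's cutoff is not given here.

```python
# Sketch of the pipeline's thresholding step (illustrative only).
# Assumes a preprocessed lung-segmented CT slice and its lung mask
# as NumPy arrays; pla_cutoff is a placeholder, not the paper's value.
import numpy as np
from skimage.filters import threshold_yen, threshold_isodata

def percentage_lung_abnormality(infection_mask, lung_mask):
    """PLA = infected pixels / lung pixels * 100."""
    return 100.0 * infection_mask.sum() / (lung_mask > 0).sum()

def segment_infection(lung_ct, lung_mask, pla_cutoff=50.0):
    """Yen thresholding first; fall back to Isodata when the resulting
    PLA is implausibly high (the switching rule here is assumed)."""
    vals = lung_ct[lung_mask > 0]          # intensities inside the lung
    t = threshold_yen(vals)
    binary = (lung_ct > t) & (lung_mask > 0)
    pla = percentage_lung_abnormality(binary, lung_mask)
    if pla > pla_cutoff:                   # assumed criterion
        t = threshold_isodata(vals)        # ImageJ-style Isodata
        binary = (lung_ct > t) & (lung_mask > 0)
        pla = percentage_lung_abnormality(binary, lung_mask)
    return binary, pla
```

The ROI refinement steps that follow in the pipeline (Region Adjacency Graph merging and gray-level binary thresholding) are omitted here for brevity.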
Comparison of thresholding methods using ImageJ
| Image type | Method | Threshold value | PLA (%) |
|---|---|---|---|
| Image type I | Huang | 6 | 65.73 |
| | Ridler | 87 | 1.95 |
| | Li | 14 | 65.73 |
| | Kapur | 97 | 1.606 |
| | Otsu | 87 | 1.95 |
| | Yen | 86 | 1.53 |
| | IJ Isodata | 117 | 2.04 |
| Image type II | Huang | 11 | 28.35 |
| | Ridler | 51 | 28.35 |
| | Li | 13 | 28.35 |
| | Kapur | 162 | 28.35 |
| | Otsu | 51 | 28.35 |
| | Yen | 1 | 28.35 |
| | IJ Isodata | 90 | 3.24 |
Fig. 2Illustration of the performance of seven thresholding methods on image types I and II with threshold-based binary images in the first row and ROI segmentation images in the second row. On the far right column is the corresponding ground truth ROI segmentation
Evaluation metrics (%)
| Methods | Dice | MCC | Prec. | Sen. | Spec. |
|---|---|---|---|---|---|
| Inf-Net | 37.50 | 38.32 | 37.85 | 43.61 | 97.09 |
| Semi-Inf-Net | 40.01 | 75.99 | 42.47 | 41.19 | 97.88 |
| MiniSeg | 39.58 | 40.34 | 36.80 | 51.25 | 96.00 |
| nCoVSegNet | 30.30 | 32.57 | 36.91 | 31.20 | 98.37 |
| Oulefki | 48.16 | 51.59 | 46.90 | 55.25 | 98.27 |
| Our Method | 46.28 | 47.53 | 47.49 | 53.13 | 98.40 |
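The five metrics in the table are standard binary-segmentation scores and can all be derived from the confusion matrix between a predicted mask and the ground truth. A minimal sketch (the function name is illustrative, not from the paper):

```python
# Dice, Matthews correlation coefficient (MCC), precision,
# sensitivity, and specificity from two binary masks.
import numpy as np

def segmentation_metrics(pred, gt):
    pred = np.asarray(pred, dtype=bool).ravel()
    gt = np.asarray(gt, dtype=bool).ravel()
    tp = int(np.sum(pred & gt))
    fp = int(np.sum(pred & ~gt))
    fn = int(np.sum(~pred & gt))
    tn = int(np.sum(~pred & ~gt))
    dice = 2 * tp / (2 * tp + fp + fn)
    prec = tp / (tp + fp)
    sen = tp / (tp + fn)   # sensitivity / recall
    spec = tn / (tn + fp)
    mcc = (tp * tn - fp * fn) / np.sqrt(
        float(tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return {"Dice": dice, "MCC": mcc, "Prec.": prec,
            "Sen.": sen, "Spec.": spec}
```

Note that specificity is dominated by the large true-negative background in lung CT, which is why all methods in the table score above 96% on it while Dice and MCC remain far lower.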
Fig. 3Comparison of ROI segmentation results
Fig. 4Graph with percentage of lung abnormality computed with ground truth segmentation images (blue) and segmentation images generated using the proposed method (orange) (Color figure online)
Fig. 5Display of three representative patients from the dataset. The first column shows a lung CT image and the second column shows the lung segmentations used to generate the ROI segmentations in the third column