Sania Shamim, Mazhar Javed Awan, Azlan Mohd Zain, Usman Naseem, Mazin Abed Mohammed, Begonya Garcia-Zapirain.
Abstract
The coronavirus (COVID-19) pandemic has had a devastating impact on human lives globally, with far-reaching consequences for the health and well-being of people around the world. As of 10 January 2022, 305.9 million people worldwide had tested positive for COVID-19, and 5.48 million had died from it. CT scans can serve as an alternative to time-consuming RT-PCR testing for COVID-19. This work proposes a segmentation approach for identifying ground glass opacity (GGO), the region of interest (ROI), in CT images of COVID-19 patients, using a modified Unet structure to classify the region of interest at the pixel level. Segmentation is difficult because GGO often appears indistinguishable from healthy lung tissue in the initial stages of COVID-19. To cope with this, the set of weights in the contracting and expanding Unet paths is increased, and an improved convolutional module is added to establish the connection between the encoder and decoder pipelines. The resulting model, referred to as "convUnet," has a substantially greater capacity to segment GGO in COVID-19 cases. Experiments on the Medseg1 dataset show that adding a set of weights at each layer of the model and modifying the connecting module in Unet improve the overall segmentation results. The quantitative results for accuracy, recall, precision, dice-coefficient, F1 score, and IoU were 93.29%, 93.01%, 93.67%, 92.46%, 93.34%, and 86.96%, respectively, which is better than Unet and other state-of-the-art models. This segmentation approach therefore proved more accurate, fast, and reliable in helping doctors diagnose COVID-19 quickly and efficiently.
Year: 2022 PMID: 35422980 PMCID: PMC9002904 DOI: 10.1155/2022/6566982
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 2.682
Figure 1: The number of each mask-class segmentation in the dataset.
Figure 2: Sample dataset images.
Figure 3: Architecture of the proposed network.
Figure 4: Encoder flow diagram.
Figure 5: Encoder-decoder and concatenated skip-connection flow.
Figure 6: Encoder-decoder connected layer block.
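The encoder-bridge-decoder pattern with a concatenated skip connection that Figures 3-6 describe can be illustrated with a minimal PyTorch sketch. All layer sizes, channel counts, and the `MiniConvUnet` name below are illustrative assumptions for a two-level toy model, not the published convUnet architecture:

```python
import torch
import torch.nn as nn

class MiniConvUnet(nn.Module):
    """Toy encoder-bridge-decoder network with one skip connection."""

    def __init__(self, in_ch=1, base=8):
        super().__init__()
        self.enc1 = nn.Sequential(
            nn.Conv2d(in_ch, base, 3, padding=1), nn.BatchNorm2d(base), nn.ReLU())
        self.pool = nn.MaxPool2d(2)
        self.enc2 = nn.Sequential(
            nn.Conv2d(base, base * 2, 3, padding=1), nn.BatchNorm2d(base * 2), nn.ReLU())
        # Extra convolutional module linking the encoder and decoder pipelines
        self.bridge = nn.Sequential(
            nn.Conv2d(base * 2, base * 2, 3, padding=1), nn.BatchNorm2d(base * 2), nn.ReLU())
        self.up = nn.ConvTranspose2d(base * 2, base, 2, stride=2)
        self.dec1 = nn.Sequential(nn.Conv2d(base * 2, base, 3, padding=1), nn.ReLU())
        self.head = nn.Conv2d(base, 1, 1)

    def forward(self, x):
        e1 = self.enc1(x)                         # full-resolution features
        e2 = self.enc2(self.pool(e1))             # downsampled features
        b = self.bridge(e2)                       # connected module between encoder and decoder
        d = self.up(b)                            # upsample back to input resolution
        d = self.dec1(torch.cat([d, e1], dim=1))  # concatenated skip connection
        return torch.sigmoid(self.head(d))        # per-pixel GGO probability

model = MiniConvUnet().eval()
with torch.no_grad():
    out = model(torch.zeros(1, 1, 32, 32))  # output keeps the input spatial size
```

The skip concatenation is what lets fine spatial detail from the contracting path reach the expanding path, which matters when GGO boundaries are faint.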
Parameters of the conventional Unet and the improved convUnet model.
| Method | Conv-layers | E-D module weight | Batch normalization | Total parameters | Trainable parameters | Optimizer | Learning rate |
|---|---|---|---|---|---|---|---|
| Unet | 16 | 2 | No | 31,030,788 | 31,030,788 | Adam | 1 |
| convUnet | 24 | 3 | Yes | 46,773,124 | 46,755,460 | Adam | 1 |
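The total- and trainable-parameter columns in the table come from summing per-layer weight counts; for a standard 2-D convolution the count is (k_h x k_w x C_in + 1) x C_out when biases are included. A quick sketch (the layer shapes below are illustrative, not the actual convUnet configuration):

```python
def conv2d_params(k_h, k_w, c_in, c_out, bias=True):
    """Parameter count of one Conv2d layer:
    one k_h x k_w x c_in kernel (plus optional bias) per output channel."""
    return (k_h * k_w * c_in + (1 if bias else 0)) * c_out

# Example: a 3x3 convolution mapping 64 -> 128 channels
single = conv2d_params(3, 3, 64, 128)  # (9*64 + 1) * 128 = 73856

# Summing a small hypothetical encoder stack
layers = [(3, 3, 1, 64), (3, 3, 64, 64), (3, 3, 64, 128)]
total = sum(conv2d_params(*shape) for shape in layers)
```

The gap between total and trainable parameters in convUnet (46,773,124 vs. 46,755,460) is the non-trainable running statistics introduced by batch normalization.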
Figure 7: Actual ground truth of ground glass opacity and predicted ground glass opacity segmentation.
Figure 8: Our segmentation model performance.
Best results obtained by the proposed model and the Unet model over 100 epochs.

| Method | IoU | Accuracy | Dice-coefficient | F1 score | Recall | Precision |
|---|---|---|---|---|---|---|
| Unet | 82.83 | 91.78 | 90.43 | 91.82 | 91.33 | 92.31 |
| convUnet (average) | 76.47 | 83.27 | 82.52 | 83.43 | 82.75 | 84.11 |
| convUnet (best) | 86.96 | 93.29 | 92.6 | 93.34 | 93.01 | 93.67 |
| Improvement in convUnet | — | — | — | — | — | — |
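All of the metrics reported in the tables can be derived from pixel-level true/false positive counts on binary masks. A minimal NumPy sketch (function name and toy masks are illustrative; percentages in the tables are these ratios times 100):

```python
import numpy as np

def seg_metrics(pred, gt):
    """Pixel-level segmentation metrics for binary masks pred and gt."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return {
        "accuracy": (tp + tn) / pred.size,
        "precision": precision,
        "recall": recall,
        "f1": 2 * precision * recall / (precision + recall),
        "dice": 2 * tp / (2 * tp + fp + fn),  # equals F1 for binary masks
        "iou": tp / (tp + fp + fn),
    }

# Toy 2x2 example: tp=1, fp=1, fn=1, tn=1
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [1, 0]])
m = seg_metrics(pred, gt)
```

Note that for binary masks the dice coefficient and F1 score are algebraically identical, which is why the two columns track each other so closely in the tables.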
Figure 9: Box plot results.
Figure 10: (a)-(h) Training and testing performance.
State-of-the-art comparison in terms of IoU, dice, recall, F1 score, precision, and accuracy.

| Source | Models | Acc | IoU | Dice | Recall | F1 score | Precision |
|---|---|---|---|---|---|---|---|
| [ ] | 3D Unet | — | — | 61.0 | 62.8 | 74.1 | — |
| [ ] | Encoder-decoder method | — | — | 78.6 | 71.1 | 78.4 | 85.6 |
| [ ] | AU-Net + FTL | — | — | 69.1 | 81.1 | — | — |
| [ ] | Multiple deep CNN | 95.23 | — | 88.0 | 90.2 | — | — |
| [ ] | Imagenet, VGG16 FCN8 | — | 60.0 | 75.0 | 92.0 | — | 63.0 |
| [ ] | DDANet | — | — | 77.89 | 88.40 | — | — |
| [ ] | ADID-Unet | 97.01 | — | 80.31 | 79.73 | 82.00 | 84.0 |
| [ ] | Semi-Inf-Net | — | — | 73.01 | 72.00 | — | — |
| [ ] | Unet | 91.78 | 82.83 | 90.43 | 91.33 | 91.82 | 92.31 |
| Ours | convUnet | 93.29 | 86.96 | 92.46 | 93.01 | 93.34 | 93.67 |