Abstract
Accurate lung tumor identification is crucial for radiation treatment planning. Because lung tumors have low contrast in computed tomography (CT) images, segmenting them is challenging. This paper integrates the U-Net with a channel attention module (CAM) to segment the malignant lung area from the surrounding chest region. The SegChaNet method encodes input lung CT slices into feature maps using a trail of encoders, and a multiscale, dense-feature extraction module then extracts multiscale features from the collection of encoded feature maps. The decoders produce the lung segmentation map, and we compare SegChaNet with the state of the art. The model learns dense-feature extraction in lung abnormalities, while iterative downsampling followed by iterative upsampling keeps the network invariant to the size of the dense abnormality. Experimental results show that the proposed method is accurate and efficient, directly providing explicit lung regions in complex circumstances without postprocessing.
Year: 2022 PMID: 35607427 PMCID: PMC9124150 DOI: 10.1155/2022/1139587
Source DB: PubMed Journal: Appl Bionics Biomech ISSN: 1176-2322 Impact factor: 1.664
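The abstract's key building block is the channel attention module (CAM) attached to the U-Net encoder-decoder. The paper's exact CAM design is not spelled out here, so the following is a minimal PyTorch sketch of a squeeze-and-excitation-style channel attention block; the reduction ratio and layer sizes are assumptions, not the paper's specification.

```python
# Minimal sketch of a squeeze-and-excitation-style channel attention
# module (CAM). Reduction ratio and layer sizes are assumed.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial context
        self.fc = nn.Sequential(             # excite: per-channel weights
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight the encoder feature maps channel-wise

# Example: reweighting a batch of encoder feature maps
feats = torch.randn(2, 64, 128, 128)
print(ChannelAttention(64)(feats).shape)  # torch.Size([2, 64, 128, 128])
```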
Figure 1. (a) Lung cancer nodule view. (b) The arrow indicates that this 1.25 mm thick CT slice contains lung nodules approximately 2 mm long.
Lung cancer: causes and prevention.
(i) Lung cancer is generally difficult to detect because specialists often cannot find the affected area until the disease has progressed to a later stage. With early intervention, survival is 54% for cancers detected before the advanced stages, but only 4% for advanced-stage cancers [

(ii) The likelihood of a lung cancer diagnosis rises in proportion to the number of cigarettes consumed and, in some cases, to alcohol consumption. As a result of these harmful habits, even a minor case of lung cancer may occur in individuals without other disease risk factors.

(iii) X-ray, CT, or MRI scans are performed to examine the lungs for cancer and to differentiate abnormal lung development. CT is the best of these techniques, yet experts can overlook findings when machine learning (ML) is not used.
Comparison with related works in the literature.
| References | Datasets | Method | Result (%) |
|---|---|---|---|
| Chaturvedi et al. (2019) [ | LUNA16 | 3D deep learning, V-Net architecture | Sensitivity: 96.5 |
| Chapaliuk et al. (2019) [ | ACDC LUNGH | VGG16, ResNet50, and CNN | Sensitivity: 97.9 |
| Petrellis et al. (2018) [ | UCI | Gaussian blur, Otsu thresholding | Sensitivity: 87 |
| Yuan et al. (2019) [ | 134 CT scans, Shandong hospital | Watershed transform | Sensitivity: 88.8 |
| Cao et al. (2016) [ | LUNA16 | 3D and 2D CNN | Precision: 87 |
| Xie et al. (2019) [ | LUNA16 | 2D CNN and RCNN | AUC: 95.4 |
| Sun et al. (2017) [ | LIDC-IDRI | CNN, deep belief network, and Boltzmann machine | Sensitivity: 82.2 |
| Huang et al. (2018) [ | LIDC-IDRI | CNN, extreme learning machine, and deep transfer learning | Sensitivity: 91.6 |
| Pehrson et al. (2021) [ | LIDC-IDRI | DL-based automated lung tumor segmentation from CT scans | Sensitivity: 91.7 |
| Sharma et al. (2011) [ | LIDC-IDRI | Diagnostic indicators | Accuracy: 80.1 |
| Akram et al. (2012) [ | LIDC-IDRI | Neuro-fuzzy | Accuracy: 95.5 |
| Paulin et al. (2011) [ | LIDC-IDRI | MLP/SVM trained with backpropagation | Accuracy: 83.6 |
| Jia et al. (2007) [ | NCA | MLP/SVM trained with backpropagation | Accuracy: 92.4 |
Distribution of CT scan slices across the dataset splits.
| Data | Number of patients | Tumor | Without tumor | Total |
|---|---|---|---|---|
| Train | 370 | 14,848 | 20,840 | 35,688 |
| Test | 90 | 4,520 | 4,740 | 9,260 |
| Validation | 50 | 850 | 900 | 1,750 |
| Total | 510 | 20,218 | 26,480 | 46,698 |
Hyperparameters used for training SegChaNet.
| Exp. | Initial learning rate (ILR) | Minibatch size |
|---|---|---|
| 1 | 1e−4 | 2 |
| 2 | 1e−4 | 8 |
| 3 | 1e−4 | 12 |
| 4 | 1e−3 | 2 |
| 5 | 1e−3 | 4 |
| 6 | 1e−3 | 8 |
| 7 | 3e−3 | 4 |
| 8 | 3e−3 | 12 |
| 9 | 3e−3 | 8 |
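The nine experiments above pair an initial learning rate (ILR) with a minibatch size. A minimal sketch of the sweep follows; `train_segchanet` is a hypothetical placeholder for the actual training routine, which the source does not describe.

```python
# The nine (ILR, minibatch size) pairs from the table above, expressed
# as a simple experiment sweep.
experiments = [
    (1e-4, 2), (1e-4, 8), (1e-4, 12),
    (1e-3, 2), (1e-3, 4), (1e-3, 8),
    (3e-3, 4), (3e-3, 12), (3e-3, 8),
]

for exp_id, (ilr, batch_size) in enumerate(experiments, start=1):
    print(f"Exp. {exp_id}: ILR={ilr:g}, minibatch={batch_size}")
    # train_segchanet(lr=ilr, batch_size=batch_size)  # hypothetical call
```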
Figure 2. U-Net architecture.
Figure 3. V-Net architecture.
Figure 4. An encoder-stage-to-decoder-stage residual connection network architecture. The encoder uses residual connections and 3D max pooling operations.
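Figure 4 describes encoder stages that combine residual connections with 3D max pooling. Below is a minimal PyTorch sketch of one such stage; the channel counts, kernel sizes, and normalization choices are assumptions rather than the paper's exact specification.

```python
# Minimal sketch of one encoder stage with a residual connection and
# 3D max pooling. Layer hyperparameters are assumed.
import torch
import torch.nn as nn

class ResidualEncoderStage3D(nn.Module):
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm3d(out_ch),
        )
        # 1x1x1 projection so the skip path matches the output channels
        self.skip = nn.Conv3d(in_ch, out_ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)
        self.down = nn.MaxPool3d(kernel_size=2)  # 3D max pooling

    def forward(self, x: torch.Tensor):
        feat = self.act(self.body(x) + self.skip(x))  # residual connection
        return self.down(feat), feat  # pooled output + skip for the decoder

stage = ResidualEncoderStage3D(1, 32)
pooled, skip = stage(torch.randn(1, 1, 32, 64, 64))
print(pooled.shape, skip.shape)  # [1, 32, 16, 32, 32] [1, 32, 32, 64, 64]
```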
U-Net performance with and without CAM.
| U-Net method | Dice (%) | Sensitivity (%) | Specificity (%) | Precision (%) |  | Accuracy (%) |
|---|---|---|---|---|---|---|
| Without CAM | 88.61 | 97.45 | 93.12 | 93.01 | 93.21 | 93.21 |
| With CAM | 95.94 | 97.62 | 90.43 | 89.87 | 95.56 | 95.14 |
V-Net performance with and without CAM.
| V-Net method | Dice (%) | Sensitivity (%) | Specificity (%) | Precision (%) |  | Accuracy (%) |
|---|---|---|---|---|---|---|
| Without CAM | 87.35 | 88.29 | 84.74 | 86.51 | 91.14 | 91.63 |
| With CAM | 95.75 | 96.96 | 89.77 | 89.21 | 94.91 | 94.48 |
SegChaNet performance with and without CAM.
| SegChaNet | Dice (%) | Sensitivity (%) | Specificity (%) | Precision (%) |  | Accuracy (%) |
|---|---|---|---|---|---|---|
| Without CAM | 96.81 | 93.79 | 90.19 | 92.15 | 96.89 | 96.47 |
| With CAM | 98.48 | 92.82 | 94.08 | 96.66 | 98.49 | 98.90 |
Figure 5. SegChaNet, U-Net, and V-Net models with and without CAM.
Evaluation of segmentation accuracy (DSC and JI) for the three models.
| Models | DSC (training) | DSC (validation) | DSC (testing) | JI (testing) |
|---|---|---|---|---|
| V-Net | 0.953 | 0.907 | 0.893 | 0.949 |
| U-Net | 0.955 | 0.893 | 0.911 | 0.956 |
| SegChaNet | 0.989 | 0.947 | 0.937 | 0.957 |
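The table reports the Dice similarity coefficient (DSC) and Jaccard index (JI), both standard overlap metrics for binary segmentation masks. A minimal NumPy sketch of how both are computed:

```python
# Dice similarity coefficient (DSC) and Jaccard index (JI) for binary
# segmentation masks, as reported in the table above.
import numpy as np

def dice_jaccard(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7):
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    dsc = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    ji = (inter + eps) / (np.logical_or(pred, target).sum() + eps)
    return dsc, ji

pred = np.array([[1, 1, 0], [0, 1, 0]])
target = np.array([[1, 1, 0], [0, 0, 1]])
print(dice_jaccard(pred, target))  # (~0.667, ~0.5)
```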
Figure 6. Dice coefficients of the applied models.
Figure 7. Images highlighted using Grad-CAM.
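Figure 7 visualizes the regions driving the network's predictions with Grad-CAM, which weights a layer's feature maps by their pooled gradients. Below is a minimal hook-based sketch; the toy model and target layer are stand-ins, since SegChaNet's actual layer names are not given here.

```python
# Minimal Grad-CAM sketch using forward/backward hooks. The model and
# target layer are illustrative stand-ins, not SegChaNet itself.
import torch
import torch.nn as nn
import torch.nn.functional as F

def grad_cam(model: nn.Module, layer: nn.Module, x: torch.Tensor) -> torch.Tensor:
    acts, grads = [], []
    h1 = layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    model(x).sum().backward()                # scalar score to differentiate
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)  # pooled channel weights
    cam = F.relu((weights * acts[0]).sum(dim=1))       # weighted activation map
    return cam / (cam.max() + 1e-7)                    # normalize to [0, 1]

model = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(8, 1, 3, padding=1))
heatmap = grad_cam(model, model[0], torch.randn(1, 1, 64, 64))
print(heatmap.shape)  # torch.Size([1, 64, 64])
```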
Figure 8. SegChaNet's best and worst performance scores.
Comparison of the performance of the SegChaNet segmentation model with other state-of-the-art works.
| Author (year) [reference] | Dataset | Accuracy | Qualitative analysis | Conclusion |
|---|---|---|---|---|
| Skourt, B. A., et al. (2018) | MV, MI, and ME (union); auto: MV phase | 0.95 | Number of patients with nDSC 1 (within 1 mm uncertainty): 7. No discernible differences between b-spline and demons | Autocontouring can produce sharp edges and corners. |
| Wouter, R. P. H., et al. (2021) | All contours drawn manually; auto: ME or MI, depending on the amount of artifacts | 0.76 | NA | In the main breathing phase, ITV and GTV agree well with manual contours. |
| Mingjie, X., et al. (2019) | Manual: every stage; auto: inherited from the MI phase | 0.95 | Number of patients needing manual adjustment | Good agreement between auto and manual contours. |
| Xu, M., et al. (2019) | Manual: all phases; auto: propagated from the MI phase | 0.92 | Minimal; NA | Although autocontouring is precise, it produces larger shapes. |
| Qinhua, H., et al. (2020) | Manual: every stage; auto: propagated from the ME phase | 0.94 | NA | Deformed contours agree well with physician-drawn contours. |
| Chiu, T. W., et al. (2021) | Manual: every stage; auto: propagated from the ME phase | 0.63 | The propagated IGTVs were mostly within the mIGTVs | The rigid-body propagation method generates the ITV within a 1 mm margin of error. |
| Jiang, J., et al. (2018) | Manual: ME (expert) and MI phase; auto: ME phase | 0.74 | NA | The algorithm generated more precise results; segmentation results differ from those in previously published papers. |
| SegChaNet (the proposed network) | 46,698 CT images with and without tumors from The Cancer Imaging Archive; auto: ME phase | 0.989 | The applied preprocessing steps | A combined network with two primary components and multiple CAMs; no manual contours are employed. |