| Literature DB >> 35884460 |
Jinhee Park1,2, Hyunmo Yang3, Hyun-Jin Roh4, Woonggyu Jung3, Gil-Jin Jang1,5.
Abstract
Cervical cancer can be prevented and treated more effectively if it is diagnosed early. Colposcopy, a clinical examination of the cervix region, is an efficient method for cervical cancer screening and early detection. Cervix region segmentation significantly affects the performance of computer-aided diagnosis from colposcopy images, particularly cervical intraepithelial neoplasia (CIN) classification. However, there are few studies of cervix segmentation in colposcopy, and none of fully unsupervised cervix region detection without image pre- and post-processing. In this study, we propose a deep learning-based unsupervised method that identifies cervix regions without pre- and post-processing. A new loss function and a novel scheduling scheme for the baseline W-Net are proposed for fully unsupervised cervix region segmentation in colposcopy. In our experiments, the proposed method achieved the best cervix segmentation performance, with a Dice coefficient of 0.71, at lower computational cost. It produced cervix segmentation masks with fewer outliers and can be applied before CIN detection or other diagnoses to improve diagnostic performance. Our results demonstrate that the proposed method not only assists medical specialists in practical diagnosis but also shows the potential of unsupervised segmentation approaches in colposcopy.
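The Dice coefficient reported in the abstract measures overlap between a predicted mask and a ground-truth mask. A minimal sketch of how it is typically computed for binary segmentation masks (the function name and the toy masks below are illustrative, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks (1 = cervix region).

    Dice = 2|A ∩ B| / (|A| + |B|); eps guards against empty masks.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy 4x4 masks: the prediction covers 3 of the 4 ground-truth pixels.
gt = np.array([[0, 0, 0, 0],
               [0, 1, 1, 0],
               [0, 1, 1, 0],
               [0, 0, 0, 0]])
pr = np.array([[0, 0, 0, 0],
               [0, 1, 1, 0],
               [0, 1, 0, 0],
               [0, 0, 0, 0]])
print(dice_coefficient(pr, gt))  # 2*3 / (4+3) ≈ 0.8571
```

A Dice of 0.71, as reported for the proposed method, indicates substantially overlapping but not pixel-perfect masks, which is typical for unsupervised segmentation.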
Keywords: W-Net; cervical cancer screening; colposcopy; unsupervised learning; unsupervised segmentation
Year: 2022 PMID: 35884460 PMCID: PMC9317688 DOI: 10.3390/cancers14143400
Source DB: PubMed Journal: Cancers (Basel) ISSN: 2072-6694 Impact factor: 6.575
Figure 1Overview of the proposed W-Net with CT-loss.
Figure 2Proposed encoder-weighted learning scheme.
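The record does not spell out the encoder-weighted (EW) learning scheme, but in a W-Net the encoder produces the segmentation while the decoder reconstructs the input, so a scheme of this kind can be sketched as a scheduled weight on the encoder's loss term. The schedule shape, weight bounds, and function names below are all assumptions for illustration, not the paper's actual formulation:

```python
def encoder_weight(epoch, total_epochs, w_max=2.0, w_min=1.0):
    """Hypothetical schedule: emphasize the encoder (segmentation) loss
    early in training, decaying linearly to equal weighting."""
    frac = epoch / max(total_epochs - 1, 1)
    return w_max - (w_max - w_min) * frac

def combined_loss(enc_loss, rec_loss, epoch, total_epochs):
    """Weighted sum of the encoder loss and the decoder's
    reconstruction loss under the schedule above."""
    w = encoder_weight(epoch, total_epochs)
    return w * enc_loss + rec_loss

print(encoder_weight(0, 10))            # 2.0 (encoder emphasized at start)
print(encoder_weight(9, 10))            # 1.0 (equal weighting at the end)
print(combined_loss(1.0, 0.5, 0, 10))   # 2.5
```

Weighting the encoder more heavily is one plausible way to steer an unsupervised W-Net toward better segmentation masks without labels; the paper's reported parameter reduction (3.27 M vs. 12.3 M) suggests its scheme also changes which parts of the network are trained.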
Performance comparison results of cervical ROI segmentation.
| | Graphcut W-Net | CT-Loss W-Net | CNN-Based | CT-Loss W-Net + EW |
|---|---|---|---|---|
| Number of parameters | 12.3 M | 12.3 M | 3.55 M | 3.27 M |
| Training time | 10.6 h | 4 h | 4.4 h | 3.4 h |
| Dice coefficient | 0.6120 | 0.6870 | 0.6789 | 0.7100 |
Result of significance test between the different methods.
| Compared Methods | p-Value |
|---|---|
| Graphcut W-Net vs. Proposed 1 | 9.36 × 10⁻²¹ |
| CNN-based vs. Proposed 1 | 0.86 |
| Graphcut W-Net vs. Proposed 2 | 4.91 × 10⁻²⁰ |
| CNN-based vs. Proposed 2 | 5.60 × 10⁻⁶ |
Figure 3Examples of cervical ROI segmentation comparison.
Performance comparison for different weight settings in encoder-weighted learning.
| CT-Loss W-Net + EW | EW (…) | EW (…) | EW (…) | EW (…) |
|---|---|---|---|---|
| Dice coefficient | 0.6591 | 0.6823 | 0.7100 | 0.6908 |
| p-value | 4.28 × 10⁻¹⁴ | 3.08 × 10⁻⁸ | - | 1.00 × 10⁻⁴ |