Yen-Po Wang1,2,3,4, Ying-Chun Jheng1,4,5, Kuang-Yi Sung1,2,4, Hung-En Lin1,2,4, I-Fang Hsin1,2,4, Ping-Hsien Chen1,2,4, Yuan-Chia Chu6,7,8, David Lu1, Yuan-Jen Wang4,9, Ming-Chih Hou1,2,4, Fa-Yauh Lee2,4, Ching-Liang Lu1,2,3,4.
Abstract
BACKGROUND: Adequate bowel cleansing is important for colonoscopy performance evaluation. Current bowel cleansing evaluation scales are subjective, with wide variation in consistency among physicians and low reported rates of accuracy. We aim to use machine learning to develop a fully automatic segmentation method for the objective evaluation of the adequacy of colon preparation.
Keywords: U-NET; artificial intelligence; automated segmentation; colonoscopy; colonoscopy preparation quality
Year: 2022 PMID: 35328166 PMCID: PMC8947406 DOI: 10.3390/diagnostics12030613
Source DB: PubMed Journal: Diagnostics (Basel) ISSN: 2075-4418
Figure 1. The manual segmentation samples. The figure shows the different types of fecal residue that were annotated and used in this study.
Figure 2. The architecture of U-Net. U-Net contains two parts: an encoder and a decoder. The encoder extracts features from the input image, and those features are passed to the decoder as the key information for deciding whether each pixel belongs to the target region. The red and green lines indicate the encoder and decoder, respectively, in the U-Net AI model.
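The record implies a standard U-Net with an interchangeable backbone. As a minimal sketch (assuming the segmentation_models_pytorch package, which is not named in the record), the encoder-decoder pairing described above can be built in a few lines:

```python
import segmentation_models_pytorch as smp  # assumed implementation, not confirmed by the paper

# U-Net: the encoder extracts multi-scale features from the input frame,
# and the decoder uses them to score each pixel as target or background.
model = smp.Unet(
    encoder_name="efficientnet-b5",  # backbone reported in the training table below
    encoder_weights="imagenet",      # start the encoder from ImageNet pretraining
    in_channels=3,                   # RGB colonoscopy frames
    classes=1,                       # one binary mask (fecal residue vs. background)
    activation=None,                 # raw logits; pair with a sigmoid-based loss
)
```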
Comparison of accuracy using U-Net with different encoders.
| Model | Top 1 Accuracy (%) | Top 5 Accuracy (%) | Parameters (M) |
|---|---|---|---|
| VGG19 | 71.1 | 89.8 | 143 |
| ResNet34 | 73.31 | 91.4 | 26 |
| ResNet50+SE | 76.86 | 93.3 | 28 |
| ResNeXt50 | 77.15 | 94.25 | 25 |
| SENet-154 | 82.7 | 96.2 | 145.8 |
| Inception V3 | 78 | 93.9 | 23.8 |
| DenseNet121 | 74.5 | 91.8 | 8 |
| MobileNetV2 | 74.9 | 92.5 | 6 |
| EfficientNet-B5 | 83.3 | 96.7 | 30 |
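If an off-the-shelf implementation such as segmentation_models_pytorch is used (an assumption; the record does not name one), swapping encoders is a one-argument change, and the parameter counts listed above can be sanity-checked directly:

```python
import segmentation_models_pytorch as smp

# Count encoder parameters for a few of the candidates in the table.
# The string names follow smp's conventions, which is an assumption here.
for name in ["vgg19", "resnet34", "densenet121", "mobilenet_v2", "efficientnet-b5"]:
    unet = smp.Unet(encoder_name=name, encoder_weights=None, classes=1)
    n_params = sum(p.numel() for p in unet.encoder.parameters()) / 1e6
    print(f"{name}: {n_params:.1f}M encoder parameters")
```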
Figure 3. The major parameters used in this study. The confusion matrix contains four quantities. The yellow area (true positive, TP) is the intersection of the ground-truth area and the AI-predicted area. The union of the red (false negative, FN) and yellow (TP) areas is the ground-truth area; the union of the blue (false positive, FP) and yellow (TP) areas is the AI-predicted area. The remaining area outside the union of the ground-truth and AI-predicted areas is the true negative (TN).
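The caption maps directly onto binary-mask arithmetic. A small NumPy sketch of the four confusion-matrix areas and the derived IOU and pixel accuracy (function names are illustrative, not from the paper):

```python
import numpy as np

def confusion_areas(gt, pred):
    # gt, pred: masks of the same shape (ground truth, AI prediction)
    gt, pred = gt.astype(bool), pred.astype(bool)
    tp = np.logical_and(gt, pred).sum()    # yellow: intersection of both areas
    fn = np.logical_and(gt, ~pred).sum()   # red: ground truth the model missed
    fp = np.logical_and(~gt, pred).sum()   # blue: prediction outside ground truth
    tn = np.logical_and(~gt, ~pred).sum()  # everything outside the union
    return tp, fn, fp, tn

def iou(gt, pred):
    tp, fn, fp, _ = confusion_areas(gt, pred)
    union = tp + fn + fp
    return tp / union if union else 1.0

def pixel_accuracy(gt, pred):
    tp, fn, fp, tn = confusion_areas(gt, pred)
    return (tp + tn) / (tp + fn + fp + tn)
```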
The detailed parameters for training the model.
| Parameter | Value |
|---|---|
| Model | U-Net |
| Backbone | EfficientNet-B5 |
| Optimizer | Adam |
| Loss function | Binary cross-entropy |
| Learning rate | 1e-4 |
| Batch size | 4 |
| Training epochs | 30 |
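A minimal training loop wiring these hyperparameters together, assuming the model sketch above; `train_dataset` is a hypothetical dataset yielding (image, mask) float-tensor pairs, and binary cross entropy is applied to raw logits via BCEWithLogitsLoss:

```python
import torch
from torch.utils.data import DataLoader

EPOCHS, BATCH_SIZE, LR = 30, 4, 1e-4        # values from the table above

device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)                    # U-Net from the earlier sketch
loader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
optimizer = torch.optim.Adam(model.parameters(), lr=LR)
criterion = torch.nn.BCEWithLogitsLoss()    # binary cross entropy on logits

for epoch in range(EPOCHS):
    model.train()
    for images, masks in loader:            # masks shaped like the logits (B, 1, H, W)
        images, masks = images.to(device), masks.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), masks)
        loss.backward()
        optimizer.step()
```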
The detailed performance of the final trained model.
| Metric | Mean | S.E.M. |
|---|---|---|
| IOU | 0.607 | 0.17 |
| Accuracy | 0.947 | 0.0067 |
| Predicted area | 0.131 | 0.0038 |
| Ground-truth area | 0.148 | 0.0043 |
| Intersection area | 0.113 | 0.0036 |
| Nonunion area | 0.834 | 0.0045 |
IOU = intersection over union.
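The row labels are consistent with every value being a fraction of the total image area: the reported means satisfy predicted + ground truth − intersection ≈ 1 − nonunion (0.131 + 0.148 − 0.113 ≈ 1 − 0.834). Under that reading (an interpretation, not stated explicitly in the record), the per-image quantities could be computed as:

```python
import numpy as np

def area_fractions(gt, pred):
    # gt, pred: masks; all results are normalized by the total pixel count
    gt, pred = gt.astype(bool), pred.astype(bool)
    total = gt.size
    inter = np.logical_and(gt, pred).sum()
    union = np.logical_or(gt, pred).sum()
    return {
        "Predicted area": pred.sum() / total,
        "Ground-truth area": gt.sum() / total,
        "Intersection area": inter / total,
        "Nonunion area": 1 - union / total,
        "IOU": inter / union if union else 1.0,
    }
```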
Figure 4. Better annotation examples from the AI model segmentation. The intersection over union (IOU) of these samples reached approximately 0.90, meaning that the AI annotation closely matched the manual labeling. In these figures, the left, middle, and right columns show the raw, manually annotated, and AI-annotated images, respectively. The green and blue lines indicate the segmentations labeled by the endoscopy technicians and the trained AI model, respectively.
Figure 5. Worse annotation examples from the AI model segmentation. In each image, the left, middle, and right columns show the raw, manually annotated, and AI-annotated images, respectively. The green and blue lines indicate the segmentations labeled by the endoscopy technicians and the trained AI model, respectively. The IOU of these samples was less than 0.5.
Figure 6. Scatterplots comparing the areas produced by the manual and automatic segmentation methods.
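A matplotlib sketch of such a comparison; the area arrays here are synthetic stand-ins, not the study's data:

```python
import numpy as np
import matplotlib.pyplot as plt

# Synthetic per-image area fractions standing in for the two methods.
rng = np.random.default_rng(0)
manual_areas = rng.uniform(0.0, 0.5, size=100)
auto_areas = np.clip(manual_areas + rng.normal(0.0, 0.03, size=100), 0.0, 1.0)

plt.figure(figsize=(4, 4))
plt.scatter(manual_areas, auto_areas, s=10, alpha=0.6)
plt.plot([0, 0.5], [0, 0.5], "k--", linewidth=1)  # identity line: perfect agreement
plt.xlabel("Manual segmentation area (fraction of image)")
plt.ylabel("Automatic segmentation area (fraction of image)")
plt.tight_layout()
plt.show()
```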