Vivek Kumar Singh, Md Mostafa Kamal Sarker, Yasmine Makhlouf, Stephanie G Craig, Matthew P Humphries, Maurice B Loughrey, Jacqueline A James, Manuel Salto-Tellez, Paul O'Reilly, Perry Maxwell.
Abstract
In this article, we propose ICOSeg, a lightweight deep learning model that accurately segments the immune-checkpoint biomarker Inducible T-cell COStimulator (ICOS) protein in colon cancer from immunohistochemistry (IHC) slide patches. The proposed model relies on the MobileViT network, which includes two main components: convolutional neural network (CNN) layers for extracting spatial features, and a transformer block for capturing a global feature representation from IHC patch images. ICOSeg uses encoder and decoder sub-networks. The encoder extracts salient features of positive cells (i.e., shape, texture, intensity, and margin), and the decoder reconstructs these features into segmentation maps. To improve the model's generalization capability, we added a channel attention mechanism to the bottleneck of the encoder. This mechanism highlights the most relevant cell structures by discriminating between the targeted cells and background tissue. We performed extensive experiments on our in-house dataset. The experimental results confirm that the proposed model outperforms state-of-the-art methods while using 8× fewer parameters.
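The abstract does not specify the internals of the channel attention block at the encoder bottleneck. A minimal squeeze-and-excitation-style sketch in NumPy is shown below; the reduction ratio and the two-layer gating MLP (with its weights) are illustrative assumptions, not the paper's exact design.

```python
import numpy as np

def channel_attention(feature_map, reduction=4, weights=None):
    """Squeeze-and-excitation-style channel attention over a (C, H, W) map.

    The reduction ratio and gating MLP here are illustrative choices; the
    paper only states that channel attention is added at the encoder
    bottleneck, not its exact internals.
    """
    c, h, w = feature_map.shape
    # Squeeze: global average pooling per channel -> (C,)
    squeezed = feature_map.mean(axis=(1, 2))
    if weights is None:
        # Hypothetical fixed weights standing in for learned MLP parameters.
        rng = np.random.default_rng(0)
        w1 = rng.standard_normal((c // reduction, c)) * 0.1
        w2 = rng.standard_normal((c, c // reduction)) * 0.1
    else:
        w1, w2 = weights
    # Excitation: bottleneck MLP with ReLU, then a sigmoid gate -> (C,)
    hidden = np.maximum(w1 @ squeezed, 0.0)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))
    # Rescale each channel by its gate value, emphasizing relevant channels.
    return feature_map * gate[:, None, None]
```

Each channel is scaled by a single gate value in (0, 1), so the block can suppress channels that respond to background tissue while preserving those that respond to ICOS-positive cells.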
Keywords: ICOS; channel attention; colon cancer; deep learning; immunohistochemistry
Year: 2022 PMID: 36010903 PMCID: PMC9406218 DOI: 10.3390/cancers14163910
Source DB: PubMed Journal: Cancers (Basel) ISSN: 2072-6694 Impact factor: 6.575
Figure 1. Examples of IHC patches extracted at 40× magnification containing ICOS-positive (i.e., brown cytoplasmic and blue nuclear staining) and ICOS-negative (blue nuclear stain only) cells in colon cancer.
Figure 2. Overview of the proposed segmentation model. The input IHC patch image is fed to the encoder, which extracts spatial and global feature information; the decoder reconstructs the features into the segmentation map. C refers to feature concatenation, and the dashed (- -) lines correspond to the skip connections between each encoder layer and its corresponding decoder layer.
Examining the effect of the adopted attention block on segmentation results (mean ± standard deviation) when incorporated with the baseline model. The best results are highlighted in bold.

| Model | Dice | AJI | Sensitivity | Specificity | Params (M) | FPS |
|---|---|---|---|---|---|---|
| Baseline | 75.09 ± 13.52 | 59.67 ± 12.90 | 81.34 ± 15.99 | 99.54 ± 0.43 | 8.1 | 161 |
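The Dice and AJI columns measure overlap between predicted and ground-truth masks. A minimal NumPy sketch of pixel-level Dice and Jaccard is shown below; note that the AJI reported in the paper additionally matches ground-truth and predicted objects before aggregating per-object Jaccard scores, which is not reproduced here.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def jaccard_index(pred, target, eps=1e-7):
    """Pixel-level Jaccard (IoU): |A∩B| / |A∪B|. AJI extends this with
    object-level matching, which is omitted in this sketch."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    return (inter + eps) / (union + eps)
```

For a perfect prediction both scores are 1; the tables report these values scaled to percentages.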
Ablation study of the loss functions. The best results are highlighted in bold.

| Model | Dice | AJI | Sensitivity | Specificity | Params (M) | FPS |
|---|---|---|---|---|---|---|
|  |  |  |  |  | 8.1 | 154 |
|  |  |  |  |  | 8.1 | 154 |
Segmentation results (mean ± standard deviation) of the proposed model compared with six state-of-the-art methods. The best results are highlighted in bold.

| Model | Dice | AJI | Sensitivity | Specificity | Params (M) | FPS |
|---|---|---|---|---|---|---|
| U-Net [ ] |  |  |  |  | 34.5 | 116 |
| FCN [ ] |  |  |  |  | 135.53 | 57 |
| Attention U-Net [ ] |  |  |  |  | 34.87 | 106 |
| DeepLabv3+ [ ] |  |  |  |  | 59.33 | 103 |
| U-Net++ [ ] |  |  |  |  | 9.61 | 168 |
| Efficient-UNet [ ] | 73.44 | 58.98 | 81.81 | 99.30 | 66.0 | 83 |
| ICOSeg (proposed) |  |  |  |  |  |  |
Figure 3. Boxplots of Dice and AJI scores on the test dataset. The colored boxes show the score ranges of the different segmentation methods; the black line inside each box marks the median, the box limits span the interquartile range from Q1 to Q3 (the 25th to 75th percentiles), the whiskers extend 1.5× the interquartile range beyond the box limits, and all values outside the whiskers are considered outliers.
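The boxplot elements described in the caption can be computed directly. The sketch below returns the 1.5×IQR fence positions rather than clipping the whiskers to the most extreme in-range data point, matching the caption's simplified description; `boxplot_stats` is an illustrative helper, not part of the paper's code.

```python
import numpy as np

def boxplot_stats(scores):
    """Median, box limits (Q1, Q3), 1.5*IQR whisker fences, and outliers."""
    scores = np.asarray(scores, dtype=float)
    q1, med, q3 = np.percentile(scores, [25, 50, 75])
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    outliers = scores[(scores < lo) | (scores > hi)]
    return {"q1": q1, "median": med, "q3": q3,
            "whisker_low": lo, "whisker_high": hi, "outliers": outliers}
```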
Figure 4. Illustration of the proposed model's segmentation results. The color map is as follows: orange/yellow (true positive), green (false positive), and red (false negative).
Figure 5. Illustration of the proposed model's segmentation results compared with five state-of-the-art methods. The color map is as follows: orange/yellow (true positive), green (false positive), and red (false negative).
Figure 6. Boxplot of the total number of cells identified by ICOSeg compared with the actual cell counts in the ground truth.
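Per-image cell counts such as those compared in Figure 6 can be obtained from a binary segmentation mask by connected-component labelling. The sketch below uses a simple 4-connected BFS flood fill; the paper's exact counting step may differ (e.g., in connectivity or minimum-size filtering).

```python
import numpy as np
from collections import deque

def count_cells(mask):
    """Count cells in a binary mask via 4-connected component labelling."""
    mask = np.asarray(mask, dtype=bool)
    visited = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not visited[i, j]:
                count += 1  # new component found
                # BFS flood fill marks every pixel of this component.
                queue = deque([(i, j)])
                visited[i, j] = True
                while queue:
                    y, x = queue.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and mask[ny, nx] and not visited[ny, nx]):
                            visited[ny, nx] = True
                            queue.append((ny, nx))
    return count
```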
Figure 7. Four examples illustrating the limitations of ICOSeg, in which it failed to accurately identify and segment cell boundaries. Red refers to the ground truth, green corresponds to the ICOSeg prediction, and orange/yellow represents the cell-boundary overlap common to both the ground truth and the prediction. The four examples (a–d) show positive cells in brown.