Muthu Subash Kavitha, Prakash Gangadaran, Aurelia Jackson, Balu Alagar Venmathi Maran, Takio Kurita, Byeong-Cheol Ahn.
Abstract
Early detection of colorectal cancer can significantly facilitate clinicians' decision-making and reduce their workload. This can be achieved using automatic systems with endoscopic and histological images. Recently, the success of deep learning has motivated the development of image- and video-based polyp identification and segmentation. Currently, most diagnostic colonoscopy rooms utilize artificial intelligence methods that are considered to perform well in predicting invasive cancer. Convolutional neural network-based architectures, together with image patches and preprocesses are often widely used. Furthermore, learning transfer and end-to-end learning techniques have been adopted for detection and localization tasks, which improve accuracy and reduce user dependence with limited datasets. However, explainable deep networks that provide transparency, interpretability, reliability, and fairness in clinical diagnostics are preferred. In this review, we summarize the latest advances in such models, with or without transparency, for the prediction of colorectal cancer and also address the knowledge gap in the upcoming technology.Entities:
Keywords: artificial intelligence; colorectal cancer; interpretation; neural network; transfer learning; transparency
Year: 2022 PMID: 35954370 PMCID: PMC9367621 DOI: 10.3390/cancers14153707
Source DB: PubMed Journal: Cancers (Basel) ISSN: 2072-6694 Impact factor: 6.575
Figure 1. An overview of deep learning models in colon cancer detection and diagnosis. Created with BioRender.com (accessed on 1 July 2022).
Figure 2. An overview of the transfer learning model. Created with BioRender.com (accessed on 1 July 2022).
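The transfer learning workflow depicted in Figure 2 — reusing a network pretrained on a large dataset and fine-tuning only a new task-specific head on limited medical images — can be sketched as follows. This is a minimal, hypothetical PyTorch sketch, not the method of any study cited here; the small `Backbone` stands in for a real pretrained network (e.g., one trained on ImageNet) so the example runs without downloading weights, and the patch size and class count are assumptions.

```python
import torch
import torch.nn as nn

# Stand-in for a pretrained feature extractor; in practice this would be a
# network such as ResNet or EfficientNet with weights learned on ImageNet.
class Backbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def forward(self, x):
        return self.features(x)

backbone = Backbone()
for p in backbone.parameters():   # freeze the "pretrained" weights
    p.requires_grad = False

head = nn.Linear(8, 2)            # new head: e.g., polyp vs. normal (assumed classes)
model = nn.Sequential(backbone, head)

# Only the new head is updated during fine-tuning on the small medical dataset.
trainable = [n for n, p in model.named_parameters() if p.requires_grad]

x = torch.randn(4, 3, 150, 150)   # a batch of 150 x 150 image patches
logits = model(x)                 # shape: (4, 2)
```

Freezing the backbone is what makes training feasible with limited annotated data: only the small head's parameters are optimized, which reduces overfitting and user-specific tuning effort.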
General summary of studies related to explainable artificial intelligence and sampling methods using deep learning techniques for colon cancer detection.
| References | Methods | Imaging Modality |
|---|---|---|
| Yao et al., 2020 | Deep Attention Multiple Instance Survival Learning (DeepAttnMISL) with Multiple Instance Fully Convolutional Network (MI-FCN); compared against DeepMISL, Finetuned-WSISA-LassoCox, Finetuned-WSISA-MTLSA, WSISA-LassoCox, and WSISA-MTLSA | Tissue sample images taken through colonoscopy and grouped into WSI clusters. |
| Sirinukunwattana et al., 2016 | Spatially Constrained Convolutional Neural Network (SC-CNN) | 100 H&E-stained histology images (500 × 500 pixels) of colorectal adenocarcinomas, cropped from WSIs acquired with an Omnyx VL120 scanner. |
| Sabol et al., 2020 | Cumulative Fuzzy Class Membership Criterion (CFCMC) | H&E tissue slides cut into 5000 small patches of 150 × 150 pixels, each annotated as one of eight tissue classes. |
| Korbar et al., 2017 | ResNet | 176 H&E-stained WSIs collected from patients who underwent colorectal cancer screening. |
| Hägele et al., 2020 | GoogLeNet from the Caffe Model Zoo | H&E-stained images selected from the TCGA Research Network, with their WSIs annotated. |
| Koziarski et al., 2020 | MobileNet | A colorectal cancer histology dataset of 5000 patches of different tissue types, each 150 × 150 pixels. |
| Kainz et al., 2017 | Object-Net and Separator-Net | 165 annotated, H&E-stained images of colorectal adenocarcinomas collected at 20× magnification using a Zeiss MIRAX MIDI scanner. |
| Hong et al., 2020 | U-Net with EfficientNet-B4 and EfficientNet-B5 encoders | Datasets from ETIS-Larib (MICCAI 2015 polyp detection challenge) and CVC-ColonDB. |
| Shapcott et al., 2019 | MatConvNet and TensorFlow "cifar10" with the PyCharm IDE | 142 H&E-stained colorectal cancer images at 40× magnification from the TCGA COAD dataset (Genomic Data Commons Portal, 2018). |
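Several of the studies above (Sabol et al., Koziarski et al.) work on fixed-size tiles cut from whole-slide images rather than the full slide. The patch-extraction step can be sketched as below; this is a hypothetical NumPy illustration under assumed parameters (non-overlapping tiles, the 150 × 150 pixel size reported in the table), not any study's actual pipeline.

```python
import numpy as np

def tile_image(wsi: np.ndarray, size: int = 150) -> list:
    """Cut an image array into non-overlapping size x size patches,
    discarding any partial tiles at the right and bottom edges."""
    h, w = wsi.shape[:2]
    tiles = []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            tiles.append(wsi[y:y + size, x:x + size])
    return tiles

# Stand-in for a scanned H&E slide region (real WSIs are gigapixel-scale
# and are read with libraries such as OpenSlide, region by region).
slide = np.zeros((600, 450, 3), dtype=np.uint8)
patches = tile_image(slide)   # 4 rows x 3 columns = 12 patches
```

Each patch is then annotated (e.g., as one of eight tissue classes in Sabol et al.) and fed to the classifier, which keeps GPU memory bounded regardless of slide size.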