| Literature DB >> 34671677 |
Dong Sui1, Kang Zhang1, Weifeng Liu1, Jing Chen2, Xiaoxuan Ma1, Zhaofeng Tian2.
Abstract
Colorectal cancer remains a cancer with a high death rate, and from a clinical viewpoint the diagnosis of the tumour region is critical for doctors. As data accumulate, however, this task demands substantial time and labor, with large variances between different doctors. Despite advances in computer vision, detecting and segmenting the colorectal cancer region from CT or MRI image series has remained a great challenge over the past decades, and there is still strong demand for automatic diagnosis. In this paper, we propose a novel transfer learning protocol, called CST: a unified framework for the colorectal cancer region detection and segmentation task based on the transformer model, which effectively constructs cancer region detection and segmentation jointly. To achieve higher detection accuracy, we incorporate an autoencoder-based image-level decision approach that leverages the image-level decision of a cancer slice. We also compared our framework with one-stage and two-stage object detection methods; the results show that our proposed method achieves better results on both detection and segmentation tasks. This proposed framework offers another pathway for colorectal cancer screening by way of artificial intelligence.
Year: 2021 PMID: 34671677 PMCID: PMC8523251 DOI: 10.1155/2021/6207964
Source DB: PubMed Journal: Biomed Res Int Impact factor: 3.411
Figure 1. Examples of colorectal cancer in MRI images. (a1–a4, c1–c4) are the original image slices from the MRI DICOM series; (b1–b4, d1–d4) are the tumour region masks labelled by doctors.
Figure 2. Schematic diagram of the proposed multitask learning framework for colorectal cancer region mining.
Figure 3. Tumour region detection results and comparisons.
Tumour region detection results reported by the study.
| Methods | CRM+ (%) | CRM- (%) | Average (%) | Total (%) |
|---|---|---|---|---|
| Faster-RCNN | 67.1 | 62.3 | 64.7 | 65.6 |
| Yolo-v3 | 43.4 | 37.6 | 40.5 | 41.2 |
| Ours | 87.5 | 89.1 | 88.3 | 88.6 |
Tumour region segmentation accuracy reported by the study.
| Methods | CRM+ (%) | CRM- (%) | Average (%) | Total (%) |
|---|---|---|---|---|
| U-Net | 82.1 | 81.7 | 81.9 | 81.8 |
| FCN | 67.2 | 66.4 | 66.8 | 66.5 |
| Ours | 91.2 | 90.6 | 90.9 | 91.1 |
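As a quick sanity check on the two tables above, the Average column in each is consistent with the simple arithmetic mean of the CRM+ and CRM- columns; the Total column, which differs slightly, is presumably weighted by the number of CRM+ versus CRM- cases (an assumption, since the case split is not given in this record). A minimal sketch verifying this:

```python
# Reported (CRM+, CRM-, Average) values, in percent, from the two tables.
detection = {
    "Faster-RCNN": (67.1, 62.3, 64.7),
    "Yolo-v3":     (43.4, 37.6, 40.5),
    "Ours":        (87.5, 89.1, 88.3),
}
segmentation = {
    "U-Net": (82.1, 81.7, 81.9),
    "FCN":   (67.2, 66.4, 66.8),
    "Ours":  (91.2, 90.6, 90.9),
}

def averages_consistent(table):
    """Check that Average equals the unweighted mean of CRM+ and CRM-."""
    for method, (crm_pos, crm_neg, avg) in table.items():
        mean = round((crm_pos + crm_neg) / 2, 1)
        if mean != avg:
            return False
    return True

print(averages_consistent(detection))     # → True
print(averages_consistent(segmentation))  # → True
```

All six rows pass, supporting the reading of Average as an unweighted per-group mean.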