| Literature DB >> 30651947 |
Chia-Yen Lee¹, Guan-Lin Chen¹, Zhong-Xuan Zhang¹, Yi-Hong Chou², Chih-Chung Hsu³.
Abstract
Sonography is currently an effective means of cancer screening and diagnosis because it is convenient and harmless to humans. Traditionally, the lesion boundary is first segmented and then classified to judge whether a tumor is benign or malignant. However, sonograms often contain considerable speckle noise and intensity inhomogeneity. This study proposes a novel benign/malignant tumor classification system that combines intensity inhomogeneity correction with a stacked denoising autoencoder (SDAE) and is suitable for small datasets. A classifier is built by extracting features through the multilayer training of the SDAE; the deep learning algorithm analyzes imaging features automatically for image classification, giving the system high efficiency and robust discrimination. Two datasets (one private, one public) are used to train the deep learning models. For each dataset, two groups of test images are compared: the original images and the images after intensity inhomogeneity correction. The results show that applying the deep learning algorithm to sonograms after intensity inhomogeneity correction significantly increases tumor classification accuracy. This study demonstrates that it is important to preprocess the images to highlight their features before feeding them to deep learning models; the resulting classification accuracy is better than training on the original images alone.
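The SDAE described in the abstract is built by stacking denoising-autoencoder (DAE) layers, each trained to reconstruct a clean input from a noise-corrupted copy. The following is an illustrative sketch only: the layer sizes, corruption noise, learning rate, and stand-in random data are assumptions for demonstration, not the paper's actual settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# One DAE layer: encode a noise-corrupted input, decode, and minimize the
# mean-squared reconstruction error against the CLEAN input.
n_in, n_hid, lr = 64, 16, 0.1
W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = rng.random((256, n_in))  # stand-in data in [0, 1] (e.g. image patches)

for epoch in range(200):
    X_noisy = X + rng.normal(0, 0.1, X.shape)   # corrupt the input
    H = sigmoid(X_noisy @ W1 + b1)              # encode
    X_hat = sigmoid(H @ W2 + b2)                # decode
    err = X_hat - X                             # compare with the clean input
    # backpropagation of the mean-squared reconstruction error
    d_out = err * X_hat * (1 - X_hat)
    d_hid = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out / len(X);       b2 -= lr * d_out.mean(0)
    W1 -= lr * X_noisy.T @ d_hid / len(X); b1 -= lr * d_hid.mean(0)

mse = np.mean((X_hat - X) ** 2)
```

To stack layers into an SDAE, the trained hidden activations `H` would serve as the input for training the next DAE layer, with a supervised classifier fine-tuned on top.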
Year: 2018 PMID: 30651947 PMCID: PMC6311841 DOI: 10.1155/2018/8413403
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 2.682
Figure 1. Flowchart of the reconstruction error of the DAE used in this paper.
Figure 2. Flowchart for deep learning.
Figure 3. Malignant: (a) original image and (b) image after correction.
Figure 4. Benign: (a) original image and (b) image after correction.
Figure 5. Examples of original test images in the private dataset.
Figure 6. Examples of test images after correction in the private dataset.
Figure 7. Examples of original test images in the BUSIS dataset.
Figure 8. Examples of test images after correction in the BUSIS dataset.
Classification results of the private database (values shown as original/corrected).

| | SDAE | AlexNet | Inception v3 | ResNet | DenseNet |
|---|---|---|---|---|---|
| TP | 8/11 | 5/6 | 6/7 | 7/7 | 6/6 |
| TN | 9/11 | 7/4 | 6/5 | 6/7 | 8/7 |
| FP | 4/2 | 7/10 | 8/9 | 8/7 | 6/7 |
| FN | 6/3 | 8/7 | 7/6 | 6/6 | 7/7 |
| Precision | 0.67/0.85 | 0.42/0.38 | 0.43/0.44 | 0.5/0.5 | 0.5/0.46 |
| Recall | 0.57/0.79 | 0.38/0.46 | 0.46/0.54 | 0.46/0.54 | 0.46/0.46 |
| Specificity | 0.69/0.85 | 0.5/0.28 | 0.43/0.36 | 0.57/0.5 | 0.57/0.5 |
| Accuracy | 0.63/0.82 | 0.44/0.37 | 0.44/0.44 | 0.51/0.52 | 0.52/0.48 |
| F1 score | 0.62/0.85 | 0.4/0.41 | 0.44/0.48 | 0.48/0.52 | 0.48/0.46 |
Classification results of the BUSIS database (values shown as original/corrected).

| | SDAE | AlexNet | Inception v3 | ResNet | DenseNet |
|---|---|---|---|---|---|
| TP | 17/18 | 10/16 | 13/15 | 16/16 | 15/17 |
| TN | 13/15 | 12/14 | 14/16 | 13/10 | 14/13 |
| FP | 7/5 | 8/6 | 6/4 | 7/10 | 6/7 |
| FN | 3/2 | 10/4 | 7/5 | 4/4 | 5/3 |
| Precision | 0.71/0.78 | 0.56/0.73 | 0.68/0.79 | 0.69/0.61 | 0.71/0.71 |
| Recall | 0.85/0.9 | 0.5/0.8 | 0.65/0.75 | 0.8/0.8 | 0.75/0.85 |
| Specificity | 0.65/0.75 | 0.6/0.7 | 0.7/0.8 | 0.65/0.5 | 0.7/0.65 |
| Accuracy | 0.75/0.83 | 0.55/0.75 | 0.68/0.78 | 0.73/0.65 | 0.73/0.75 |
| F1 score | 0.77/0.83 | 0.53/0.76 | 0.67/0.77 | 0.74/0.70 | 0.73/0.77 |
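The summary rows in both tables follow the standard confusion-matrix formulas. As a minimal sketch, the metrics can be recomputed from the raw TP/TN/FP/FN counts; the example call uses the SDAE original-image counts on the BUSIS dataset from the table above.

```python
def metrics(tp, tn, fp, fn):
    """Standard binary-classification metrics from confusion-matrix counts."""
    precision   = tp / (tp + fp)
    recall      = tp / (tp + fn)                  # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    f1          = 2 * precision * recall / (precision + recall)
    return {name: round(value, 2) for name, value in [
        ("precision", precision), ("recall", recall),
        ("specificity", specificity), ("accuracy", accuracy), ("f1", f1)]}

# SDAE on the BUSIS dataset, original images (first column of the table above)
print(metrics(tp=17, tn=13, fp=7, fn=3))
```

Recomputing each column this way reproduces the tabulated values to two decimal places (small discrepancies can occur where the original paper rounded precision and recall before computing F1).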