Mubashir Ahmad, Syed Furqan Qadri, M Usman Ashraf, Khalid Subhi, Salabat Khan, Syeda Shamaila Zareen, Salman Qadri.
Abstract
Segmentation of the liver in computed tomography (CT) images is an important step toward quantitative biomarkers for computer-aided decision support and precise medical diagnosis. To overcome the difficulties of liver segmentation caused by fuzzy boundaries, a stacked autoencoder (SAE) is applied to learn the most discriminative features of the liver among the other tissues in abdominal images. In this paper, we propose a patch-based deep learning method for segmenting the liver from CT images using an SAE. Unlike traditional machine learning methods, which learn pixel by pixel, our algorithm uses patches to learn representations and identify the liver region. We preprocessed the whole dataset to obtain enhanced images and converted each image into many overlapping patches. These patches are given as input to the SAE for unsupervised feature learning. Finally, the learned features are fine-tuned with the image labels, and classification is performed to produce a probability map in a supervised manner. Experimental results demonstrate that the proposed algorithm performs well on test images, achieving a dice similarity coefficient (DSC) of 96.47%, which is better than other methods in the same domain.
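The overlapping-patch extraction described in the abstract can be sketched as follows. The patch size, stride, and image size are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def extract_patches(image, patch=32, stride=16):
    """Slide a patch x patch window over a 2-D image with the given stride."""
    h, w = image.shape
    patches = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            patches.append(image[y:y + patch, x:x + patch])
    return np.stack(patches)

ct_slice = np.zeros((128, 128))            # placeholder for a CT slice
patches = extract_patches(ct_slice)        # shape (49, 32, 32) for these settings
# Each patch is flattened into a vector before being fed to the SAE.
flat = patches.reshape(len(patches), -1)   # shape (49, 1024)
```

A stride smaller than the patch size yields the overlapping patches the method relies on; the amount of overlap trades off training-set size against redundancy.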
Year: 2022 PMID: 35634046 PMCID: PMC9132625 DOI: 10.1155/2022/2665283
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. The workflow of the proposed liver segmentation model.
Figure 2. Ambiguous borders in the raw image (left), contrast enhancement (middle), and Gaussian noise addition (right).
Figure 3. Extraction of positive and negative patches from CT images.
Figure 4. Detection of the liver by the proposed model.
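The preprocessing illustrated in Figure 2 can be sketched as a simple min-max contrast stretch followed by additive Gaussian noise. The exact enhancement method and noise level are assumptions for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def enhance_contrast(image):
    """Linearly stretch intensities to the full [0, 1] range."""
    lo, hi = image.min(), image.max()
    return (image - lo) / (hi - lo + 1e-8)

def add_gaussian_noise(image, sigma=0.05):
    """Additive zero-mean Gaussian noise, as in the augmentation step."""
    return image + rng.normal(0.0, sigma, size=image.shape)

raw = np.arange(16.0).reshape(4, 4)       # placeholder image
enhanced = enhance_contrast(raw)          # values now span roughly [0, 1]
noisy = add_gaussian_noise(enhanced)
```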
Learning parameters of the stacked autoencoder.
| Parameter name | Value |
|---|---|
| Iterations in pretraining | 80 |
| Iterations in fine tuning | 3000 |
| Learning rate in pretraining | 0.001 |
| Learning rate in fine tuning | 0.0001 |
| The activation function in each hidden layer | Sigmoid |
Figure 5. Parameter selection (a), and the training (b) and validation (c) results of the proposed model.
Results of the proposed model on the MICCAI-Sliver'07 dataset.
| Case | Sensitivity (%) | Specificity (%) | Accuracy (%) | Precision (%) | JSC (%) | DSC (%) |
|---|---|---|---|---|---|---|
| #1 | 97.56 | 97.78 | 97.66 | 97.96 | 95.61 | 97.75 |
| #2 | 96.77 | 96.48 | 96.63 | 96.74 | 93.71 | 96.75 |
| #3 | 96.97 | 95.24 | 96.15 | 95.78 | 92.99 | 96.15 |
| #4 | 96.59 | 93.54 | 95.24 | 94.95 | 91.86 | 96.76 |
| #5 | 95.51 | 92.87 | 94.34 | 94.42 | 90.40 | 94.95 |
| Mean | 96.68 | 95.82 | 96.00 | 95.97 | 92.91 | 96.47 |
| SD | 0.75 | 2.02 | 1.27 | 1.42 | 1.96 | 1.03 |
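The metrics reported in the table follow their standard definitions over binary masks; a minimal sketch, with the prediction/ground-truth masks as toy placeholders:

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Standard segmentation metrics from binary prediction and ground-truth masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.sum(pred & truth)
    fp = np.sum(pred & ~truth)
    fn = np.sum(~pred & truth)
    tn = np.sum(~pred & ~truth)
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
        "precision": tp / (tp + fp),
        "jsc": tp / (tp + fp + fn),          # Jaccard similarity coefficient
        "dsc": 2 * tp / (2 * tp + fp + fn),  # dice similarity coefficient
    }

pred = np.array([1, 1, 0, 0, 1, 0])
truth = np.array([1, 0, 0, 0, 1, 1])
m = overlap_metrics(pred, truth)
# Here tp=2, fp=1, fn=1, tn=2, so m["dsc"] == 2/3 and m["jsc"] == 0.5.
```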
Figure 6. Segmentation results of the proposed model. Green indicates the original labels; red shows the results generated by our model.
Comparative results of the proposed model with other methods.
| Methods | DSC (%) |
|---|---|
| [ | 94.03 |
| [ | 93.00 |
| [ | 95.41 |
| [ | 94.80 |
| Our method | 96.47 |