| Literature DB >> 33985449 |
Abstract
BACKGROUND: A brain tumor is a growth of abnormal cells inside the brain; these cells can grow into malignant or benign tumors. Segmentation of tumors from MRI images using image processing techniques began decades ago. Image-processing-based brain tumor segmentation can be divided into three categories: conventional image processing methods, machine learning methods, and deep learning methods. Conventional methods lack segmentation accuracy because of the complex spatial variation of tumors. Machine learning methods are a good alternative: methods such as SVM, KNN, fuzzy clustering, and combinations of these provide good accuracy with reasonable processing speed. However, handling the various feature extraction methods while maintaining accuracy to medical standards remains a limitation of machine learning. In deep learning, features are extracted automatically in the various stages of the network, and accuracy meets medical standards; yet the huge database requirement and high computational time still pose problems. To overcome the limitations above, we propose an unsupervised dual autoencoder with latent space optimization. The model requires only normal MRI images for its training, reducing the huge tumor database requirement. Trained on a set of normal-class data, an autoencoder can reproduce the feature vector at its output layer; it works well on normal data but fails to reproduce an anomaly. A classical autoencoder, however, suffers from poor latent space optimization. We reduce the latent space loss of the classical autoencoder using an auxiliary encoder together with feature optimization based on singular value decomposition (SVD). The training patches are not traditional square patches: we take both horizontal and vertical patches to keep both local and global appearance features in the training set.
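Two steps of the pipeline described above lend themselves to a short sketch: tiling a slice into non-square horizontal and vertical patches, and compressing latent features with SVD. The following is a minimal illustration, assuming non-overlapping patches and a simple top-k singular-vector projection; the image size and `k` are placeholders, not values from the paper.

```python
import numpy as np

def extract_patches(img, patch_h, patch_w):
    """Tile an image into non-overlapping patch_h x patch_w patches,
    each flattened to a row vector."""
    H, W = img.shape
    patches = []
    for r in range(0, H - patch_h + 1, patch_h):
        for c in range(0, W - patch_w + 1, patch_w):
            patches.append(img[r:r + patch_h, c:c + patch_w].ravel())
    return np.array(patches)

def svd_reduce(latent, k):
    """Project centered latent feature vectors onto their
    top-k right singular vectors (SVD-based feature optimization)."""
    Xc = latent - latent.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

img = np.random.default_rng(0).random((128, 128))  # stand-in for an MRI slice
horizontal = extract_patches(img, 16, 64)  # wide patches: row-wise context
vertical = extract_patches(img, 64, 16)    # tall patches: column-wise context
reduced = svd_reduce(horizontal, 5)        # lower-dimensional latent features
```

Horizontal and vertical patches cover the same pixels with different neighborhood shapes, which is how the method keeps both local and global appearance cues in the training set.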
An autoencoder is trained separately on the horizontal and the vertical patches. During training, a logistic sigmoid transfer function is used in both the encoder and the decoder. The SGD optimizer is used with an initial learning rate of 0.001, and the maximum number of epochs is 4000. The network is trained in MATLAB 2018a on a 3.7 GHz processor with an NVIDIA GPU and 16 GB of RAM.
Entities:
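The training setup above (logistic sigmoid units in both encoder and decoder, plain SGD, initial learning rate 0.001) can be sketched outside MATLAB as a toy single-hidden-layer autoencoder in numpy; the layer sizes, epoch count, and data here are scaled-down stand-ins, not the paper's configuration.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hid=8, lr=0.001, epochs=200, seed=0):
    """Toy autoencoder with sigmoid transfer functions in encoder and
    decoder, trained with full-batch SGD on reconstruction error."""
    rng = np.random.default_rng(seed)
    n_in = X.shape[1]
    W1 = rng.normal(0, 0.1, (n_in, n_hid)); b1 = np.zeros(n_hid)
    W2 = rng.normal(0, 0.1, (n_hid, n_in)); b2 = np.zeros(n_in)
    losses = []
    for _ in range(epochs):
        h = sigmoid(X @ W1 + b1)        # encoder
        out = sigmoid(h @ W2 + b2)      # decoder
        err = out - X                   # reconstruction error
        losses.append(float((err ** 2).mean()))
        d_out = err * out * (1 - out)   # backprop through decoder sigmoid
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
        W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)
    return (W1, b1, W2, b2), losses

X = np.random.default_rng(1).random((32, 64))  # "normal" patches only
params, losses = train_autoencoder(X)
```

Because the model is fit only on normal-class patches, a low final reconstruction loss on normal data is exactly what makes high reconstruction error a usable anomaly signal later.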
Keywords: Anomaly prediction; Brain tumor; Computer vision; Deep learning; MRI
Mesh:
Year: 2021 PMID: 33985449 PMCID: PMC8117624 DOI: 10.1186/s12880-021-00614-3
Source DB: PubMed Journal: BMC Med Imaging ISSN: 1471-2342 Impact factor: 1.930
Fig. 1 Overview of the proposed method
Fig. 2 Top row: examples of normal MRI images from the HCP dataset. Bottom row: brain tumor images from the BRATS 2015 dataset
Fig. 3 Top row: brain tumor images with skull regions. Bottom row: skull-removed images using the active contour method
Fig. 4 Plot a shows the latent space features collected from the two encoders and one auxiliary encoder. Plot b shows the lower-dimensional features obtained after the SVD method
Fig. 5 Example of the proposed method on BRATS 2015 meningioma. a Original tumor image, b dual autoencoder inference, c segmented tumor
Fig. 6 Example of the proposed method on BRATS 2015 glioma. a Original tumor image, b dual autoencoder inference, c segmented tumor
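Figures 5 and 6 show the final step: going from the dual-autoencoder inference to a segmented tumor. A minimal sketch of that step, assuming segmentation is obtained by thresholding the per-pixel reconstruction error (the threshold value here is a hypothetical placeholder, not from the paper):

```python
import numpy as np

def anomaly_mask(original, reconstruction, thresh=0.2):
    """Flag pixels whose absolute reconstruction error exceeds thresh.
    Since the autoencoders are trained only on normal anatomy, large
    reconstruction error marks candidate tumor pixels."""
    return np.abs(original - reconstruction) > thresh

original = np.array([[0.1, 0.9], [0.2, 0.3]])
reconstruction = np.array([[0.1, 0.2], [0.2, 0.3]])
mask = anomaly_mask(original, reconstruction)
# only the poorly reconstructed pixel (error 0.7) is flagged
```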
Comparison of the proposed method with various deep learning and machine learning methods
| Method | Patch size | Type | DSC | PPV | Sensitivity |
|---|---|---|---|---|---|
| Proposed | 16 × 64 | Meningioma | 0.84 | 0.88 | 0.89 |
| Proposed | 16 × 32 | Meningioma | 0.83 | 0.85 | 0.87 |
| Proposed | 16 × 16 | Meningioma | 0.81 | 0.82 | 0.83 |
| Proposed | 16 × 64 | Glioma | 0.82 | 0.84 | 0.86 |
| Proposed | 16 × 32 | Glioma | 0.81 | 0.825 | 0.85 |
| Proposed | 16 × 16 | Glioma | 0.78 | 0.80 | 0.81 |
|  |  |  |  | 0.85 | 0.86 |
| CNN [ | 16 × 16 | Glioma | 0.88 | 0.89 | 0.92 |
| SVM [ |  | Glioma | 0.80 | 0.81 | 0.82 |
| KNN, SVM [ |  | Various | 0.81 | 0.815 | 0.83 |
| ANN [ |  | Various | 0.83 | 0.82 | 0.84 |
| RescueNet [ |  | Glioma | 0.94 | 0.85 | 0.88 |
| 3D-GAN [ |  | Glioma | 0.87 | 0.88 | 0.88 |
Comparison of AUC for different autoencoder architectures
| Method | AUC |
|---|---|
| Proposed | 0.995 |
| Adversarial [ | 0.994 |
| AE | 0.764 |
| VAE | 0.816 |
| VAE-H | 0.74 |
| eeVAE | 0.867 |
| ADAE | 0.892 |
| EB | 0.95 |