| Literature DB >> 36052054 |
S K B Sangeetha, V Muthukumaran, K Deeba, Hariharan Rajadurai, V Maheshwari, Gemmachis Teshite Dalu.
Abstract
Progress has been slower in applications such as medical imaging, where data or labels are difficult or costly to obtain. If deep learning techniques can be applied reliably, automated workflows and more sophisticated analysis may become possible in previously unexplored areas of medical imaging. In addition, many characteristics of medical images, such as their high resolution, three-dimensional nature, and anatomical detail across multiple size scales, can increase the complexity of their analysis. To address these issues, this study employs multiconvolutional transfer learning (MCTL) to apply deep learning to small medical imaging datasets. MCTL is a transfer-learning-based model that enables deep learning with small datasets: an initial baseline model is used in the transfer-learning process to learn new features on a smaller target dataset. In this study, 3D MRI images of brain tumors are classified using a convolutional autoencoder method. Clinical diagnosis with magnetic resonance imaging (MRI) currently requires expensive and invasive contrast-enhancing procedures, which motivates methods that work on unenhanced MRI. MCTL is shown to increase accuracy by 1.5%, indicating that small targets are more easily detected with MCTL. This research can be applied to a wide range of medical imaging and diagnostic procedures, including improving the accuracy of brain tumor severity diagnosis through the use of MRI.
Year: 2022 PMID: 36052054 PMCID: PMC9427231 DOI: 10.1155/2022/8722476
Source DB: PubMed Journal: Comput Intell Neurosci
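The abstract describes transfer learning with a frozen baseline model whose features are reused to learn on a smaller target dataset. The paper's MCTL model is a deep multiconvolutional network; the sketch below is only a minimal numpy stand-in for that idea, with a fixed random projection playing the role of the pretrained base and a logistic head trained on a small synthetic dataset. All shapes, data, and names here are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Pretrained" feature extractor: weights are frozen and never updated,
# standing in for the baseline model in the transfer-learning step.
W_frozen = rng.normal(size=(20, 8))

def extract_features(x):
    """Frozen base: a fixed nonlinear projection of the input."""
    return np.tanh(x @ W_frozen)

# Small target dataset (e.g., a handful of labelled scans), synthetic here.
X = rng.normal(size=(40, 20))
y = (X[:, 0] > 0).astype(float)  # made-up binary labels

# Only the new head is trained on the small dataset
# (logistic regression fitted by gradient descent).
F = extract_features(X)
w = np.zeros(8)
b = 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid predictions
    grad_w = F.T @ (p - y) / len(y)         # log-loss gradient w.r.t. head weights
    grad_b = np.mean(p - y)
    w -= 0.5 * grad_w                       # base stays frozen; head updates
    b -= 0.5 * grad_b

accuracy = np.mean((p > 0.5) == (y == 1))
print(f"head accuracy on the toy training set: {accuracy:.2f}")
```

The design point this illustrates is the one the abstract relies on: because the base is frozen, the number of parameters fitted on the small target dataset stays tiny, which is what makes learning from limited medical data feasible.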
Figure 1. Multi-convolutional transfer learning (MCTL) architecture.
Proposed hyperparameter combinations.
| Convolutional layers | Filters | Nodes in fully connected layer |
|---|---|---|
| 1 | 16 | 16, 32, 64 |
| 2 | 32 | 16, 32, 64 |
| 3 | 64 | 16, 32, 64 |
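The table above pairs each network depth with a filter count and three candidate fully connected layer sizes. A plain enumeration of that grid, assuming (as the table suggests) that the filter count is tied to the depth while the fully connected size varies freely, can be sketched as:

```python
from itertools import product

# Rows of the hyperparameter table: (convolutional layers, filters).
conv_configs = [(1, 16), (2, 32), (3, 64)]
# Candidate fully connected layer sizes, shared by every row.
fc_nodes = [16, 32, 64]

# Cartesian product of the rows with the fully connected choices.
grid = [
    {"conv_layers": layers, "filters": filters, "fc_nodes": nodes}
    for (layers, filters), nodes in product(conv_configs, fc_nodes)
]
print(len(grid))  # 3 rows x 3 fully connected choices = 9 configurations
```

This is only one reading of the table; the paper may have searched a different pairing of depths and filter counts.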
Figure 2. 3D brain tumor MRI images of the REMBRANDT dataset.
Figure 3. Autoencoder output predictions.
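The abstract states that the 3D brain tumor MRIs are processed with a convolutional autoencoder. As a much-reduced stand-in for that component, the sketch below trains a one-layer linear autoencoder on synthetic flattened "images" and checks that reconstruction error falls; the dimensions, data, and linear (rather than convolutional) encoder are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 16))                 # 64 flattened toy images

W_enc = rng.normal(scale=0.1, size=(16, 4))   # encoder: 16 -> 4 code
W_dec = rng.normal(scale=0.1, size=(4, 16))   # decoder: 4 -> 16 reconstruction

def loss(X, W_enc, W_dec):
    """Mean squared reconstruction error."""
    return np.mean((X @ W_enc @ W_dec - X) ** 2)

initial = loss(X, W_enc, W_dec)
lr = 0.01
for _ in range(300):
    Z = X @ W_enc                    # encode
    R = Z @ W_dec                    # decode
    E = R - X                        # reconstruction error
    W_dec -= lr * (Z.T @ E) / len(X)           # gradient step on decoder
    W_enc -= lr * (X.T @ (E @ W_dec.T)) / len(X)  # gradient step on encoder

final = loss(X, W_enc, W_dec)
print(f"reconstruction MSE: {initial:.3f} -> {final:.3f}")
```

The bottleneck code (here 4 dimensions) is the part a downstream classifier would consume; in the paper that role is played by the convolutional autoencoder's learned features.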
Proposed convolution layers performance.
| No. of convolution layers | Filters | Accuracy (%) | Sensitivity (%) | Specificity (%) |
|---|---|---|---|---|
| 1 | 16 | 83–90 | 85–91 | 60–72 |
| 2 | 32 | 84–92 | 80–87 | 62–68 |
| 3 | 64 | 86–94 | 78–85 | 61–73 |
Performance metrics - comparison analysis.
| Reference | Method | Accuracy | Sensitivity | Specificity |
|---|---|---|---|---|
| [ ] | Support vector machine (SVM) | 87–89% | 78.5–81% | 62–64.5% |
| [ ] | K-nearest neighbor (KNN) | 85–86.2% | 81.1–82.6% | 63.6–67.2% |
| [ ] | Naive Bayes | 78.2–83.4% | 84.8–85% | 62–65% |
| [ ] | Deep neural network (DNN) | 89.3–91% | 83–85% | 64–69.8% |
| [ ] | Convolutional neural network (CNN) | 91.3–92.7% | 82.2–84% | 68–69% |
| | Proposed MCTL | 93.2–94% | 78–85% | 61–73% |
Figure 4. Accuracy.
Figure 5. Sensitivity.
Figure 6. Specificity.
Figure 7. SVM.
Figure 8. KNN.
Figure 9. Naive Bayes.
Figure 10. DNN.
Figure 11. CNN.
Figure 12. MCTL.