Deepika Saravagi1, Shweta Agrawal2, Manisha Saravagi3, Md Habibur Rahman4.
Abstract
Convolutional neural network (CNN) models have made tremendous progress in the medical domain in recent years. The application of the CNN model is restricted due to a huge number of redundant and unnecessary parameters. In this paper, the weight and unit pruning strategy are used to reduce the complexity of the CNN model so that it can be used on small devices for the diagnosis of lumbar spondylolisthesis. Experimental results reveal that by removing 90% of network load, the unit pruning strategy outperforms weight pruning while achieving 94.12% accuracy. Thus, only 30% (around 850532 out of 3955102) and 10% (around 251512 out of 3955102) of the parameters from each layer contribute to the outcome during weight and neuron pruning, respectively. The proposed pruned model had achieved higher accuracy as compared to the prior model suggested for lumbar spondylolisthesis diagnosis.Entities:
Convolutional neural network (CNN) models have made tremendous progress in the medical domain in recent years, but their deployment is restricted by the huge number of redundant and unnecessary parameters they carry. In this paper, weight and unit pruning strategies are used to reduce the complexity of a CNN model so that it can run on small devices for the diagnosis of lumbar spondylolisthesis. Experimental results reveal that, with 90% of the network weights removed, the unit pruning strategy outperforms weight pruning while still achieving 94.12% accuracy. Thus, only about 30% (around 850532 out of 3955102) and 10% (around 251512 out of 3955102) of the parameters in each layer contribute to the outcome under weight and neuron pruning, respectively. The proposed pruned model achieved higher accuracy than the prior model suggested for lumbar spondylolisthesis diagnosis.
Year: 2022 PMID: 35592683 PMCID: PMC9113885 DOI: 10.1155/2022/2722315
Source DB: PubMed Journal: Comput Math Methods Med ISSN: 1748-670X Impact factor: 2.809
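The two strategies compared in the abstract can be illustrated with a minimal magnitude-based sketch (not the authors' code): weight pruning zeroes the smallest-magnitude individual weights of a layer, while unit (neuron) pruning zeroes whole output units whose weight columns have the smallest L2 norms. The function names and the plain-list weight representation here are illustrative assumptions.

```python
def weight_prune(matrix, sparsity):
    """Zero roughly the `sparsity` fraction of smallest-magnitude weights."""
    flat = sorted(abs(w) for row in matrix for w in row)
    k = int(len(flat) * sparsity)                # number of weights to drop
    threshold = flat[k - 1] if k > 0 else -1.0   # magnitudes at/below this are zeroed
    return [[0.0 if abs(w) <= threshold else w for w in row] for row in matrix]


def unit_prune(matrix, sparsity):
    """Zero entire columns (output units) with the smallest L2 norms."""
    n_cols = len(matrix[0])
    norms = [sum(row[j] ** 2 for row in matrix) ** 0.5 for j in range(n_cols)]
    k = int(n_cols * sparsity)                   # number of units to drop
    drop = set(sorted(range(n_cols), key=lambda j: norms[j])[:k])
    return [[0.0 if j in drop else w for j, w in enumerate(row)] for row in matrix]
```

Weight pruning yields unstructured sparsity (scattered zeros), whereas unit pruning removes whole neurons and therefore shrinks the effective layer width, which is why the paper compares them separately at each sparsity level.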
Figure 1. Lumbar spine X-ray radiographs.
Summary of the literature review.
| Source | Purpose | Major findings | Accuracy (%) |
|---|---|---|---|
| Azar et al. [ | Detection of vertebral column pathology | A decision support tool is proposed for detecting pathology of the vertebral column using three types of decision tree classifiers. | SDT: 81.94 |
| Indriana et al. [ | Classification of vertebral column disease | An ensembled decision tree (J48) with bagging is used as the classification model. | Ensemble model: 85 |
| Choudhary et al. [ | IDC classification | Pruned models performed better than the original pretrained models. | 92.07 |
| Samala et al. [ | Breast cancer diagnosis | A layered pathway evolution strategy is presented to compress a deep convolutional neural network for mass classification in digital breast tomosynthesis (DBT). | — |
| Hu et al. [ | Network trimming | Zero-activation neurons are unnecessary and can be eliminated without impacting the network's overall accuracy. | 90.278 |
| Hajabdollahi et al. [ | Retinal disease screening and diagnosis | A simplification approach for CNNs is proposed based on a combination of quantization and pruning. | 76 |
| Hajabdollahi et al. [ | Detection and analysis of diabetic retinopathy | Hierarchical pruning gradually removes connections, filter channels, and filters to simplify the network topology. | 92 |
| Mantzaris et al. [ | Medical disease prediction | A genetic algorithm (GA) is used to prune probabilistic neural networks. | 85.5 |
| Yin et al. [ | Diabetes diagnosis | DiabDeep is a framework for pervasive diabetes detection that combines efficient neural networks (named DiabNNs) with off-the-shelf WMSs. | 94 |
| Chen and Zhao [ | Reducing complex CNNs | Pruning is conducted at the layer level; redundant parameters are discovered by studying the features learned in the convolutional layers. | 93.03 |
| Han et al. [ | Deep compression | A three-stage pipeline (pruning, trained quantization, and Huffman coding) is introduced to reduce the storage requirements of neural networks. | — |
| Li et al. [ | Pruning and compressing | Filters recognized as having little impact on output accuracy are pruned from CNNs. | — |
| Horry et al. [ | Lung cancer diagnosis | An image preprocessing pipeline that homogenizes and debases chest X-ray images improves generalization and helps build a low-cost, accessible DL system for lung cancer screening. | 89 |
| Xiang et al. [ | Skin disease diagnosis | Without changing the model size, model performance improves after fine-tuning. | 83.5 |
Figure 2. CNN model for the spondylolisthesis problem.
Figure 3. Pruning approaches used in the study.
Figure 4. Block diagram of the proposed CNN model.
Allocation of tensors in the proposed CNN model.
| Layer (type) | Output shape | Param No. |
|---|---|---|
| Dense_1 (Dense) | (None, 1000) | 2353000 |
| Dense_2 (Dense) | (None, 1000) | 1001000 |
| Dense_3 (Dense) | (None, 500) | 500500 |
| Dense_4 (Dense) | (None, 200) | 100200 |
| Dense_5 (Dense) | (None, 2) | 402 |
Total params: 3955102
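The per-layer counts in the table follow the standard dense-layer formula, params = inputs × units + units. A quick check reproduces every row, assuming a flattened input of 2352 features (consistent with Dense_1's 2353000 parameters; 2352 would correspond, for example, to a 28 × 28 × 3 image, which is an inference, not a figure stated in the table):

```python
def dense_params(n_in, n_out):
    # weight matrix (n_in * n_out) plus one bias per output unit
    return n_in * n_out + n_out

# input features, then the units of Dense_1 .. Dense_5 from the table
layer_sizes = [2352, 1000, 1000, 500, 200, 2]
counts = [dense_params(a, b) for a, b in zip(layer_sizes, layer_sizes[1:])]
total = sum(counts)   # matches the table's 3955102 total
```

This is also where the pruning savings come from: the first two dense layers alone hold about 85% of the 3955102 parameters.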
Figure 5. Flowchart of the implemented CNN pruning process.
Dataset description.
| Test cases | 337 |
|---|---|
| Normal | 181 |
| Spondylolisthesis | 156 |
| Image type | X-ray images in .jpg |
Dataset statistics.
| Test cases | Train set | Test set | Total |
|---|---|---|---|
| Normal | 154 | 27 | 181 |
| Spondylolisthesis | 133 | 23 | 156 |
| Total | 287 | 50 | 337 |
Hyperparameters used during model training.
| Hyperparameter | Value |
|---|---|
| Loss function | categorical_crossentropy |
| Optimizer | Adam |
| Learning rate | 0.001 |
| Number of epochs | 20 |
| Steps per epoch | 20 |
| Batch size | 16 |
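These settings are mutually consistent with the dataset statistics above: each epoch draws steps_per_epoch × batch_size = 320 samples, slightly more than the 287-image training set, so one epoch covers the full training set (with a small amount of repetition). A quick sanity check (the 287 figure is taken from the dataset statistics table):

```python
steps_per_epoch = 20
batch_size = 16
train_size = 287                                  # from the dataset statistics table

samples_per_epoch = steps_per_epoch * batch_size  # samples drawn per epoch
covers_train_set = samples_per_epoch >= train_size
```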
Figure 6. CNN model's accuracy/loss graphs. (a) Training accuracy. (b) Training loss.
Performance of the sparse model at increasing sparsity levels.
| Index | Sparsity | Remaining params | loss_weight | acc_weight | loss_unit | acc_unit |
|---|---|---|---|---|---|---|
| 0 | 0.00 | 3955102 | 0.7973 | 0.9412 | 0.7973 | 0.9412 |
| 1 | 0.25 | 2666327 | 0.7675 | 0.9412 | 0.7751 | 0.9412 |
| 2 | 0.50 | 1577552 | 0.7367 | 0.9412 | 0.5957 | 0.9412 |
| 3 | 0.60 | 1198042 | 0.6987 | 0.9412 | 0.3913 | 0.9412 |
| 4 | 0.70 | 850532 | 0.5793 | 0.9412 | 0.3210 | 0.9412 |
| 5 | 0.80 | 535022 | 0.5641 | 0.8824 | 0.1988 | 0.9412 |
| 6 | 0.90 | 251512 | 0.4872 | 0.8824 | 0.3407 | 0.9412 |
| 7 | 0.95 | 121757 | 0.3813 | 0.8824 | 0.4970 | 0.7647 |
| 8 | 0.97 | 72095 | 0.3723 | 0.8235 | 0.6278 | 0.8235 |
| 9 | 0.99 | 23713 | 0.3452 | 0.8235 | 0.6467 | 0.7647 |
Figure 7. Sparse model's accuracy/loss graphs.