Tejalal Choudhary, Shubham Gujar, Anurag Goswami, Vipul Mishra, Tapas Badal.
Abstract
COVID-19 has become a worldwide pandemic and has significantly affected the global economy. The importance of early detection and treatment of the infection cannot be overstated. Traditional diagnosis techniques take more time to detect the infection. Although numerous deep learning-based automated solutions have recently been developed in this regard, the limited computational and battery power of resource-constrained devices makes it difficult to deploy trained models for real-time inference. In this paper, to detect the presence of COVID-19 in CT-scan images, an important weights-only transfer learning method is proposed for devices with limited run-time resources. In the proposed method, pre-trained models are made point-of-care-device friendly by pruning the less important weight parameters of the model. Experiments were performed on two popular models, VGG16 and ResNet34, and the empirical results showed that the pruned ResNet34 model achieved 95.47% accuracy, 0.9216 sensitivity, 0.9567 F-score, and 0.9942 specificity with 41.96% fewer FLOPs and 20.64% fewer weight parameters on the SARS-CoV-2 CT-scan dataset. The results of our experiments showed that the proposed method significantly reduces the run-time resource requirements of computationally intensive models and makes them ready to be utilized on point-of-care devices.
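The core idea of pruning less important weights can be illustrated with a minimal magnitude-based sketch. This is a generic toy on a plain numpy array, not the paper's exact importance criterion or pruning granularity for VGG16/ResNet34; the function name and ratio are illustrative assumptions.

```python
import numpy as np

def prune_by_magnitude(weights, prune_ratio):
    """Zero out the smallest-magnitude fraction of weights.

    Returns the pruned weights and a boolean mask of kept entries.
    A toy sketch of magnitude-based pruning, not the paper's method.
    """
    flat = np.abs(weights).ravel()
    k = int(len(flat) * prune_ratio)
    if k == 0:
        return weights.copy(), np.ones(weights.shape, dtype=bool)
    # Threshold at the k-th smallest absolute value
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask
```

In practice the pruned model is then fine-tuned to recover accuracy, which is the role transfer learning plays in the proposed approach.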
Keywords: Automated diagnosis; COVID-19; Convolutional neural network; Deep learning; Pruning
Year: 2022 PMID: 35875199 PMCID: PMC9289654 DOI: 10.1007/s10489-022-03893-7
Source DB: PubMed Journal: Appl Intell (Dordr) ISSN: 0924-669X Impact factor: 5.019
Fig. 1 Proposed important weights-only transfer learning approach
Fig. 2 COVID-19 positive (bottom row) and negative (top row) images from the dataset
Normal and infected images from the SARS-CoV-2 CT-scan dataset
| Class | Training | Validation | Testing | Total Images |
|---|---|---|---|---|
| Normal | 836 | 208 | 186 | 1,230 |
| Infected | 851 | 212 | 189 | 1,252 |
| Total | 1,687 | 420 | 375 | 2,482 |
Different performance measures for the ResNet34 and VGG16 on the test data (original pre-trained models)
| Model trained | Augmentation | Accuracy | Precision | Recall | F1-score | Specificity | ROC-AUC |
|---|---|---|---|---|---|---|---|
| ResNet34 | No | 95.73 | 0.9471 | 0.9676 | 0.9572 | 0.9474 | 0.9931 |
| ResNet34 | Yes | 97.87 | 0.9947 | 0.9641 | 0.9792 | 0.9944 | 0.9967 |
| VGG16, dense | No | 90.13 | 0.9206 | 0.8878 | 0.9039 | 0.9162 | 0.9667 |
| VGG16, dense | Yes | 89.87 | 0.9471 | 0.8647 | 0.9040 | 0.9405 | 0.9797 |
| VGG16, all | No | 90.40 | 0.9153 | 0.8964 | 0.9058 | 0.9121 | 0.9660 |
| VGG16, all | Yes | 96.53 | 0.9630 | 0.9681 | 0.9655 | 0.9626 | 0.9957 |
Different performance measures for the ResNet34 and VGG16 on the test data (pruned models)
| Model trained | Augmentation | Accuracy | Precision | Recall | F1-score | Specificity | ROC-AUC |
|---|---|---|---|---|---|---|---|
| ResNet34 | No | 94.93 | 0.9312 | 0.9670 | 0.9488 | 0.9326 | 0.9888 |
| ResNet34 | Yes | 95.47 | 0.9947 | 0.9216 | 0.9567 | 0.9942 | 0.9974 |
| VGG16, dense | No | 89.33 | 0.8730 | 0.9116 | 0.8919 | 0.8763 | 0.9669 |
| VGG16, dense | Yes | 89.33 | 0.8889 | 0.8984 | 0.8936 | 0.8883 | 0.9698 |
| VGG16, all | No | 93.07 | 0.9418 | 0.9223 | 0.9319 | 0.9396 | 0.9744 |
| VGG16, all | Yes | 92.80 | 0.9630 | 0.9010 | 0.9309 | 0.9595 | 0.9878 |
Fig. 3 The confusion matrix, precision-recall curve, and ROC curve for the VGG16 original (left) and pruned (right) model
Fig. 4 The confusion matrix, precision-recall curve, and ROC curve for the ResNet34 original (left) and pruned (right) model
Pruned and original model comparison (Aug. = augmentation, M = million, B = billion)
| Model | Aug. | Orig. Para (M) | Orig. FLOPs (B) | Orig. Acc. | Pruned Para (M) | Pruned FLOPs (B) | Pruned Acc. | Para red. (%) | FLOPs red. (%) | Acc. (±) |
|---|---|---|---|---|---|---|---|---|---|---|
| ResNet34 | No | 21.28 | 3.67 | 95.73 | 16.89 | 2.13 | 94.93 | 20.64 | 41.96 | -0.80 |
| ResNet34 | Yes | 21.28 | 3.67 | 97.87 | 16.89 | 2.13 | 95.47 | 20.64 | 41.96 | -2.40 |
| VGG16, dense | No | 134.26 | 15.49 | 90.13 | 78.33 | 3.49 | 89.33 | 41.66 | 77.47 | -0.80 |
| VGG16, dense | Yes | 134.26 | 15.49 | 89.87 | 78.33 | 3.49 | 89.33 | 41.66 | 77.47 | -0.53 |
| VGG16, all | No | 134.26 | 15.49 | 90.40 | 78.33 | 3.49 | 93.07 | 41.66 | 77.47 | 2.67 |
| VGG16, all | Yes | 134.26 | 15.49 | 96.53 | 78.33 | 3.49 | 92.80 | 41.66 | 77.47 | -3.73 |
Inference time (seconds), #parameters, FLOPs, and filters of the models
| Metric | VGG16 (original) | ResNet34 (original) | VGG16 (pruned) | ResNet34 (pruned) |
|---|---|---|---|---|
| Parameters (M) | 134.26 | 21.28 | 78.33 | 16.89 |
| Parameter reduction (%) | 0 | 0 | 41.66 | 20.64 |
| FLOPs (B) | 15.49 | 3.67 | 3.49 | 2.13 |
| FLOPs reduction (%) | 0 | 0 | 77.47 | 41.96 |
| Convolutional filters | 4224 | 8512 | 2073 | 7362 |
| Convolutional filter reduction (%) | 0 | 0 | 50.92 | 13.51 |
| GPU inference-time, single image (s) | 0.005219 | 0.004637 | 0.004154 | 0.004250 |
| GPU inference-time, test set (s) | 1.957030 | 1.738801 | 1.557787 | 1.593882 |
| CPU inference-time, single image (s) | 0.158153 | 0.058811 | 0.056285 | 0.040023 |
| CPU inference-time, test set (s) | 59.307411 | 22.054049 | 21.106688 | 15.008648 |
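The timing rows translate into the following speedups (a small check using the single-image CPU numbers from the table):

```python
def speedup(original_s, pruned_s):
    """How many times faster the pruned model runs than the original."""
    return round(original_s / pruned_s, 2)

# CPU, single image (seconds, from the table)
vgg16_cpu = speedup(0.158153, 0.056285)   # ~2.81x
resnet_cpu = speedup(0.058811, 0.040023)  # ~1.47x
```

The larger VGG16 speedup is consistent with its much larger FLOP reduction (77.47% vs. 41.96% for ResNet34).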
Fig. 5 Inference time (CPU and GPU) for a single image
Fig. 6 Inference time (CPU and GPU) on the entire test set for all the models
VGG16 model complexity analysis
| Block | Filters, layers | Params (before) | FLOPs (before) | Params (after) | FLOPs (after) | Param red. (%) | FLOP red. (%) |
|---|---|---|---|---|---|---|---|
| Conv block 1 | 64, 2 | 38720 | 1952448512 | 8374 | 424589312 | 78.37 | 78.25 |
| Conv block 2 | 128, 2 | 221440 | 2782560256 | 40822 | 514002944 | 81.57 | 81.53 |
| Conv block 3 | 256, 3 | 1475328 | 4629839872 | 348173 | 1093413440 | 76.40 | 76.38 |
| Conv block 4 | 512, 3 | 5899776 | 4626628608 | 1306313 | 1024720144 | 77.86 | 77.85 |
| Conv block 5 | 512, 3 | 7079424 | 1388269568 | 1837569 | 360513188 | 74.04 | 74.03 |
| FC1 | 4,096 (neuron) | 102764544 | 102764544 | 58007552 | 58007552 | 43.55 | 43.55 |
| FC2 | 4,096 (neuron) | 16781312 | 16781312 | 16781312 | 16781312 | 0.00 | 0.00 |
| FC3 | 2 (neuron) | 8194 | 8194 | 8194 | 8194 | 0.00 | 0.00 |
| Total | – | 134.26M | 15.49B | 78.33M | 3.49B | 41.66 | 77.47 |
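The per-block "before pruning" parameter counts can be reproduced from the standard Conv2d formula (k*k*in_channels weights plus one bias per output filter); a minimal check for the first two VGG16 blocks, each of which stacks two 3x3 convolutions:

```python
def conv2d_params(in_ch, out_ch, k=3, bias=True):
    """Parameters of one Conv2d layer: (k*k*in_ch + 1 bias) per filter."""
    return (k * k * in_ch + (1 if bias else 0)) * out_ch

# VGG16 conv block 1: two 3x3 layers, 3->64 then 64->64
block1 = conv2d_params(3, 64) + conv2d_params(64, 64)       # 38720
# VGG16 conv block 2: two 3x3 layers, 64->128 then 128->128
block2 = conv2d_params(64, 128) + conv2d_params(128, 128)   # 221440
```

The FLOP column depends on the input resolution and the counting convention (MACs vs. multiply-and-add counted separately), so it is not reproduced here.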
Comparison of the proposed important weights only approach with other methods on the SARS-CoV-2 dataset
| Method | Accuracy | Sensitivity | F1-score | Specificity | FLOP(%) | Para.(%) |
|---|---|---|---|---|---|---|
| [ | 91.5 | 0.915 | 0.915 | - | | |
| [ | 89.31 | 0.8240 | 0.8860 | 0.9634 | | |
| [ | 89.92 | 0.8680 | 0.8967 | 0.9309 | | |
| [ | 95.45 | 0.9523 | 0.9549 | 0.9567 | | |
| [ | 96.25 | 0.9629 | 0.9629 | 0.9621 | | |
| [ | 91.73 | 0.9350 | 0.9182 | - | | |
| [ | 94.96 | 0.9715 | 0.9503 | - | | |
| [ | 95.16 | 0.9671 | 0.9514 | - | | |
| [ | 90.83 | 0.8589 | 0.9087 | - | | |
| [ | 92 | 0.95 | 0.89 | - | | |
| [ | 94 | 0.98 | 0.94 | - | | |
| [ | 95.61 | - | - | - | | |
| **Proposed (pruned ResNet34)** | **95.47** | **0.9216** | **0.9567** | **0.9942** | **41.96** | **20.64** |
Bold text indicates the result of the proposed method