J Nirmaladevi1, M Vidhyalakshmi2, E Bijolin Edwin3, N Venkateswaran4, Vinay Avasthi5, Abdullah A Alarfaj6, Abdurahman Hajinur Hirad6, R K Rajendran7, Tegegne Ayalew Hailu8.
Abstract
The principal diagnostic instruments for COVID-19 still have serious shortcomings, and every capability and tool available in the field is being applied to combat the pandemic. Because the novel coronavirus (COVID-19) infection is highly contagious, large numbers of patients queue up for pulmonary X-rays, overloading physicians and radiology departments and significantly degrading the quality of care, diagnosis, and outbreak prevention. Given the scarcity of clinical resources such as intensive care units and mechanical ventilators in the face of this highly transmissible disease, it is critical to triage patients according to their risk categories. This research describes a novel application of a deep convolutional neural network (CNN) to assessing the severity of COVID-19 illness. Taking chest X-ray images as input, an unsupervised DCNN model is constructed and proposed to classify COVID-19 patients into four severity classes (mild, moderate, severe, and critical) with an accuracy of 96 percent. Empirical results on a suitably large number of chest X-ray scans demonstrate the efficiency of the DCNN model developed with the proposed methodology. To the best of the authors' knowledge, this is the first four-stage COVID-19 severity-assessment study to use a reasonably large X-ray image dataset together with a DCNN whose hyperparameters are almost all tuned dynamically by a variable-selection optimization task.
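The severity grading described in the abstract can be illustrated with a minimal sketch: given the four-way softmax output of such a DCNN, the predicted class is the arg-max over the severity labels. The class names follow the paper's tables; the probabilities and function name below are illustrative placeholders, not output of the actual model.

```python
# Map a DCNN's four-way softmax output to a severity label.
# The probability values below are illustrative, not real model output.

SEVERITY_CLASSES = ["mild", "moderate", "severe", "critical"]

def grade_severity(probabilities):
    """Return the severity label with the highest predicted probability."""
    if len(probabilities) != len(SEVERITY_CLASSES):
        raise ValueError("expected one probability per severity class")
    best = max(range(len(probabilities)), key=probabilities.__getitem__)
    return SEVERITY_CLASSES[best]

print(grade_severity([0.05, 0.15, 0.70, 0.10]))  # severe
```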
Year: 2022 PMID: 36051480 PMCID: PMC9427302 DOI: 10.1155/2022/1289221
Source DB: PubMed Journal: Biomed Res Int Impact factor: 3.246
Figure 1: Design model of the CNN.
Figure 2: Flow chart of the DCNN.
Figure 3: Prediction design.
Figure 4: Classification based on parameters.
Proposed deep CNN.
| Layer | Type of layer | Output shape | Trainable parameters | Parameter count |
|---|---|---|---|---|
| Input, 226 × 226 × 3 | I/P | 226 × 226 × 3 | ∗ | — |
| 128 filters of 5 × 5 × 3 | Convolution | 58 × 58 × 126 | Weights and biases | 13,954 |
| Rectified linear unit | ReLU | 58 × 58 × 126 | ∗ | — |
| Cross-channel normalization | Normalization | 58 × 58 × 126 | ∗ | — |
| Maximum pooling | Max pooling | 26 × 26 × 126 | ∗ | — |
| Fully connected | FC | 1 × 1 × 516 | Weights | 3,146,245 |
| 35% dropout | Dropout | 1 × 1 × 514 | ∗ | — |
| Fully connected | FC | 1 × 1 × 6 | Weights | 2054 |
| Softmax | Softmax | 1 × 1 × 6 | ∗ | — |
| Output | Classification | ∗ | ∗ | — |
Dataset used for the proposed deep CNN.
| Specific group | Number of images |
|---|---|
| Severe | 715 |
| Critical | 550 |
| Moderate | 940 |
| Mild | 845 |

Overall: 3050 images, split into 1850 training, 600 validation, and 600 testing images.
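The per-group counts and the train/validation/test split above can be cross-checked arithmetically, since both partitions of the dataset should sum to the same overall total:

```python
# Cross-check the dataset table: per-class counts vs. the train/val/test split.
per_class = {"severe": 715, "critical": 550, "moderate": 940, "mild": 845}
split = {"training": 1850, "validation": 600, "testing": 600}

total_by_class = sum(per_class.values())
total_by_split = sum(split.values())

print(total_by_class, total_by_split)  # 3050 3050
assert total_by_class == total_by_split == 3050
```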
COVID-19 avoidance framework.
| Serial no. | Respiratory hygiene | Physical contact | Illness | Avoidance category |
|---|---|---|---|---|
| 1 | Uninfected | Uninfected | Uninfected | Safe |
| 2 | Uninfected | Infected | Infected | Keep space |
| 3 | Infected | Uninfected | Infected | High |
| 4 | Uninfected | Uninfected | Infected | Keep space |
| 5 | Infected | Infected | Uninfected | High |
| 6 | Uninfected | Uninfected | Uninfected | Safe |
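The avoidance table above can be read as a simple rule: an infected respiratory-hygiene indicator maps to the High category, any other infected indicator maps to Keep space, and all-uninfected maps to Safe. A minimal sketch of that reading (the function name and boolean encoding are illustrative, not from the paper):

```python
# Rule-of-thumb reading of the COVID-19 avoidance framework table.
# Booleans: True = the indicator is infected, False = uninfected.
def avoidance_category(respiratory_hygiene, physical_contact, illness):
    if respiratory_hygiene:
        return "High"        # matches rows 3 and 5 of the table
    if physical_contact or illness:
        return "Keep space"  # matches rows 2 and 4
    return "Safe"            # matches rows 1 and 6

print(avoidance_category(False, True, True))  # Keep space
```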
Figure 5: Classification based on image in the deep convolutional neural network.
Figure 6: Detection rate vs. false-positive rate.
Performance metrics based on categorization.
| Method | Category | TP | TN | FP | FN | Sensitivity | Precision | Accuracy | Specificity | Overall |
|---|---|---|---|---|---|---|---|---|---|---|
| Proposed framework | Critical | 115 | 530 | 2 | 2 | 0.96 | 0.96 | 98% | 0.99 | 122 |
| | Severe | 140 | 506 | 4 | 5 | 0.97 | 0.97 | 99% | 0.99 | 140 |
| | Moderate | 180 | 446 | 15 | 10 | 0.95 | 0.92 | 96% | 0.97 | 185 |
| | Mild | 187 | 444 | 9 | 13 | 0.93 | 0.94 | 96% | 0.97 | 205 |
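Reading the four count columns of the table as TP, TN, FP, and FN (an inference from the layout, since the exported header was garbled), the reported metrics can be reproduced from the standard definitions. Using the Severe row as an example:

```python
# Recompute the Severe row's metrics from its confusion-matrix counts.
tp, tn, fp, fn = 140, 506, 4, 5

sensitivity = tp / (tp + fn)                   # also called recall
precision = tp / (tp + fp)
accuracy = (tp + tn) / (tp + tn + fp + fn)
specificity = tn / (tn + fp)

print(round(sensitivity, 2), round(precision, 2),
      round(accuracy, 2), round(specificity, 2))  # 0.97 0.97 0.99 0.99
```

The rounded values match the table's Severe row (0.97, 0.97, 99%, 0.99), which supports the TP/TN/FP/FN reading of the count columns.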
Outcome measures for each fold.
| Performance | Fold 1 | Fold 2 | Fold 3 | Fold 4 | Fold 5 | Average |
|---|---|---|---|---|---|---|
| Specificity | 98.4 | 99.1 | 98.8 | 98.0 | 97.9 | 98 |
| Sensitivity | 95.7 | 96.8 | 94.8 | 97.5 | 95.5 | 96 |
| Precision | 95.5 | 99.8 | 98.8 | 98.0 | 97.7 | 98 |
| Accuracy | 95.4 | 96.0 | 94.7 | 96.9 | 94.6 | 95 |
| AUC | 0.997 | 0.991 | 0.978 | 0.994 | 0.997 | 0.987 |
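The last column of the fold table appears to be the rounded mean of the five folds; for sensitivity, for example, the per-fold values average to 96:

```python
# Check the fold-averaged sensitivity against the table's final column.
sensitivity_per_fold = [95.7, 96.8, 94.8, 97.5, 95.5]

mean_sensitivity = sum(sensitivity_per_fold) / len(sensitivity_per_fold)
print(round(mean_sensitivity))  # 96
```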
Figure 7: Performance metrics of the proposed method for various categories.
Figure 8: Outcome measures for each fold.
Training and validation loss and accuracy.
| Training loss | Training accuracy | Validation loss | Validation accuracy |
|---|---|---|---|
| 0.2866 | 0.8918 | 0.2326 | 0.9115 |
| 0.2698 | 0.9002 | 0.2235 | 0.9201 |
| 0.2552 | 0.9032 | 0.2195 | 0.9175 |
| 0.2618 | 0.9052 | 0.2218 | 0.9230 |
| 0.2368 | 0.9048 | 0.2028 | 0.9224 |
Figure 9: Training and validation loss and accuracy.
Comparison of different methods.
| Metric | ResNet-101 | ResNet-36 | AlexNet | VGG | Deep convolutional network |
|---|---|---|---|---|---|
| Accuracy | 80.01 | 82.86 | 76.75 | 89.35 | 95.55 |
| AUC | 0.8110 | 0.8912 | 0.750 | 0.8745 | 0.9875 |
| Elapsed time | 1075 | 880 | 1290 | 1708 | 725 |
Figure 10: Comparison of different methods.