| Literature DB >> 35582662 |
Qinhua Hu1, Francisco Nauber B Gois2, Rafael Costa3, Lijuan Zhang4, Ling Yin5, Naercio Magaia6,7, Victor Hugo C de Albuquerque8,9.
The COVID-19 pandemic continues to wreak havoc on the world's population's health and well-being. Successful screening of infected patients is a critical step in the fight against it, with radiology examination using chest radiography being one of the most important screening methods. For the definitive diagnosis of COVID-19 disease, reverse-transcriptase polymerase chain reaction remains the gold standard. Currently available lab tests may not be able to detect all infected individuals; new screening methods are required. We propose a Multi-Input Transfer Learning COVID-Net fuzzy convolutional neural network to detect COVID-19 instances from torso X-ray, motivated by the latter and the open-source efforts in this research area. Furthermore, we use an explainability method to investigate several Convolutional Networks COVID-Net forecasts in an effort to not only gain deeper insights into critical factors associated with COVID-19 instances, but also to aid clinicians in improving screening. We show that using transfer learning and pre-trained models, we can detect it with a high degree of accuracy. Using X-ray images, we chose four neural networks to predict its probability. Finally, in order to achieve better results, we considered various methods to verify the techniques proposed here. As a result, we were able to create a model with an AUC of 1.0 and accuracy, precision, and recall of 0.97. The model was quantized for use in Internet of Things devices and maintained a 0.95 percent accuracy.Entities:
Keywords: COVID-19; Internet of Things; Multi-input convolutional network; Soft computing; X-ray; XAI
Year: 2022 PMID: 35582662 PMCID: PMC9102011 DOI: 10.1016/j.asoc.2022.108966
Source DB: PubMed Journal: Appl Soft Comput ISSN: 1568-4946 Impact factor: 8.263
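The abstract reports that the final model was quantized for Internet of Things deployment; this record does not say which tool was used (framework utilities such as TensorFlow Lite's post-training quantization are typical). The core idea, mapping float32 weights to int8 with a per-tensor scale and zero point, can be sketched as follows; this is an illustrative NumPy sketch, not the authors' pipeline:

```python
import numpy as np

def quantize_int8(w):
    """Affine (asymmetric) post-training quantization of a float tensor to int8."""
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    zero_point = np.round(-lo / scale) - 128  # maps lo to roughly -128
    q = np.clip(np.round(w / scale + zero_point), -128, 127).astype(np.int8)
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float32 values from int8 codes."""
    return (q.astype(np.float32) - zero_point) * scale

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 4)).astype(np.float32)
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
max_err = np.abs(w - w_hat).max()  # bounded by about one quantization step
```

Weights stored this way take a quarter of the float32 memory, at the cost of a reconstruction error bounded by roughly one quantization step, which is why quantized models typically lose a little accuracy (here, 0.97 down to 0.95).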
Fig. 1 The second phase of the proposed method.
Fig. 2 Image samples of the chosen datasets.
Fig. 3 X-ray samples with fuzzy filter.
Fig. 4 Description of the dataset used in this study.
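Fig. 3 shows X-rays after the fuzzy filter, but this record does not specify which filter the authors applied. A common choice in fuzzy image enhancement is Zadeh's intensification (INT) operator: fuzzify pixel intensities into [0, 1] memberships, sharpen the memberships, then defuzzify. A minimal sketch, assuming that style of filter:

```python
import numpy as np

def fuzzy_enhance(img):
    """Fuzzy contrast enhancement with the intensification (INT) operator."""
    img = img.astype(np.float32)
    lo, hi = img.min(), img.max()
    mu = (img - lo) / (hi - lo + 1e-8)                                 # fuzzification
    mu = np.where(mu <= 0.5, 2.0 * mu**2, 1.0 - 2.0 * (1.0 - mu)**2)  # INT operator
    return mu * (hi - lo) + lo                                         # defuzzification

x = np.array([[0.0, 64.0, 128.0, 192.0, 255.0]])
y = fuzzy_enhance(x)  # pixels below the midpoint get darker, above it brighter
```

The effect is to increase contrast around the mid-gray level, which can make lung opacities stand out more clearly before the images reach the network.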
Results of the single-input classification experiment with K-Means clustering.
| Model | AUC | Accuracy | Precision | Recall |
|---|---|---|---|---|
| VGG16 | 0.861497 | 0.763889 | 0.711111 | 0.888889 |
| ResNet152V2 | 0.700231 | 0.527778 | 0.517241 | 0.833333 |
| InceptionV3 | 0.870370 | 0.500000 | 0.500000 | 1.000000 |
| EfficientNetB3 | 0.756944 | 0.680556 | 1.000000 | 0.361111 |
Results of the single-input classification experiment with fuzzy filter.
| Model | AUC | Accuracy | Precision | Recall |
|---|---|---|---|---|
| VGG16 | 0.900463 | 0.597222 | 1.000000 | 0.194444 |
| ResNet152V2 | 0.951389 | 0.777778 | 0.692308 | 1.000000 |
| InceptionV3 | 0.912037 | 0.875000 | 0.829268 | 0.944444 |
| EfficientNetB3 | 0.992670 | 0.736111 | 1.000000 | 0.472222 |
Results of the single-input classification experiment without fuzzy filter.
| Model | AUC | Accuracy | Precision | Recall |
|---|---|---|---|---|
| VGG16 | 0.989197 | 0.972222 | 0.972222 | 0.972222 |
| ResNet152V2 | 0.611111 | 0.611111 | 0.562500 | 1.000000 |
| InceptionV3 | 0.852623 | 0.847222 | 0.765957 | 1.000000 |
| EfficientNetB3 | 0.977238 | 0.916667 | 0.941176 | 0.888889 |
Fig. 5 Loss and AUC by epoch results.
Results of the multi-input classification experiment without fuzzy filter.
| Model1 | Model2 | AUC | Acc | Prec. | Recall |
|---|---|---|---|---|---|
| VGG16 | ResNet152V2 | 0.99 | 0.94 | 0.97 | 0.92 |
| VGG16 | InceptionV3 | 0.98 | 0.94 | 0.97 | 0.92 |
| VGG16 | EfficientNetB3 | 0.97 | 0.93 | 0.92 | 0.94 |
| ResNet152V2 | VGG16 | 0.99 | 0.94 | 0.97 | 0.92 |
| ResNet152V2 | InceptionV3 | 1.00 | 0.97 | 0.95 | 1.00 |
| ResNet152V2 | EfficientNetB3 | 0.99 | 0.88 | 0.80 | 1.00 |
| InceptionV3 | VGG16 | 0.98 | 0.94 | 0.97 | 0.92 |
| InceptionV3 | ResNet152V2 | 1.00 | 0.97 | 0.95 | 1.00 |
| InceptionV3 | EfficientNetB3 | 0.99 | 0.96 | 0.97 | 0.94 |
| EfficientNetB3 | VGG16 | 0.97 | 0.93 | 0.92 | 0.94 |
| EfficientNetB3 | ResNet152V2 | 0.99 | 0.88 | 0.80 | 1.00 |
| EfficientNetB3 | InceptionV3 | 0.99 | 0.96 | 0.97 | 0.94 |
Results of the multi-input classification experiment with fuzzy filter.
| Model1 | Model2 | AUC | Acc | Prec. | Recall |
|---|---|---|---|---|---|
| VGG16 | ResNet152V2 | 1.00 | 0.94 | 1.00 | 0.89 |
| VGG16 | InceptionV3 | 1.00 | 0.97 | 0.97 | 0.97 |
| VGG16 | EfficientNetB3 | 0.88 | 0.72 | 0.64 | 1.00 |
| ResNet152V2 | VGG16 | 1.00 | 0.94 | 1.00 | 0.89 |
| ResNet152V2 | InceptionV3 | 0.99 | 0.96 | 0.92 | 1.00 |
| ResNet152V2 | EfficientNetB3 | 0.98 | 0.92 | 0.94 | 0.89 |
| InceptionV3 | VGG16 | 1.00 | 0.97 | 0.97 | 0.97 |
| InceptionV3 | ResNet152V2 | 0.99 | 0.96 | 0.92 | 1.00 |
| InceptionV3 | EfficientNetB3 | 0.96 | 0.94 | 0.97 | 0.92 |
| EfficientNetB3 | VGG16 | 0.88 | 0.72 | 0.64 | 1.00 |
| EfficientNetB3 | ResNet152V2 | 0.98 | 0.92 | 0.94 | 0.89 |
| EfficientNetB3 | InceptionV3 | 0.96 | 0.94 | 0.97 | 0.92 |
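The multi-input models above pair two backbones. The record does not detail the fusion mechanism, but a common design is to run each backbone as a feature extractor, concatenate the two feature vectors, and feed the result to a shared classification head. A toy NumPy sketch of that idea, with random stand-in weights rather than the trained networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for two pretrained feature extractors (e.g. a ResNet152V2 branch
# and a VGG16 branch): fixed random projections of a flattened input.
W_a = rng.normal(size=(64, 16))
W_b = rng.normal(size=(64, 16))
W_head = rng.normal(size=(32, 1))  # head sees the concatenated features

def predict(x):
    feat_a = np.tanh(x @ W_a)                         # branch 1 features
    feat_b = np.tanh(x @ W_b)                         # branch 2 features
    fused = np.concatenate([feat_a, feat_b], axis=1)  # feature fusion
    return 1.0 / (1.0 + np.exp(-(fused @ W_head)))    # class probability

x = rng.normal(size=(8, 64))  # batch of 8 flattened inputs
p = predict(x)                # shape (8, 1), values in (0, 1)
```

Because the two branches capture different representations of the same image, the fused feature vector can outperform either branch alone, which is consistent with the multi-input results above beating the single-input ones.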
Comparative results between using images with and without the fuzzy filter.
| Filter | Model | AUC | Accuracy |
|---|---|---|---|
| Fuzzy | ResNet152V2 | 0.951389 | 0.777778 |
| Fuzzy | InceptionV3 | 0.912037 | 0.875000 |
| Fuzzy | EfficientNetB3 | 0.992670 | 0.736111 |
| No fuzzy | ResNet152V2 | 0.611111 | 0.611111 |
| No fuzzy | InceptionV3 | 0.852623 | 0.847222 |
| No fuzzy | EfficientNetB3 | 0.977238 | 0.916667 |
Comparative results of the multi-input and single-input classification experiments with fuzzy filter (mean values by network).
| Input | Model | AUC | Acc | Precision | Recall |
|---|---|---|---|---|---|
| Multi-input | VGG16 | 0.97 | 0.90 | 0.90 | 0.96 |
| Multi-input | ResNet152V2 | 0.99 | 0.94 | 0.95 | 0.94 |
| Multi-input | InceptionV3 | 0.99 | 0.96 | 0.96 | 0.97 |
| Multi-input | EfficientNetB3 | 0.95 | 0.88 | 0.88 | 0.93 |
| Single input | VGG16 | 0.90 | 0.60 | 1.00 | 0.19 |
| Single input | ResNet152V2 | 0.95 | 0.78 | 0.69 | 1.00 |
| Single input | InceptionV3 | 0.91 | 0.88 | 0.83 | 0.94 |
| Single input | EfficientNetB3 | 0.99 | 0.74 | 1.00 | 0.47 |
F1-score of the ResNet152V2-VGG16 model.
| | Precision | Recall | F1-score | Support |
|---|---|---|---|---|
| Class 0 | 1.00 | 0.94 | 0.97 | 36 |
| Class 1 | 0.95 | 1.00 | 0.97 | 36 |
| Accuracy | | | 0.97 | 72 |
| Macro avg | 0.97 | 0.97 | 0.97 | 72 |
| Weighted avg | 0.97 | 0.97 | 0.97 | 72 |
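The report above is the standard per-class precision/recall/F1 summary. Its values are consistent with a balanced 72-image test set in which every positive is caught and 2 of 36 negatives are misflagged; the confusion counts below are a hypothetical reconstruction that reproduces the table, not the authors' published counts:

```python
def prf(tp, fp, fn):
    """Precision, recall and F1 from confusion counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Class 1: 36 true positives, 2 false positives, no misses.
p1, r1, f1_1 = prf(tp=36, fp=2, fn=0)
# Class 0: 34 correctly kept, 2 lost to class 1.
p0, r0, f1_0 = prf(tp=34, fp=0, fn=2)
accuracy = (36 + 34) / 72
```

Rounded to two decimals this gives exactly the tabulated 1.00/0.94 and 0.95/1.00 precision/recall pairs, F1 of 0.97 for both classes, and overall accuracy of 0.97.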
Fig. 6 Loss and AUC by epoch results.
Fig. 7 Class Activation Map of a SARS sample X-ray.