Hamidreza Hosseinzadeh — Department of Electrical Engineering, North Tehran Branch, Islamic Azad University, Tehran, Iran.
Abstract
AIM: COVID-19 is a pandemic infectious disease that has affected the life and health of many communities since December 2019. The rapid worldwide spread of this highly contagious disease makes early, accurate detection important for breaking the chain of transmission. X-ray images of COVID-19 patients reveal specific abnormalities associated with the disease. METHODS: In this study, a multi-view feature learning method for detecting COVID-19 from chest X-ray images is presented. The method provides a framework for exploiting multiple types of deep features that preserves both the correlative and the complementary information among views and achieves accurate detection at the classification phase. Deep features are extracted using the pre-trained deep CNN models AlexNet, GoogleNet, ResNet50, SqueezeNet, and VGG19. The learned feature representation of the X-ray images is then classified using an extreme learning machine (ELM). RESULTS: The experiments show that our method achieves accuracy scores of 100%, 99.82%, and 99.82% in detecting the three classes of COVID-19, normal, and pneumonia, respectively. The sensitivities of the three classes are 100%, 100%, and 99.45%; the specificities are 100%, 99.73%, and 100%; the precision values are 100%, 99.45%, and 100%; and the F-scores are 100%, 99.73%, and 99.72%, respectively. The overall accuracy score of our method is 99.82%. CONCLUSIONS: The results demonstrate the effectiveness of our method in detecting COVID-19 cases, so it can assist experts in early diagnosis based on X-ray images.
COVID-19 is an infectious respiratory disease that has rapidly spread across the world, infecting millions of people and causing hundreds of thousands of deaths [1], [2], [3], [4]. COVID-19 has been declared a pandemic by the World Health Organization (WHO) [5], [6]. One of the important steps in preventing transmission of COVID-19 to the healthy population is to effectively screen infected patients so that they can be isolated and treated. Currently, reverse transcription-polymerase chain reaction (RT-PCR) on samples taken from the respiratory tract is the main screening method for COVID-19 [7]. However, it is a costly and time-consuming detection method. Detection tests that give results within minutes are insensitive and occasionally yield false negatives. Moreover, RT-PCR test kits are limited in number. Therefore, the development of a fast, low-cost, and reliable method for automatic detection of COVID-19 is essential. As an alternative to RT-PCR testing, artificial intelligence-based automated detection of COVID-19 from radiological imaging of patients can be used [8], [9], [10]. Radiographic imaging of the chest, such as computed tomography (CT) and X-ray, is helpful for early COVID-19 detection. In this study, X-ray is preferred over CT due to its cost-effectiveness, low ionizing radiation exposure to patients, and the widespread availability of X-ray machines in almost every hospital.

Ismael and Sengür [11] presented three deep convolutional neural network (CNN) approaches, including extraction of deep features, fine-tuned pre-trained CNN models, and an end-to-end trained CNN model, to detect COVID-19 cases. In the first approach, an SVM classifier with various kernel functions was used to classify the deep features. Uçar et al. [12] presented a deep learning method in which deep features extracted from X-ray images in different color spaces were applied to a bidirectional LSTM network to classify the data as COVID-19 or pneumonia.
In [13], features of the convolutional and fully connected layers of the CNN-based AlexNet model were extracted and combined. The important features were selected by the Relief algorithm and then fed into an SVM classifier to detect COVID-19 cases. In [14], a CNN model based on class decomposition and transfer learning was developed to detect COVID-19 cases. Khan et al. [15] presented an Xception-based deep learning architecture named CoroNet for the detection of COVID-19 cases. In [16], deep features were extracted from the MobileNetV2 and SqueezeNet models, and the resulting feature sets were processed with social mimic optimization (SMO) for feature selection and combination. Finally, the combined feature set was fed into an SVM classifier to detect COVID-19 cases. Elkorany and Elsharkawy [17] presented a COVID-19 detection method in which the ShuffleNet and SqueezeNet models were employed as feature extractors and an SVM was used as the classifier. Ashour et al. [18] presented an ensemble-based bag-of-features (BoF) model to detect COVID-19 and normal cases. They utilized the grid method for keypoint determination and the SURF descriptor for feature vector extraction. In [19], CNN-based transfer learning approaches were used to detect COVID-19 cases; an overall accuracy score of 94.72% was achieved using MobileNetV2. Canayaz [20] used two meta-heuristic algorithms, binary particle swarm optimization (BPSO) and binary gray wolf optimization (BGWO), to select efficient features among those extracted from deep CNN models. The selected features were then applied to an SVM to classify the data as COVID-19, normal, and pneumonia cases. An image contrast enhancement algorithm was also used to preprocess the image set. Wang et al. [21] presented an FGCNet model to fuse the features extracted from a graph convolutional network (GCN) and a CNN for detecting COVID-19 based on CT images.
In [22], deep features were extracted from CT images with different pre-trained models, and then the best two features were fused using discriminant correlation analysis. Hasoon et al. [23] presented a method in which LBP, HOG, and Haralick features were fed into KNN and SVM classifiers to detect COVID-19 cases. Al-Waisy et al. [24] presented a COVID-DeepNet model for detecting COVID-19 cases, which fuses the predictions of two deep learning models to make the final decision. Another fusion of deep learning models, COVID-CheXNet, was presented in [25]. In [26], a comprehensive investigation of detecting COVID-19 cases using machine learning and convolutional deep learning models was presented. In [27], a feature-fusion based approach was presented to sort COVID-19 related medical waste.

Most of the previous studies discussed above use a single pre-trained CNN model. The features extracted from different pre-trained CNN models often describe information about the same image from different views. Therefore, we propose a multi-view feature learning framework to exploit useful information from different views, so that a more comprehensive feature representation may be learned for the diagnosis of COVID-19. The main contributions of this study are as follows:

- A deep multi-view feature learning method for detecting COVID-19 based on X-ray images is presented.
- Deep features are extracted using the pre-trained deep CNN models AlexNet, GoogleNet, ResNet50, SqueezeNet, and VGG19.
- The method transforms the multi-view feature space into a feature space where both the complementary and the correlative information of the different views are preserved.
- Accurate classification of the feature representation of images into COVID-19, normal, and pneumonia cases is performed using the ELM classifier.

The organization of the paper is as follows. Section 2 describes the dataset and introduces the proposed method used for COVID-19 detection.
Section 3 presents the experimental analysis and discussion. Finally, Section 4 provides the conclusion and comments on further extensions of this method.
Materials and methods
Dataset
The dataset used here consists of COVID-19, normal, and pneumonia chest X-ray images [20]. This dataset is a collection of 1092 images in three classes, created from publicly available chest X-ray datasets [28], [29], [30]. The number of images in each class is 364. Fig. 1 shows sample X-ray images from the dataset with their corresponding marked lesions.
Fig. 1
Samples of (a) COVID-19, (b) normal, and (c) pneumonia chest X-ray images with marked lesions.
Deep feature extraction
Deep learning using CNNs has achieved great success in diagnosing disease through medical image analysis [31], [32]. The diagnosis of COVID-19 using CNNs has become a popular research direction due to the powerful features they extract. In medical image classification for rare or emerging diseases, enough labeled images are not available to train a CNN model from scratch. In such cases, ImageNet pre-trained CNN models can be used [33], [34], [35], [36], [37].

In this study, pre-trained CNN models are used as feature extractors to construct the image feature space. The features extracted from different types of pre-trained CNN models generally characterize different descriptions of the images. Therefore, we feed the images into different pre-trained CNN models to construct a multi-view feature space for each image, considering the features extracted from each model as one view. The features of all views are then concatenated and fed into the presented multi-view feature learning method to obtain more comprehensive features.
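As a minimal sketch of this multi-view construction step, the snippet below uses random matrices to stand in for the per-view deep features (which in the paper come from the five pre-trained CNNs); the view names, dimensions, and the per-view standardization are illustrative assumptions, not something the paper specifies:

```python
import numpy as np

rng = np.random.default_rng(0)
n_images = 6  # toy number of X-ray images

# Stand-ins for the per-view deep features: in practice, each row would hold
# the activations of the chosen layer (e.g. "fc8") for one image.
view_dims = {"alexnet": 1000, "vgg19": 1000, "squeezenet": 1000}
views = {name: rng.standard_normal((n_images, d)) for name, d in view_dims.items()}

# Standardize each view so that no single view dominates by scale alone
# (a common practical step, assumed here).
views = {name: (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-8)
         for name, F in views.items()}

# Concatenate the views feature-wise to form the multi-view representation.
multi_view = np.hstack([views[name] for name in view_dims])
print(multi_view.shape)  # (6, 3000)
```

Each image is then represented by one row of `multi_view`, which is what the multi-view feature learning stage consumes.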
Given a training data matrix X = [x_1, x_2, …, x_n] ∈ R^{d×n} from c classes, where d is the feature dimension and n is the number of samples. Y = [y_1, y_2, …, y_n]^T ∈ R^{n×c} is the corresponding label matrix of X, where each y_i is the label vector in which only the element associated with the assigned class is 1 and all the others are −1. The optimization problem of least squares regression (LSR) can be written as follows:

min_{W,b} ||X^T W + 1 b^T − Y||_F^2 + λ ||W||_{2,1}    (1)

where W ∈ R^{d×c} is a transformation matrix, 1 ∈ R^n is an all-one vector, b ∈ R^c is an intercept vector, and λ > 0 is a trade-off parameter.

Considering the label matrix Y as the regression target is too strict and inappropriate for classification, since it leads to wrong penalization of correct classifications that are far from Y [38], [39]. In order to solve this issue, the regression target is reformulated as R = Y + B ⊙ M, where M ≥ 0 is a non-negative adjustment matrix, B is a constant matrix with B_{ij} = +1 if Y_{ij} = 1 and B_{ij} = −1 otherwise, and ⊙ is the Hadamard (element-wise) product operator. This strategy can enlarge the distance between the true and the false classes. The optimization problem (1) can be rewritten as follows:

min_{W,b,M} ||X^T W + 1 b^T − (Y + B ⊙ M)||_F^2 + λ ||W||_{2,1},  s.t. M ≥ 0    (2)

The optimization problem (2) can be extended to the multi-view scenario, where different views of the data are reflected by different types of features. Multi-view learning provides a mechanism for exploiting multiple types of features [40], [41], [42]. Let X_v ∈ R^{d_v×n} denote the data matrix from the v-th view (v = 1, 2, …, m), where d_v is the feature space dimension for the v-th view. The optimization problem for the multi-view learning scenario is expressed as follows:

min_{{W_v},b,M,{θ_v}} ||Σ_{v=1}^m θ_v X_v^T W_v + 1 b^T − (Y + B ⊙ M)||_F^2 + λ Σ_{v=1}^m ||θ_v W_v||_{2,1},  s.t. M ≥ 0    (3)

where {θ_v} is a set of positive weight parameters. For simplicity, the weight parameter is merged into the transformation matrix as W̃_v = θ_v W_v. In addition, the X_v and W̃_v of all views are collected into X = [X_1; X_2; …; X_m] ∈ R^{d×n} and W = [W̃_1; W̃_2; …; W̃_m] ∈ R^{d×c}, respectively, where d = Σ_{v=1}^m d_v is the dimension of features across all views. The optimization problem (3) can then be rewritten as follows:

min_{W,b,M} ||X^T W + 1 b^T − (Y + B ⊙ M)||_F^2 + λ ||W||_{2,1},  s.t. M ≥ 0    (4)

The optimization problem (4) is convex in each block of variables, which implies that an optimal solution for the parameters W, b, M, and θ exists. Therefore, an effective procedure is developed to iteratively update the parameters W, b, M, and θ to obtain the optimal solution [39]. The algorithm is detailed as follows.

Updating the parameters W and b during the t-th iteration: With the parameters M and θ fixed, the optimization problem (4) can be rewritten as

min_{W,b} ||X^T W + 1 b^T − R||_F^2 + λ tr(W^T D^{(t)} W)    (5)

where R = Y + B ⊙ M^{(t−1)} and D^{(t)} is the diagonal reweighting matrix defined below. The optimal W and b during the t-th iteration can be derived by setting the partial derivatives of the objective function to zero.
The optimal solution with respect to W is

W^{(t)} = (X H X^T + λ D^{(t)})^{−1} X H R    (6)

Also, the optimal solution with respect to b is

b^{(t)} = (1/n) (R − X^T W^{(t)})^T 1    (7)

where D^{(t)} is a diagonal matrix in which d_{ii}^{(t)} = 1 / (2 ||w^i||_2), computed from the i-th row w^i of W at the previous iteration, indicates the reciprocal weight of the i-th feature during the t-th iteration, and H = I − (1/n) 1 1^T, in which I is an identity matrix and 1 1^T is an all-one matrix.

Updating the adjustment matrix M during the t-th iteration: When the parameters W, b, and θ are fixed, the optimization problem (4) can be rewritten as

min_{M ≥ 0} ||P − Y − B ⊙ M||_F^2    (8)

The optimal solution with respect to M is

M^{(t)} = max(B ⊙ (P − Y), 0)    (9)

where P = X^T W^{(t)} + 1 (b^{(t)})^T denotes the predicted labels and max(·, 0) computes the element-wise positive part; for entries dragged in the correct direction this equals the element-wise absolute value of P − Y.

Updating the adaptive weight parameter θ_v during the t-th iteration: With the parameters W, b, and M fixed, the optimization problem (4) can be rewritten as

min_{θ_v > 0} Σ_{v=1}^m θ_v L_v^{(t)}    (10)

where v is the view index and L_v^{(t)} = ||X_v^T W_v^{(t)} + 1 (b^{(t)})^T − R||_F^2 + λ ||W_v^{(t)}||_{2,1} is the objective value contributed by the v-th view. The optimal solution with respect to θ_v is

θ_v^{(t)} = 1 / (2 √(L_v^{(t)}))    (11)

The obtained weight parameters are then used to calculate the new weighted transformation matrices W̃_v = θ_v^{(t)} W_v in the next iteration. The algorithm is terminated when |J^{(t)} − J^{(t−1)}| / J^{(t−1)} ≤ δ, where J^{(t)} is the overall objective value of (4) at iteration t and δ is the termination parameter.

Once the weighted transformation matrix W and the intercept vector b have been learned, the new representation of the training dataset can be obtained as

Z = X^T W + 1 b^T    (12)

which is used to train a classifier. In the testing phase, the weighted transformation matrix W and intercept vector b are used to compute the new representation of the test dataset, and the trained classifier is then used to classify it. In this algorithm, COVID-19 detection is performed based on multi-view deep feature learning, so we name it MV-COVIDet. In this study, we use the extreme learning machine (ELM) [43] as the classifier due to its fast learning procedure and remarkable generalization performance.
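A compact NumPy sketch of one plausible implementation of this alternating procedure is given below. The retargeting matrix, the l2,1-style reweighting, the ridge initialization, and the stopping rule are standard retargeted least-squares choices assumed here, not the paper's exact code, and `mv_covidet_fit` is an illustrative name:

```python
import numpy as np

def mv_covidet_fit(X, Y, lam=0.1, max_iter=50, tol=1e-4):
    """Sketch of the alternating updates: retargeted least-squares regression
    with an l2,1-style reweighting. X is (d, n) concatenated multi-view
    features (one column per sample); Y is (n, c) with +1 for the true class
    and -1 elsewhere. Details are standard choices, not the paper's code."""
    d, n = X.shape
    B = np.where(Y > 0, 1.0, -1.0)             # dragging directions
    M = np.zeros_like(Y)                       # non-negative adjustment matrix
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    # Ridge initialization for the transformation matrix W.
    W = np.linalg.solve(X @ H @ X.T + lam * np.eye(d), X @ H @ Y)
    b = (Y - X.T @ W).T @ np.ones(n) / n
    prev_obj = np.inf
    for _ in range(max_iter):
        R = Y + B * M                          # retargeted regression target
        D = np.diag(1.0 / (2.0 * np.linalg.norm(W, axis=1) + 1e-8))
        W = np.linalg.solve(X @ H @ X.T + lam * D, X @ H @ R)
        b = (R - X.T @ W).T @ np.ones(n) / n
        P = X.T @ W + np.outer(np.ones(n), b)  # predicted labels
        M = np.maximum(B * (P - Y), 0.0)       # closed-form non-negative update
        obj = np.linalg.norm(P - (Y + B * M)) ** 2 \
              + lam * np.linalg.norm(W, axis=1).sum()
        if abs(prev_obj - obj) <= tol:
            break
        prev_obj = obj
    return W, b

# Toy data: 3 well-separated classes, 8 samples each, 5 features.
rng = np.random.default_rng(1)
labels = np.repeat([0, 1, 2], 8)
X = rng.standard_normal((5, labels.size)) * 0.1
X[labels, np.arange(labels.size)] += 3.0       # class-indicative feature
Y = np.where(np.eye(3)[labels] > 0, 1.0, -1.0)
W, b = mv_covidet_fit(X, Y)
pred = (X.T @ W + b).argmax(axis=1)
print((pred == labels).mean())                 # training accuracy on the toy data
```

After fitting, the learned representation `X.T @ W + b` would be handed to a downstream classifier such as the ELM, as described in the text.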
Experiments
We first introduce the experimental setup and then perform experiments on the X-ray images data to assess the effectiveness of the MV-COVIDet method in detecting COVID-19, normal, and pneumonia cases. Finally, the results of our method are compared with previous studies.
Experimental setup
Five types of pre-trained models are considered to extract deep learning based features (i.e., m = 5): AlexNet, GoogleNet, ResNet50, SqueezeNet, and VGG19. Deep features are extracted from the fully connected layer "fc8" in the AlexNet and VGG19 models, the fully connected layer "loss3-classifier" in the GoogleNet model, the fully connected layer "fc1000" in the ResNet50 model, and the pooling layer "pool10" in the SqueezeNet model. The input image size for all models is 224 × 224, except for the AlexNet and SqueezeNet models, whose input image size is 227 × 227. The training of each model is performed for 50 epochs with a mini-batch size of 64. The stochastic gradient descent (SGD) optimization algorithm is used as the solver.

In the experiments, the number of hidden neurons of the ELM is set empirically, and the sigmoid function is adopted as the activation function. The trade-off parameter λ is tuned using a grid search, and the termination parameter δ is set to a small value.

The 5-fold cross-validation technique is used to evaluate the performance of the MV-COVIDet method. The performance metrics used for the analysis of the experimental results are sensitivity (Se), specificity (Sp), precision (Pre), F-score, accuracy (Acc), and overall accuracy (Overall Acc). The metrics are formulated as follows:

Se = TP / (TP + FN)
Sp = TN / (TN + FP)
Pre = TP / (TP + FP)
F-score = 2 × Pre × Se / (Pre + Se)
Acc = (TP + TN) / (TP + TN + FP + FN)

Here, TP and TN are the numbers of correctly detected positive and negative images, while FP and FN are the numbers of misdetected ones.
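The ELM classifier and the per-class metrics are straightforward to sketch. The snippet below follows the standard ELM recipe (random, untrained input weights; sigmoid hidden layer; least-squares readout via the pseudoinverse); the toy data and function names are illustrative assumptions, not the paper's code:

```python
import numpy as np

def elm_train(X, Y, n_hidden=50, seed=0):
    """ELM: a random, untrained hidden layer plus a least-squares readout.
    X is (n, d) features; Y is (n, c) targets (+1 true class, -1 otherwise)."""
    rng = np.random.default_rng(seed)
    W_in = rng.uniform(-1, 1, (X.shape[1], n_hidden))  # random input weights
    bias = rng.uniform(-1, 1, n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + bias)))       # sigmoid hidden layer
    beta = np.linalg.pinv(H) @ Y                       # output weights (pseudoinverse)
    return W_in, bias, beta

def elm_predict(X, W_in, bias, beta):
    H = 1.0 / (1.0 + np.exp(-(X @ W_in + bias)))
    return (H @ beta).argmax(axis=1)

def class_metrics(y_true, y_pred, cls):
    """One-vs-rest Se, Sp, Pre, F-score, and Acc for a single class."""
    tp = np.sum((y_true == cls) & (y_pred == cls))
    tn = np.sum((y_true != cls) & (y_pred != cls))
    fp = np.sum((y_true != cls) & (y_pred == cls))
    fn = np.sum((y_true == cls) & (y_pred != cls))
    se, sp = tp / (tp + fn), tn / (tn + fp)
    pre = tp / (tp + fp)
    f = 2 * pre * se / (pre + se)
    acc = (tp + tn) / (tp + tn + fp + fn)
    return se, sp, pre, f, acc

# Toy 3-class problem standing in for the learned feature representations.
rng = np.random.default_rng(2)
labels = np.repeat([0, 1, 2], 20)
X = rng.standard_normal((labels.size, 4)) * 0.2
X[np.arange(labels.size), labels] += 2.0
Y = np.where(np.eye(3)[labels] > 0, 1.0, -1.0)
W_in, bias, beta = elm_train(X, Y)
pred = elm_predict(X, W_in, bias, beta)
se, sp, pre, f, acc = class_metrics(labels, pred, cls=0)
```

Because only the readout `beta` is fitted (in closed form), training is fast, which is the property motivating the use of the ELM here.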
Experimental results
In the first set of experiments, the deep features extracted from the AlexNet, VGG19, GoogleNet, ResNet50, and SqueezeNet models are individually applied as input to the MV-COVIDet method to assess its performance in single-view classification. In addition, the classification results of the ELM classifier applied to each extracted deep feature set are taken as the baseline. The experimental results are shown in Table 1 and Table 2.
Table 1
The performance metrics of the MV-COVIDet method.
Model
Classes
Se. (%)
Sp. (%)
Pre. (%)
F-score (%)
Acc. (%)
Overall Acc. (%)
AlexNet
COVID-19
99.45
99.73
99.45
99.45
99.63
98.81
Normal
99.45
98.90
97.84
98.64
99.08
Pneumonia
97.53
99.59
99.16
98.34
98.90
VGG19
COVID-19
99.18
99.86
99.72
99.45
99.63
98.99
Normal
99.45
99.18
98.37
98.91
99.27
Pneumonia
98.35
99.45
98.90
98.62
99.08
GoogleNet
COVID-19
98.90
99.59
99.17
99.04
99.36
97.16
Normal
98.08
97.53
95.20
96.62
97.71
Pneumonia
94.51
98.63
97.18
95.82
97.25
ResNet50
COVID-19
98.63
99.59
99.17
98.90
99.27
97.89
Normal
99.18
97.94
96.01
97.57
98.35
Pneumonia
95.88
99.31
98.59
97.21
98.17
SqueezeNet
COVID-19
98.63
99.59
99.17
98.90
99.27
98.08
Normal
98.90
98.49
97.04
97.96
98.63
Pneumonia
96.70
99.04
98.05
97.37
98.26
Table 2
The performance metrics of the ELM classifier.
Model
Classes
Se. (%)
Sp. (%)
Pre. (%)
F-score (%)
Acc. (%)
Overall Acc. (%)
AlexNet
COVID-19
98.08
99.31
98.62
98.35
98.90
96.61
Normal
98.35
97.12
94.46
96.37
97.53
Pneumonia
93.41
98.49
96.87
95.10
96.79
VGG19
COVID-19
99.18
99.86
99.72
99.45
99.63
98.53
Normal
99.18
98.63
97.30
98.23
98.81
Pneumonia
97.25
99.31
98.61
97.93
98.63
GoogleNet
COVID-19
96.98
99.04
98.06
97.51
98.35
94.69
Normal
97.53
94.92
90.56
93.92
95.79
Pneumonia
89.56
98.08
95.88
92.61
95.24
ResNet50
COVID-19
97.53
98.35
96.73
97.13
98.08
94.87
Normal
97.53
96.02
92.45
94.92
96.52
Pneumonia
89.56
97.94
95.60
92.48
95.15
SqueezeNet
COVID-19
95.88
98.08
96.14
96.01
97.34
92.86
Normal
95.33
94.51
89.66
92.41
94.78
Pneumonia
87.36
96.70
92.98
90.08
93.59
Table 1 and Table 2 show the performance of the MV-COVIDet method and the ELM classifier in terms of sensitivity, specificity, precision, F-score, accuracy, and overall accuracy. As can be seen in Table 1, the MV-COVIDet method fed with deep features yielded promising results. VGG19 features achieved the highest overall accuracy score of 98.99%, while AlexNet features achieved the second-best score of 98.81% and SqueezeNet features the third-best score of 98.08%. The accuracy for classifying COVID-19 cases using VGG19, AlexNet, and SqueezeNet features is 99.63%, 99.63%, and 99.27%, respectively. Table 2 shows that the overall accuracy scores achieved by VGG19, AlexNet, and SqueezeNet features are 98.53%, 96.61%, and 92.86%, respectively. It is observed that the MV-COVIDet method provides a clear improvement in detecting COVID-19, normal, and pneumonia cases over applying the ELM classifier directly to the extracted deep features. The higher performance of the MV-COVIDet method is related to the nature of the learned feature representation.

The confusion matrix obtained using the MV-COVIDet method for the VGG19 features is shown in Fig. 2.
Fig. 2
Confusion matrix of MV-COVIDet method for VGG19 features.
Fig. 2 shows the confusion between COVID-19, normal, and pneumonia cases. It can be seen that 361 COVID-19, 362 normal, and 358 pneumonia samples are correctly classified, while 3 COVID-19, 2 normal, and 6 pneumonia samples are misclassified. Hence, the MV-COVIDet method can be used effectively for detecting and classifying COVID-19, normal, and pneumonia cases.

The overall accuracy scores obtained for the MV-COVIDet method and the ELM classifier on the deep features are also visualized as a bar chart in Fig. 3.
Fig. 3
Overall accuracy scores of MV-COVIDet and ELM on the deep features.
As observed in Fig. 3, the MV-COVIDet method produced its three best results on the deep features extracted from the VGG19, AlexNet, and SqueezeNet models. Therefore, in the second set of experiments, the focus is on concatenating these types of deep features with each other to construct multi-view data. The analysis results of the MV-COVIDet method on multi-view data are shown in Table 3.
Table 3
The performance metrics of the MV-COVIDet method on multi-view data.
Model
Classes
Se. (%)
Sp. (%)
Pre. (%)
F-score (%)
Acc. (%)
Overall Acc. (%)
AlexNet & VGG19
COVID-19
98.90
99.86
99.72
99.31
99.54
99.08
Normal
100
99.18
98.38
99.18
99.45
Pneumonia
98.35
99.59
99.17
98.76
99.18
AlexNet & SqueezeNet
COVID-19
99.73
99.86
99.73
99.73
99.82
99.45
Normal
99.73
99.59
99.18
99.45
99.63
Pneumonia
98.90
99.73
99.45
99.17
99.45
SqueezeNet & VGG19
COVID-19
100
99.86
99.73
99.86
99.91
99.54
Normal
99.73
99.59
99.18
99.45
99.63
Pneumonia
98.90
99.86
99.72
99.31
99.54
AlexNet & SqueezeNet & VGG19
COVID-19
100
100
100
100
100
99.73
Normal
99.73
99.73
99.45
99.59
99.73
Pneumonia
99.45
99.86
99.72
99.59
99.73
The metric values reported in Table 3 indicate that the MV-COVIDet method using multiple types of features generally performs better than using a single feature type. This confirms the rationale for combining multiple types of features. It is seen that increasing the number of feature types improves the classification performance. The MV-COVIDet method on the concatenation of AlexNet, SqueezeNet, and VGG19 features achieved satisfactory results with an overall accuracy of 99.73%.

In the third set of experiments, we examined the effect of constructing multi-view data from the top four feature types on the performance of the MV-COVIDet method. The analysis results of the MV-COVIDet method on four-view data are shown in Table 4.
Table 4
The performance metrics of the MV-COVIDet method on four-view data.
Model
Classes
Se. (%)
Sp. (%)
Pre. (%)
F-score (%)
Acc. (%)
Overall Acc. (%)
AlexNet & ResNet50 & SqueezeNet & VGG19
COVID-19
100
100
100
100
100
99.82
Normal
100
99.73
99.45
99.73
99.82
Pneumonia
99.45
100
100
99.72
99.82
From Table 4, it can be observed that increasing the number of feature types from three to four results in a slight performance improvement for the MV-COVIDet method, indicating that the method is nearly saturated. Therefore, we do not use more feature types to construct multi-view data. Fig. 4 shows the confusion matrix obtained using the MV-COVIDet method for the concatenated features of AlexNet, ResNet50, SqueezeNet, and VGG19. From Fig. 4, it can be observed that only 2 pneumonia samples are misclassified and all the other samples are correctly classified, which signifies the robustness of the method.
Fig. 4
Confusion matrix of MV-COVIDet method for the concatenated features of AlexNet, ResNet50, SqueezeNet, and VGG19.
The metric values obtained by applying the MV-COVIDet method and the ELM classifier to the concatenation of AlexNet, ResNet50, SqueezeNet, and VGG19 features are shown as bar charts in Fig. 5. The results of the ELM classifier are included for comparison.
Fig. 5
Metric values obtained using the (a) MV-COVIDet and (b) ELM on the concatenation of AlexNet, ResNet50, SqueezeNet, and VGG19 features.
Fig. 5 confirms that the MV-COVIDet method outperformed the ELM classifier on the concatenation of feature types, because MV-COVIDet learns a feature representation that preserves the correlative and the complementary information in a low-dimensional discriminative feature space. The preservation of the correlative and complementary information in multiple feature types is achieved with the help of the adaptive weights.

In the fourth set of experiments, the performance metrics of the MV-COVIDet method with and without adaptive weights are compared to evaluate the effectiveness of the adaptive weights. In this scenario, we used the concatenation of AlexNet, ResNet50, SqueezeNet, and VGG19 features. The experimental results are shown in Fig. 6. It can be seen that the MV-COVIDet method with the adaptive weights performs better than the one without, which demonstrates their effectiveness.
Fig. 6
Performance comparison of the MV-COVIDet method with and without adaptive weights.
Comparison with previous studies
Finally, the performance of MV-COVIDet is compared with previous studies performed on X-ray images. Note that a fully fair comparison of results is not possible due to differences in datasets, methods, and validation techniques. The comparison results are shown in Table 5.
Table 5
Comparison of MV-COVIDet method with previous studies.
Studies
Number of cases
Overall Acc. (%)
Ismael & Sengür [11]
180 COVID-19, 200 Normal
94.70
Turkoglu [13]
219 COVID-19, 1583 Normal, 4290 Pneumonia
99.18
Khan et al. [15]
284 COVID-19, 310 Normal, 657 Pneumonia
95.00
Toğaçar et al. [16]
295 COVID-19, 65 Normal, 98 Pneumonia
99.27
Ashour et al. [18]
200 COVID-19, 200 Normal
98.60
Apostolopoulos & Mpesiana [19]
224 COVID-19, 504 Normal, 714 Pneumonia
94.72
Canayaz [20]
364 COVID-19, 364 Normal, 364 Pneumonia
99.38
MV-COVIDet
364 COVID-19, 364 Normal, 364 Pneumonia
99.82
It can be observed from Table 5 that the MV-COVIDet method outperforms the other methods in terms of overall accuracy score, validating the importance of effectively exploiting multiple types of deep features. Since the dataset used in this study is the same as that of [20], a one-to-one comparison is performed only with that study. In [20], an image contrast enhancement algorithm (ICEA) was applied to the X-ray images to improve their quality. As can be observed, [20] yielded an overall accuracy score of 99.38%, while our method reached 99.82% without using any image preprocessing technique.
Conclusion
The outbreak of COVID-19 has placed major pressure on health centers, preventing them from providing effective treatment without the risk of infection. In this study, a method called MV-COVIDet was presented for detecting COVID-19 based on X-ray images, which can help medical clinicians make appropriate diagnostic decisions. The method leverages multiple types of deep features extracted from X-ray images to learn an efficient feature representation, which is fed into the ELM classifier to classify the data into COVID-19, normal, and pneumonia cases. The MV-COVIDet method was evaluated with different types of deep features. Simulation results showed that our method achieved the highest overall accuracy score of 99.82% on the concatenated features of AlexNet, ResNet50, SqueezeNet, and VGG19. The accuracy scores for the COVID-19, normal, and pneumonia classes were 100%, 99.82%, and 99.82%, respectively. The comparison of our method with previous ones indicated its superiority in terms of overall accuracy score. A limitation of the MV-COVIDet method is that it cannot properly handle a mixed set of X-ray and CT images. In the future, we plan to expand the image collection to include both X-ray and CT images of lung diseases and assess the performance of the presented method on it. In addition, we intend to extend this method to a scenario where only a limited number of samples are labeled.
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.