
Deep multi-view feature learning for detecting COVID-19 based on chest X-ray images.

Hamidreza Hosseinzadeh

Abstract

AIM: COVID-19 is a pandemic infectious disease that has affected the life and health of many communities since December 2019. Due to the rapid worldwide spread of this highly contagious disease, early detection with high accuracy is important for breaking the chain of transmission. X-ray images of COVID-19 patients reveal specific abnormalities associated with this disease.
METHODS: In this study, a multi-view feature learning method for detecting COVID-19 from chest X-ray images is presented. The method provides a framework for exploiting multiple types of deep features that preserves both the correlative and the complementary information and achieves accurate detection at the classification phase. Deep features are extracted using the pre-trained deep CNN models AlexNet, GoogleNet, ResNet50, SqueezeNet, and VGG19. The learned feature representations of the X-ray images are then classified using an extreme learning machine (ELM).
RESULTS: The experiments show that our method achieves accuracy scores of 100%, 99.82%, and 99.82% for the COVID-19, normal, and pneumonia classes, respectively. The sensitivities of the three classes are 100%, 100%, and 99.45%, respectively. The specificities of the three classes are 100%, 99.73%, and 100%, respectively. The precision values of the three classes are 100%, 99.45%, and 100%, respectively. The F-scores of the three classes are 100%, 99.73%, and 99.72%, respectively. The overall accuracy score of our method is 99.82%.
CONCLUSIONS: The results demonstrate the effectiveness of our method in detecting COVID-19 cases and can therefore assist experts in early diagnosis based on X-ray images.
© 2022 Published by Elsevier Ltd.

Keywords:  COVID-19; Deep learning; Extreme learning machine; Multi-view learning; X-ray

Year:  2022        PMID: 35222680      PMCID: PMC8864146          DOI: 10.1016/j.bspc.2022.103595

Source DB:  PubMed          Journal:  Biomed Signal Process Control        ISSN: 1746-8094            Impact factor:   3.880


Introduction

COVID-19 is an infectious respiratory disease that has rapidly spread all across the world, causing the death of hundreds of thousands of people and infecting millions [1], [2], [3], [4]. COVID-19 has been declared a pandemic by the World Health Organization (WHO) [5], [6]. One of the important steps in preventing the transmission of COVID-19 to the healthy population is to effectively screen infected patients so that they can be isolated and treated. Currently, reverse transcription-polymerase chain reaction (RT-PCR) performed on specimens taken from the respiratory tract is the main screening method for COVID-19 [7]. However, it is a costly and time-consuming detection method, detection tests that give results within minutes are insensitive and occasionally yield false negatives, and RT-PCR test kits are limited in number. Therefore, the development of a fast, low-cost, and reliable method for the automatic detection of COVID-19 is essential. As an alternative to RT-PCR testing, artificial intelligence-based automated detection of COVID-19 from radiological images of patients can be used [8], [9], [10]. Radiographic imaging of the chest, such as computed tomography (CT) and X-ray, is helpful for early COVID-19 detection. In this study, X-ray is preferred over CT due to its cost-effectiveness, low ionizing radiation exposure to patients, and the widespread availability of X-ray machines in almost every hospital. Ismael and Sengür [11] presented three deep convolutional neural network (CNN) approaches, including extraction of deep features, fine-tuned pre-trained CNN models, and an end-to-end trained CNN model, to detect COVID-19 cases; in the first approach, an SVM classifier with various kernel functions was used to classify the deep features. Uçar et al. [12] presented a deep learning method in which deep features extracted from X-ray images in different color spaces were applied to a bidirectional LSTM network to classify the data as COVID-19 or pneumonia.
In [13], features of the convolutional and fully connected layers of the CNN-based AlexNet model were extracted and combined; the important features were selected by the Relief algorithm and then fed into an SVM classifier to detect COVID-19 cases. In [14], a CNN model based on class decomposition and transfer learning was developed to detect COVID-19 cases. Khan et al. [15] presented an Xception-based deep learning architecture named CoroNet for the detection of COVID-19 cases. In [16], deep features were extracted from the MobileNetV2 and SqueezeNet models, the obtained feature sets were processed with social mimic optimization (SMO) for feature selection and combination, and the combined feature set was fed into an SVM classifier to detect COVID-19 cases. Elkorany and Elsharkawy [17] presented a COVID-19 detection method in which the ShuffleNet and SqueezeNet models were employed as feature extractors and an SVM was used as the classifier. Ashour et al. [18] presented an ensemble-based bag-of-features (BoF) model to detect COVID-19 and normal cases; they utilized the grid method for keypoint determination and the SURF descriptor for extracting the corresponding feature vectors. In [19], CNN-based transfer learning approaches were used to detect COVID-19 cases, and an overall accuracy score of 94.72% was achieved using MobileNetV2. Canayaz [20] used two meta-heuristic algorithms, binary particle swarm optimization (BPSO) and binary gray wolf optimization (BGWO), to select the efficient features among those extracted from deep CNN models; the selected features were then applied to an SVM to classify the data as COVID-19, normal, or pneumonia cases, and an image contrast enhancement algorithm was used to preprocess the image set. Wang et al. [21] presented an FGCNet model that fuses the features extracted from a graph convolutional network (GCN) and a CNN for detecting COVID-19 from CT images.
In [22], deep features were extracted from CT images with different pre-trained models, and the best two feature sets were fused using discriminant correlation analysis. Hasoon et al. [23] presented a method in which LBP, HOG, and Haralick features were fed into KNN and SVM classifiers to detect COVID-19 cases. Al-Waisy et al. [24] presented a COVID-DeepNet model for detecting COVID-19 cases, which fuses the predictions of two deep learning models to make the final decision. Another fusion of deep learning models, COVID-CheXNet, was presented in [25]. In [26], a comprehensive investigation of detecting COVID-19 cases using machine learning and convolutional deep learning models was presented. In [27], a feature-fusion based approach was presented to sort COVID-19 related medical waste. Most of the previous studies discussed above use a single pre-trained CNN model. However, the features extracted from different pre-trained CNN models often describe information about the same image from different views. Therefore, we propose a multi-view feature learning framework that exploits the useful information from different views so that a more comprehensive feature representation can be learned for the diagnosis of COVID-19. The main contributions of this study are as follows. A deep multi-view feature learning method for detecting COVID-19 from X-ray images is presented. Deep features are extracted using the pre-trained deep CNN models AlexNet, GoogleNet, ResNet50, SqueezeNet, and VGG19. The method transforms the multi-view feature space into a feature space in which both the complementary and the correlative information of the different views are preserved. Accurate classification of the resulting feature representations into COVID-19, normal, and pneumonia cases is performed using an ELM classifier. The organization of the paper is as follows. Section 2 describes the dataset and introduces the proposed method used for COVID-19 detection.
Section 3 describes the experimental analysis of the study and the discussion. Finally, Section 4 provides the conclusion and comments on further extensions of this method.

Materials and methods

Dataset

The dataset used here consists of COVID-19, normal, and pneumonia chest X-ray images [20]. It is a collection of 1092 images in three classes, created from publicly available chest X-ray datasets [28], [29], [30], with 364 images per class. Fig. 1 shows sample X-ray images of the dataset with their corresponding marked lesions.
Fig. 1

Samples of (a) COVID-19, (b) normal, and (c) pneumonia chest X-ray images with marked lesions.


Deep feature extraction

Deep learning using CNNs has achieved great success in diagnosing disease through medical image analysis [31], [32]. CNN-based diagnosis of COVID-19 has become a popular research direction because CNNs extract powerful features. In medical image classification for rare or emerging diseases, enough labeled images are not available to train a CNN model from scratch. In such cases, ImageNet pre-trained CNN models can be used [33], [34], [35], [36], [37]. In this study, pre-trained CNN models are used as feature extractors to construct the image feature space. The features extracted from different types of pre-trained CNN models generally characterize different descriptions of the same image. Therefore, we feed the images into different pre-trained CNN models to construct a multi-view feature space for each image, considering the features extracted from each model as one view. The features of all views are then concatenated and fed into the presented multi-view feature learning method to achieve more comprehensive features.
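As a concrete illustration of this view construction, the sketch below uses PyTorch with two tiny stand-in backbones. In the study the views come from ImageNet-pretrained AlexNet, GoogleNet, ResNet50, SqueezeNet, and VGG19 (e.g. from a deep learning framework's model zoo); the stand-in networks, sizes, and names here are illustrative only, chosen to keep the sketch self-contained:

```python
import torch
import torch.nn as nn

# Stand-ins for the pre-trained backbones; in practice each view would be
# an ImageNet-pretrained model (AlexNet, GoogleNet, ResNet50, ...).
def tiny_backbone(out_dim: int) -> nn.Module:
    return nn.Sequential(
        nn.AdaptiveAvgPool2d(4),          # (N, 3, 224, 224) -> (N, 3, 4, 4)
        nn.Flatten(),                     # -> (N, 48)
        nn.Linear(3 * 4 * 4, out_dim),    # -> (N, d_v)
    )

@torch.no_grad()
def extract_multiview(images: torch.Tensor, backbones) -> torch.Tensor:
    """One feature vector per image per view, concatenated column-wise
    into a D = sum_v d_v dimensional multi-view representation."""
    views = []
    for model in backbones:
        model.eval()                      # fixed extractor, no fine-tuning
        views.append(model(images).flatten(1))
    return torch.cat(views, dim=1)

backbones = [tiny_backbone(d) for d in (64, 32)]   # stand-ins for two views
X = extract_multiview(torch.randn(8, 3, 224, 224), backbones)
print(X.shape)   # torch.Size([8, 96])
```

Each view contributes its own feature dimension, and the concatenation preserves which block of columns came from which model, which is what the multi-view learning stage later exploits.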

Multi-view feature learning framework (MV-COVIDet)

Given a training data matrix X = [x_1, …, x_n] ∈ R^(d×n) from c classes, where d is the feature dimension and n is the number of samples, let Y = [y_1, …, y_n]^T ∈ R^(n×c) be the corresponding label matrix of X, where each y_i is the label vector in which only the element associated with the assigned class is 1 and all the others are −1. The optimization problem of least square regression (LSR) can be written as follows:

min_{W,b} ‖X^T W + 1 b^T − Y‖_F² + λ‖W‖_F²,    (1)

where W ∈ R^(d×c) is a transformation matrix, 1 ∈ R^n is an all-one vector, b ∈ R^c is an intercept vector, and λ > 0 is a trade-off parameter. Considering the label matrix Y as the regression target is too strict and inappropriate for classification, since it wrongly penalizes correctly classified samples whose predictions are far from Y [38], [39]. To solve this issue, the regression target is reformulated as T = Y + Y ⊙ M, where M ≥ 0 is a non-negative adjustment matrix and ⊙ is the Hadamard (element-wise) product operator. This strategy can enlarge the distance between the true and the false classes. The optimization problem (1) can be rewritten as follows:

min_{W,b,M≥0} ‖X^T W + 1 b^T − (Y + Y ⊙ M)‖_F² + λ‖W‖_F².    (2)

The optimization problem (2) can be extended to the multi-view scenario, where different views of the data are reflected by different types of features. Multi-view learning provides a mechanism for exploiting multiple types of features [40], [41], [42]. Let X^(v) ∈ R^(d_v×n) denote the data matrix from the v-th view, where d_v is the feature space dimension of the v-th view. The optimization problem for the multi-view learning scenario is expressed as follows:

min ‖∑_v α_v (X^(v))^T W^(v) + 1 b^T − (Y + Y ⊙ M)‖_F² + λ ∑_v ‖W^(v)‖_{2,1},  s.t. M ≥ 0,    (3)

where {α_v} is a set of positive weight parameters and ‖·‖_{2,1} denotes the ℓ2,1-norm, which induces feature-level sparsity. For simplicity, each weight parameter is merged into the corresponding transformation matrix as Ŵ^(v) = α_v W^(v). In addition, the X^(v) and Ŵ^(v) of all views are collected into X̂ = [X^(1); …; X^(V)] ∈ R^(D×n) and Ŵ = [Ŵ^(1); …; Ŵ^(V)] ∈ R^(D×c), respectively, where D = ∑_v d_v is the dimension of the features across all views. The optimization problem (3) can then be rewritten as follows:

min_{Ŵ,b,M≥0} ‖X̂^T Ŵ + 1 b^T − (Y + Y ⊙ M)‖_F² + λ‖Ŵ‖_{2,1}.    (4)

The optimization problem (4) is convex due to the convexity of both terms, which implies that an optimal solution for the parameters Ŵ, b, M, and α_v exists. Therefore, an effective procedure is developed to iteratively update Ŵ, b, M, and α_v to obtain the optimal solution [39].
The algorithm is detailed as follows.

Updating the parameters Ŵ and b during the t-th iteration: With the parameters M and α_v fixed, the optimization problem (4) can be rewritten as

min_{Ŵ,b} ‖X̂^T Ŵ + 1 b^T − T_{t−1}‖_F² + λ‖Ŵ‖_{2,1},    (5)

where T_{t−1} = Y + Y ⊙ M_{t−1}. The optimal Ŵ and b during the t-th iteration can be derived by setting the partial derivatives of the objective function to zero. The optimal solution with respect to Ŵ is

Ŵ_t = (X̂ H X̂^T + λ D_{t−1})^{−1} X̂ H T_{t−1},    (6)

and the optimal solution with respect to b is

b_t = (1/n) (T_{t−1}^T 1 − Ŵ_t^T X̂ 1),    (7)

where D_{t−1} is a diagonal matrix in which the i-th diagonal element 1/(2‖ŵ^i_{t−1}‖₂) indicates the reciprocal weight of the i-th feature in Ŵ_{t−1}, and H = I − (1/n) 1 1^T, in which I is an identity matrix and 1 1^T is an all-one matrix.

Updating the adjustment matrix M during the t-th iteration: When the parameters Ŵ and b are fixed, the optimization problem (4) can be rewritten as

min_{M≥0} ‖P_t − (Y + Y ⊙ M)‖_F².    (8)

The optimal solution with respect to M is

M_t = max(Y ⊙ P_t − E, 0),    (9)

where P_t = X̂^T Ŵ_t + 1 b_t^T denotes the predicted labels and E is an all-one matrix.

Updating the adaptive weight parameter α_v during the t-th iteration: With the parameters Ŵ, b, and M fixed, the optimization problem (4) reduces to a subproblem over the view weights α_v, v = 1, …, V, where v is the view index; its closed-form solution assigns a larger weight to a view with a smaller regression residual [39]. The obtained weight parameters are then used to calculate the new weighted transformation matrix in the next iteration. The algorithm is terminated when |J_t − J_{t−1}|/J_{t−1} < ε, where J_t is the objective value of (4) at the t-th iteration and ε is a small termination threshold. Once the weighted transformation matrix Ŵ and the intercept vector b have been learned, the new representation of the training dataset is computed as

Z = X̂^T Ŵ + 1 b^T,    (10)

which is used to train a classifier. In the testing phase, Ŵ and b are used to compute the new representation of the test dataset, and the trained classifier is then used to classify it. Since COVID-19 detection in this algorithm is based on multi-view deep feature learning, we name the method MV-COVIDet. In this study, the extreme learning machine (ELM) [43] is used as the classifier due to its fast learning procedure and remarkable generalization performance.
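The alternating updates above can be sketched in NumPy. This is a minimal illustration under the assumptions made in this section (ℓ2,1-style diagonal reweighting, labels coded as ±1, adaptive view weights absorbed into the transformation matrix); function names and default values are illustrative, not the author's code:

```python
import numpy as np

def mv_covidet_fit(X, Y, lam=0.1, eps=1e-4, max_iter=50):
    """Alternating updates for the retargeted least-squares problem.
    X: (D, n) column-stacked multi-view features; Y: (n, c) labels with
    +1 for the assigned class and -1 elsewhere."""
    D_dim, n = X.shape
    one = np.ones((n, 1))
    H = np.eye(n) - one @ one.T / n              # centering matrix
    Dw = np.eye(D_dim)                           # feature reweighting matrix
    T = Y.copy()                                 # retargeted label matrix
    prev = np.inf
    for _ in range(max_iter):
        # closed-form update of transformation matrix and intercept
        W = np.linalg.solve(X @ H @ X.T + lam * Dw, X @ H @ T)
        b = (T.T @ one - W.T @ X @ one) / n
        P = X.T @ W + one @ b.T                  # predicted labels
        # non-negative adjustment matrix enlarging class margins
        M = np.maximum(Y * P - 1.0, 0.0)
        T = Y + Y * M
        # reciprocal per-feature weights (l2,1-style reweighting)
        row = np.linalg.norm(W, axis=1)
        Dw = np.diag(1.0 / (2.0 * np.maximum(row, 1e-8)))
        obj = np.linalg.norm(X.T @ W + one @ b.T - T, "fro") ** 2 + lam * row.sum()
        if abs(prev - obj) / max(obj, 1e-12) < eps:
            break                                # relative-change stopping rule
        prev = obj
    return W, b

def mv_covidet_transform(X, W, b):
    """New representation of a (D, m) sample matrix, fed to a classifier."""
    return X.T @ W + np.ones((X.shape[1], 1)) @ b.T
```

In the paper the learned representation is then classified with an ELM, but any multiclass classifier could stand in at that point.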

Experiments

We first introduce the experimental setup and then perform experiments on the X-ray images data to assess the effectiveness of the MV-COVIDet method in detecting COVID-19, normal, and pneumonia cases. Finally, the results of our method are compared with previous studies.

Experimental setup

Five types of pre-trained models are considered to extract deep-learning-based features (i.e., V = 5): AlexNet, GoogleNet, ResNet50, SqueezeNet, and VGG19. Deep features are extracted from the fully connected layer "fc8" in the AlexNet and VGG19 models, from the fully connected layer "loss3-classifier" in the GoogleNet model, from the fully connected layer "fc1000" in the ResNet50 model, and from the pooling layer "pool10" in the SqueezeNet model. The input image size of all models is 224 × 224, except for the AlexNet and SqueezeNet models, whose input image size is 227 × 227. Each model is trained for 50 epochs with a mini-batch size of 64, using the stochastic gradient descent (SGD) optimization algorithm as the solver. In the experiments, the number of hidden neurons of the ELM is set empirically, and the sigmoid function is adopted as the activation function. The trade-off parameter λ is tuned using a grid search, and the termination parameter ε is set to a small value. The 5-fold cross-validation technique is used to evaluate the performance of the MV-COVIDet method. The performance metrics used for the analysis of the experimental results are sensitivity (Se), specificity (Sp), precision (Pre), F-score, accuracy (Acc), and overall accuracy (Overall Acc). The metrics are formulated as follows:

Se = TP/(TP + FN), Sp = TN/(TN + FP), Pre = TP/(TP + FP),
F-score = 2 · Pre · Se/(Pre + Se), Acc = (TP + TN)/(TP + TN + FP + FN),

where TP and TN denote the numbers of correctly detected positive and negative images, while FP and FN denote the numbers of misdetected ones; the overall accuracy is the fraction of all images classified correctly.

Experimental results

In the first set of experiments, the deep features extracted from the AlexNet, VGG19, GoogleNet, ResNet50, and SqueezeNet models are applied individually as input to the MV-COVIDet method to assess its single-view classification performance. In addition, the classification results of the ELM classifier applied to each extracted deep feature set are taken as the baseline. The experimental results are shown in Table 1 and Table 2.
Table 1

The performance metrics of the MV-COVIDet method.

Model      | Class     | Se. (%) | Sp. (%) | Pre. (%) | F-score (%) | Acc. (%) | Overall Acc. (%)
-----------|-----------|---------|---------|----------|-------------|----------|-----------------
AlexNet    | COVID-19  | 99.45   | 99.73   | 99.45    | 99.45       | 99.63    | 98.81
           | Normal    | 99.45   | 98.90   | 97.84    | 98.64       | 99.08    |
           | Pneumonia | 97.53   | 99.59   | 99.16    | 98.34       | 98.90    |
VGG19      | COVID-19  | 99.18   | 99.86   | 99.72    | 99.45       | 99.63    | 98.99
           | Normal    | 99.45   | 99.18   | 98.37    | 98.91       | 99.27    |
           | Pneumonia | 98.35   | 99.45   | 98.90    | 98.62       | 99.08    |
GoogleNet  | COVID-19  | 98.90   | 99.59   | 99.17    | 99.04       | 99.36    | 97.16
           | Normal    | 98.08   | 97.53   | 95.20    | 96.62       | 97.71    |
           | Pneumonia | 94.51   | 98.63   | 97.18    | 95.82       | 97.25    |
ResNet50   | COVID-19  | 98.63   | 99.59   | 99.17    | 98.90       | 99.27    | 97.89
           | Normal    | 99.18   | 97.94   | 96.01    | 97.57       | 98.35    |
           | Pneumonia | 95.88   | 99.31   | 98.59    | 97.21       | 98.17    |
SqueezeNet | COVID-19  | 98.63   | 99.59   | 99.17    | 98.90       | 99.27    | 98.08
           | Normal    | 98.90   | 98.49   | 97.04    | 97.96       | 98.63    |
           | Pneumonia | 96.70   | 99.04   | 98.05    | 97.37       | 98.26    |
Table 2

The performance metrics of the ELM classifier.

Model      | Class     | Se. (%) | Sp. (%) | Pre. (%) | F-score (%) | Acc. (%) | Overall Acc. (%)
-----------|-----------|---------|---------|----------|-------------|----------|-----------------
AlexNet    | COVID-19  | 98.08   | 99.31   | 98.62    | 98.35       | 98.90    | 96.61
           | Normal    | 98.35   | 97.12   | 94.46    | 96.37       | 97.53    |
           | Pneumonia | 93.41   | 98.49   | 96.87    | 95.10       | 96.79    |
VGG19      | COVID-19  | 99.18   | 99.86   | 99.72    | 99.45       | 99.63    | 98.53
           | Normal    | 99.18   | 98.63   | 97.30    | 98.23       | 98.81    |
           | Pneumonia | 97.25   | 99.31   | 98.61    | 97.93       | 98.63    |
GoogleNet  | COVID-19  | 96.98   | 99.04   | 98.06    | 97.51       | 98.35    | 94.69
           | Normal    | 97.53   | 94.92   | 90.56    | 93.92       | 95.79    |
           | Pneumonia | 89.56   | 98.08   | 95.88    | 92.61       | 95.24    |
ResNet50   | COVID-19  | 97.53   | 98.35   | 96.73    | 97.13       | 98.08    | 94.87
           | Normal    | 97.53   | 96.02   | 92.45    | 94.92       | 96.52    |
           | Pneumonia | 89.56   | 97.94   | 95.60    | 92.48       | 95.15    |
SqueezeNet | COVID-19  | 95.88   | 98.08   | 96.14    | 96.01       | 97.34    | 92.86
           | Normal    | 95.33   | 94.51   | 89.66    | 92.41       | 94.78    |
           | Pneumonia | 87.36   | 96.70   | 92.98    | 90.08       | 93.59    |
Table 1 and Table 2 show the performance of the MV-COVIDet method and the ELM classifier, respectively, in terms of sensitivity, specificity, precision, F-score, accuracy, and overall accuracy. As can be seen in Table 1, the MV-COVIDet method fed with deep features yielded promising results. The VGG19 features achieved the highest overall accuracy score of 98.99%, while the AlexNet features achieved the second best score of 98.81% and the SqueezeNet features the third best score of 98.08%. The accuracy scores for classifying COVID-19 cases using the VGG19, AlexNet, and SqueezeNet features are 99.63%, 99.63%, and 99.27%, respectively. Table 2 shows that the overall accuracy scores achieved by the VGG19, AlexNet, and SqueezeNet features are 98.53%, 96.61%, and 92.86%, respectively. It is observed that the MV-COVIDet method detects COVID-19, normal, and pneumonia cases more successfully than applying the ELM classifier directly to the extracted deep features, a gain that stems from the nature of the learned feature representation. The confusion matrix obtained using the MV-COVIDet method for the VGG19 features is shown in Fig. 2.
Fig. 2

Confusion matrix of MV-COVIDet method for VGG19 features.

Fig. 2 shows the confusion between COVID-19, normal, and pneumonia cases. While 361 COVID-19, 362 normal, and 358 pneumonia samples are correctly classified, 3 COVID-19, 2 normal, and 6 pneumonia samples are misclassified. Hence, the MV-COVIDet method can reliably detect and classify COVID-19, normal, and pneumonia cases. The overall accuracy scores obtained for the MV-COVIDet method and the ELM classifier on the deep features are also visualized as a bar chart in Fig. 3.
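These counts can be checked directly against the reported VGG19 metrics: per-class sensitivity is the number of correctly classified samples divided by the class size (364), and overall accuracy is the total number correct divided by 1092:

```python
# Diagonal counts read off Fig. 2 (364 images per class).
correct = {"COVID-19": 361, "Normal": 362, "Pneumonia": 358}
per_class = 364
sensitivity = {k: 100.0 * v / per_class for k, v in correct.items()}
overall = 100.0 * sum(correct.values()) / (3 * per_class)
print({k: round(v, 2) for k, v in sensitivity.items()})
# {'COVID-19': 99.18, 'Normal': 99.45, 'Pneumonia': 98.35} -- Table 1, VGG19 rows
print(round(overall, 2))   # 98.99 -- the VGG19 overall accuracy
```

The off-diagonal split of the errors is not stated in the text, so specificity and precision cannot be recomputed from these counts alone.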
Fig. 3

Overall accuracy scores of MV-COVIDet and ELM on the deep features.

As observed in Fig. 3, the MV-COVIDet method produced its three best results with the deep features extracted from the VGG19, AlexNet, and SqueezeNet models. Therefore, the second set of experiments focuses on concatenating these feature types with each other to construct multi-view data. The analysis results of the MV-COVIDet method on multi-view data are shown in Table 3.
Table 3

The performance metrics of the MV-COVIDet method on multi-view data.

Model                        | Class     | Se. (%) | Sp. (%) | Pre. (%) | F-score (%) | Acc. (%) | Overall Acc. (%)
-----------------------------|-----------|---------|---------|----------|-------------|----------|-----------------
AlexNet & VGG19              | COVID-19  | 98.90   | 99.86   | 99.72    | 99.31       | 99.54    | 99.08
                             | Normal    | 100     | 99.18   | 98.38    | 99.18       | 99.45    |
                             | Pneumonia | 98.35   | 99.59   | 99.17    | 98.76       | 99.18    |
AlexNet & SqueezeNet         | COVID-19  | 99.73   | 99.86   | 99.73    | 99.73       | 99.82    | 99.45
                             | Normal    | 99.73   | 99.59   | 99.18    | 99.45       | 99.63    |
                             | Pneumonia | 98.90   | 99.73   | 99.45    | 99.17       | 99.45    |
SqueezeNet & VGG19           | COVID-19  | 100     | 99.86   | 99.73    | 99.86       | 99.91    | 99.54
                             | Normal    | 99.73   | 99.59   | 99.18    | 99.45       | 99.63    |
                             | Pneumonia | 98.90   | 99.86   | 99.72    | 99.31       | 99.54    |
AlexNet & SqueezeNet & VGG19 | COVID-19  | 100     | 100     | 100      | 100         | 100      | 99.73
                             | Normal    | 99.73   | 99.73   | 99.45    | 99.59       | 99.73    |
                             | Pneumonia | 99.45   | 99.86   | 99.72    | 99.59       | 99.73    |
The metric values reported in Table 3 indicate that the MV-COVIDet method using multiple types of features generally performs better than using a single feature type, which confirms the logic of combining multiple types of features. Increasing the number of feature types improves the classification performance, and on the concatenation of the AlexNet, SqueezeNet, and VGG19 features the MV-COVIDet method achieves satisfactory results with an overall accuracy of 99.73%. In the third set of experiments, we examined the effect of constructing multi-view data from the top four feature types on the performance of the MV-COVIDet method. The analysis results of the MV-COVIDet method on four-view data are shown in Table 4.
Table 4

The performance metrics of the MV-COVIDet method on four-view data.

Model                                   | Class     | Se. (%) | Sp. (%) | Pre. (%) | F-score (%) | Acc. (%) | Overall Acc. (%)
----------------------------------------|-----------|---------|---------|----------|-------------|----------|-----------------
AlexNet & ResNet50 & SqueezeNet & VGG19 | COVID-19  | 100     | 100     | 100      | 100         | 100      | 99.82
                                        | Normal    | 100     | 99.73   | 99.45    | 99.73       | 99.82    |
                                        | Pneumonia | 99.45   | 100     | 100      | 99.72       | 99.82    |
From Table 4, it can be observed that increasing the number of feature types from 3 to 4 yields only a slight performance improvement for the MV-COVIDet method, indicating that the method is nearly saturated; therefore, we do not use more feature types to construct multi-view data. Fig. 4 shows the confusion matrix obtained using the MV-COVIDet method for the concatenated features of AlexNet, ResNet50, SqueezeNet, and VGG19. From Fig. 4, it can be observed that only 2 pneumonia samples are misclassified and all other samples are correctly classified, which signifies the robustness of the method.
Fig. 4

Confusion matrix of MV-COVIDet method for the concatenated features of AlexNet, ResNet50, SqueezeNet, and VGG19.

The metric values obtained by applying the MV-COVIDet method and the ELM classifier to the concatenation of the AlexNet, ResNet50, SqueezeNet, and VGG19 features are shown as bar charts in Fig. 5; the results of the ELM classifier are taken for comparison.
Fig. 5

Metric values obtained using the (a) MV-COVIDet and (b) ELM on the concatenation of AlexNet, ResNet50, SqueezeNet, and VGG19 features.

Fig. 5 confirms that the MV-COVIDet method outperformed the ELM classifier on the concatenation of feature types, because MV-COVIDet learns a feature representation that preserves the correlative and the complementary information in a low-dimensional discriminative feature space. The preservation of the correlative and complementary information in multiple feature types is achieved with the help of the adaptive weights. Therefore, in the fourth set of experiments, the performance of the MV-COVIDet method with and without adaptive weights is compared to evaluate their effectiveness, again using the concatenation of the AlexNet, ResNet50, SqueezeNet, and VGG19 features. The experimental results are shown in Fig. 6. The MV-COVIDet method with the adaptive weights performs better than the one without them, which demonstrates the effectiveness of the adaptive weights.
Fig. 6

Performance comparison of the MV-COVIDet method with and without adaptive weights.


Comparison with previous studies

Finally, the performance of MV-COVIDet is compared with previous studies performed on X-ray images. Note that a fully fair comparison of results is not possible due to differences in datasets, methods, and validation techniques. The comparison results are shown in Table 5.
Table 5

Comparison of MV-COVIDet method with previous studies.

Study                          | Number of cases                           | Overall Acc. (%)
-------------------------------|-------------------------------------------|-----------------
Ismael & Sengür [11]           | 180 COVID-19, 200 Normal                  | 94.70
Turkoglu [13]                  | 219 COVID-19, 1583 Normal, 4290 Pneumonia | 99.18
Khan et al. [15]               | 284 COVID-19, 310 Normal, 657 Pneumonia   | 95.00
Toğaçar et al. [16]            | 295 COVID-19, 65 Normal, 98 Pneumonia     | 99.27
Ashour et al. [18]             | 200 COVID-19, 200 Normal                  | 98.60
Apostolopoulos & Mpesiana [19] | 224 COVID-19, 504 Normal, 714 Pneumonia   | 94.72
Canayaz [20]                   | 364 COVID-19, 364 Normal, 364 Pneumonia   | 99.38
MV-COVIDet                     | 364 COVID-19, 364 Normal, 364 Pneumonia   | 99.82
It can be observed from Table 5 that the MV-COVIDet method outperforms the other methods in terms of overall accuracy score, validating the importance of effectively exploiting multiple types of deep features. Since the dataset used in this study is the same as that of [20], a one-to-one comparison is performed only with this study. In [20], an image contrast enhancement algorithm (ICEA) was applied to the X-ray images to improve their quality, yielding an overall accuracy score of 99.38%, whereas our method reaches an overall accuracy score of 99.82% without using any image preprocessing technique.

Conclusion

The outbreak of COVID-19 has put major pressure on health centers, making it difficult for them to provide effective treatment without the risk of infection. In this study, a method called MV-COVIDet was presented for detecting COVID-19 from X-ray images, which can help medical clinicians make appropriate diagnostic decisions. The method leverages multiple types of deep features extracted from X-ray images to learn an efficient feature representation and feeds it into the ELM classifier to classify the data into COVID-19, normal, and pneumonia cases. The MV-COVIDet method was evaluated with different types of deep features. Simulation results showed that our method achieved the highest overall accuracy score of 99.82% on the concatenated features of AlexNet, ResNet50, SqueezeNet, and VGG19. The accuracy scores of the COVID-19, normal, and pneumonia classes were 100%, 99.82%, and 99.82%, respectively. The comparison of our method with previous ones indicated its superiority in terms of overall accuracy score. A limitation of the MV-COVIDet method is that it cannot properly handle a mixed set of X-ray and CT images. In the future, we plan to expand the image collection to include X-ray and CT images of lung diseases and assess the performance of the presented method on it. In addition, we intend to extend the method to the scenario where only a limited amount of data is labeled.

CRediT authorship contribution statement

Hamidreza Hosseinzadeh: Conceptualization, Methodology, Software, Validation, Formal analysis, Writing – original draft, Writing – review & editing, Supervision.

Declaration of Competing Interest

The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
References (27 in total; partial list)

1. Litjens G, Kooi T, Ehteshami Bejnordi B, et al. A survey on deep learning in medical image analysis. Med Image Anal, 2017.
2. Alyasseri ZAA, Al-Betar MA, Abu Doush I, et al. Review on COVID-19 diagnosis models based on machine learning and deep learning approaches. Expert Syst, 2021.
3. Turkoglu M. COVIDetectioNet: COVID-19 diagnosis system based on X-ray images using features selected from pre-learned deep features ensemble. Appl Intell (Dordr), 2020.
4. Elkorany AS, Elsharkawy ZF. COVIDetection-Net: A tailored COVID-19 detection from chest radiography images using deep learning. Optik (Stuttg), 2021.
5. Ezhilan M, Suresh I, Nesakumar N. SARS-CoV, MERS-CoV and SARS-CoV-2: A diagnostic challenge. Measurement (Lond), 2020.
6. Ashour AS, Eissa MM, Wahba MA, et al. Ensemble-based bag of features for automated classification of normal and COVID-19 CXR images. Biomed Signal Process Control, 2021.
7. Abraham B, Nair MS. Computer-aided detection of COVID-19 from X-ray images using multi-CNN and Bayesnet classifier. Biocybern Biomed Eng, 2020.
8. Khan AI, Shah JL, Bhat MM. CoroNet: A deep neural network for detection and diagnosis of COVID-19 from chest x-ray images. Comput Methods Programs Biomed, 2020.
9. Apostolopoulos ID, Mpesiana TA. Covid-19: automatic detection from X-ray images utilizing transfer learning with convolutional neural networks. Phys Eng Sci Med, 2020.
