| Literature DB >> 33824681 |
Abstract
Automatic diagnosis of coronavirus disease (COVID-19) is studied in this research. Deep learning methods, especially convolutional neural networks (CNNs), have shown great success in COVID-19 diagnosis in recent works, but they are effective only when the network is deep enough. However, training a deep network requires a sufficiently large training set, which is not available in practice. On the other hand, a shallow CNN may not provide superior results because, lacking enough convolutional layers, it cannot perform rich feature extraction. To deal with this difficulty, contextual features reduced by convolutional filters (CFRCF) is proposed in this work. CFRCF extracts shape and textural features as contextual feature maps from chest X-ray radiographs and abdominal computed tomography (CT) images. Morphological operators, Gabor filter banks, and attribute filters are used for contextual feature extraction. Two convolutional filters are then applied to the contextual feature cube to extract nonlinear sub-features and hidden relationships among the contextual features. Finally, a fully connected layer produces a reduced feature vector, which is fed to a classifier; support vector machine and random forest are used as classifiers. The experimental results show the superior performance of the proposed method in terms of recognition accuracy and running time using limited training samples. The proposed method achieves more than 76% and 94% overall classification accuracy on the CT scan and X-ray image datasets, respectively.
Keywords: CT scan; Gabor; X-ray; attribute; automatic diagnosis; convolutional neural network; coronavirus (COVID-19); morphology
Year: 2021 PMID: 33824681 PMCID: PMC8017558 DOI: 10.1016/j.bspc.2021.102602
Source DB: PubMed Journal: Biomed Signal Process Control ISSN: 1746-8094 Impact factor: 3.880
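As a rough illustration of the contextual-feature stage described in the abstract, the sketch below builds a Gabor feature cube from a single image. The kernel parameters, orientation count, and pure-NumPy convolution are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def gabor_kernel(size, theta, lam=8.0, sigma=3.0, gamma=0.5):
    """Real part of a Gabor kernel at orientation theta (parameters are
    illustrative, not taken from the paper)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def conv2d_same(img, kernel):
    """Zero-padded 2-D cross-correlation; equivalent to convolution here
    because the real Gabor kernel is symmetric under 180-degree rotation."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros(img.shape, dtype=float)
    for i in range(kh):
        for j in range(kw):
            out += kernel[i, j] * padded[i:i + img.shape[0], j:j + img.shape[1]]
    return out

def gabor_feature_cube(img, n_orientations=6, size=15):
    """Stack per-orientation Gabor responses into an H x W x B contextual feature cube."""
    thetas = np.linspace(0, np.pi, n_orientations, endpoint=False)
    return np.stack([conv2d_same(img, gabor_kernel(size, t)) for t in thetas], axis=-1)

rng = np.random.default_rng(0)
img = rng.random((64, 64))   # stand-in for one X-ray / CT slice
cube = gabor_feature_cube(img)
print(cube.shape)            # (64, 64, 6)
```

In CFRCF this cube would then pass through two convolutional filters and a fully connected layer before reaching the SVM or random forest classifier; that reduction stage is omitted from this sketch.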
Deep learning networks for COVID-19 diagnosis using radiology images.
| Ref (year) | Image type | Model | Description |
|---|---|---|---|
| [ | X-ray | Hybrid deep learning | Visual geometry group based neural network (VGG) is combined with spatial transformer network (STN) and CNN |
| [ | CT | Multi-scale CNN (MSCNN) | MSCNN is used to learn scaled-invariant patterns based on multi-scale spatial pyramid decomposition. |
| [ | X-ray | Integrated stacked deep convolutional network | Pre-trained models such as ResNet101 and MobileNet are used to compensate for limited training data |
| [ | X-ray | Deep features and SVM | VGG19, AlexNet, ResNet and GoogleNet are used for deep feature extraction. Then, metaheuristic algorithms are used for feature selection. Finally, SVM is used for classification. |
| [ | CT | Dual-branch combination network (DCN) | DCN simultaneously achieves individual-level lesion segmentation and classification. |
| [ | X-ray | Deep learning based CNN called as nCOVnet | A CNN model with 24 layers is introduced where the VGG16 is used as the base model and five custom layers are used as the head model. |
| [ | X-ray | SqueezeNet | Deep SqueezeNet is used with Bayes optimization and a detailed augmentation. |
| [ | X-ray | Deep transfer learning | Transfer learning is used to train ResNet18, ResNet50, DenseNet-121 and SqueezeNet. |
| [ | X-ray | DarkCovidNet | DarkCovidNet inspired by DarkNet contains 17 layers with different filtering on each layer. |
| [ | X-ray | CVDNet, a deep CNN | CVDNet is based on residual neural network constructed by two parallel levels to capture local and global features. |
| [ | X-ray | A Siamese neural network | Contrastive learning is integrated with a pre-trained ConvNet encoder to achieve unbiased feature representation. A Siamese network is learned for final classification. |
| [ | X-ray | ResNet based deep learning | The framework is composed of two deep learning models. The first model is for discrimination of COVID-19 from other infections. The second model is for localization that assigns the recognized X-ray into left lung, right lung or bipulmonary. |
| [ | X-ray | Convolutional CapsNet | Convolutional CapsNet uses the capsule networks for binary and multi-class classification. |
| [ | X-ray | CNN with gravitational search optimization (GSA) | The DenseNet121 is used as the considered CNN architecture where hyperparameters are set by GSA. |
| [ | CT | CNN based transfer learning-Bidirectional long short-term memory (BiLSTM) | A hybrid structure containing AlexNet architecture and transfer learning is proposed. The BiLSTM is also used to take into account the temporal properties. |
| [ | X-ray | Combined CNN-LSTM | CNN is used for feature extraction and LSTM is used for detection. |
| [ | X-ray | Concatenation of Xception and ResNet50V2 | Multiple features are extracted by two robust networks: Xception and ResNet50V2 |
| [ | X-ray | COVIDX-Net | COVIDX-Net assesses seven different deep networks including VGG19 and the second version of Google MobileNet. |
| [ | X-ray | DWT + CNN | The discrete wavelet transform (DWT) and CNN are used for feature extraction, minimum redundancy and maximum relevance (mRMR) is used for feature selection and random forest-based bagging approach is used for classification. |
| [ | X-ray, CT | CMT-CNN | Contrastive multi-task CNN (CMT-CNN) encourages local aggregation with a contrastive loss. |
| [ | X-ray | CNN based models: a comprehensive study | Eight pre-trained CNN models such as VGG16, AlexNet and GoogleNet are assessed. |
| [ | CT | CCSHNet | CCSHNet uses a novel transfer learning and determines the best two pre-trained models. It uses a discriminant correlation analysis based fusion method. |
Fig. 1 General block diagram of the proposed CFRCF method.
Fig. 2 Suggested structure of CNN in CFRCF.
Fig. 3 Extracted contextual feature maps for CT dataset when MP is used as input of the convolutional filters.
Fig. 4 Extracted contextual feature maps for CT dataset when Gabor feature cube is used as input of the convolutional filters.
Fig. 5 Extracted contextual feature maps for CT dataset when EMAP is used as input of the convolutional filters.
Fig. 6 Extracted contextual feature maps for X-ray dataset when MP is used as input of the convolutional filters.
Fig. 7 Extracted contextual feature maps for X-ray dataset when Gabor feature cube is used as input of the convolutional filters.
Fig. 8 Extracted contextual feature maps for X-ray dataset when EMAP is used as input of the convolutional filters.
The number of samples in each class of datasets.
| Dataset | Class | No. of samples | No. of augmented samples |
|---|---|---|---|
| CT | COVID | 349 | --- |
| CT | Nan-COVID | 397 | --- |
| X-ray | COVID | 187 | 374 |
| X-ray | Nan-COVID | 73 | 146 |
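The augmentation column doubles each X-ray class (187 → 374, 73 → 146). A minimal sketch of one way to obtain an exact 2× augmentation (horizontal flipping; the record does not name the operations actually used, so this is a hypothetical stand-in):

```python
import numpy as np

def augment_flip(images):
    """Double a class by appending horizontally flipped copies
    (hypothetical 2x augmentation, not the paper's confirmed recipe)."""
    return list(images) + [np.fliplr(im) for im in images]

rng = np.random.default_rng(1)
covid = [rng.random((32, 32)) for _ in range(187)]      # 187 COVID X-rays
nan_covid = [rng.random((32, 32)) for _ in range(73)]   # 73 Nan-COVID X-rays
print(len(augment_flip(covid)), len(augment_flip(nan_covid)))  # 374 146
```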
Comparison results for CT image dataset.
Comparison results for X-ray image dataset.
Comparison between the proposed model and some recent works for binary classification of COVID-19 (COVID-19 vs. Nan-COVID-19) using X-ray images.
| Reference | Method | Number of X-ray images (C: COVID, N: Nan-COVID) | Accuracy |
|---|---|---|---|
| [ | nCOVnet | C: 42 and N: 42 | 88.10 |
| [ | COVIDX-Net | C: 25 and N: 25 | InceptionV3: 50, MobileNetV2: 60, ResNetV2: 70, InceptionResNetV2: 80, Xception: 80, DenseNet201: 90, VGG19: 90 |
| [ | DWT + CNN | C: 237 and N: 1206 | Xception: 0.9046, Inception: 0.9084, DenseNet201: 0.9241, MobileNet v2: 0.9413, VGG19: 0.9627, ResNet50: 0.9945 |
| [ | CMT-CNN | C: 231 and N: 4007 + 1583 | VGG-19: 93.42, ResNet-50: 95.66, EfficientNet: 97.23 |
| Proposed | CFRCF | C: 187 and N: 73 (Augmented: C: 374 and N: 146) | 94.23 |
Comparison among original features and CFRCF features.
| Dataset | Features | Number of features | Mean of variance | Mean of correlation |
|---|---|---|---|---|
| CT | MP | 150 | 0.10 | 0.51 |
| CT | MP (CFRCF) | 100 | 29.75 | 0.01 |
| CT | Gabor | 150 | 3.03 | 0.58 |
| CT | Gabor (CFRCF) | 100 | 51.58 | 0.04 |
| CT | EMAP | 150 | 0.14 | 0.79 |
| CT | EMAP (CFRCF) | 100 | 27.06 | 0.01 |
| X-ray | MP | 150 | 0.04 | 0.46 |
| X-ray | MP (CFRCF) | 100 | 49.59 | 0.00 |
| X-ray | Gabor | 150 | 0.31 | 0.39 |
| X-ray | Gabor (CFRCF) | 100 | 48.02 | 0.01 |
| X-ray | EMAP | 150 | 0.08 | 0.15 |
| X-ray | EMAP (CFRCF) | 100 | 28.90 | 0.00 |
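The table's two summary metrics can be reproduced for any feature matrix. A minimal sketch, assuming "mean of correlation" averages the absolute off-diagonal pairwise correlations between features (the record does not specify whether signed or absolute values were averaged):

```python
import numpy as np

def feature_stats(X):
    """Summarise a (samples x features) matrix with the table's two metrics:
    mean per-feature variance and mean absolute pairwise feature correlation."""
    var_mean = X.var(axis=0).mean()
    corr = np.corrcoef(X, rowvar=False)                  # features x features
    off_diag = corr[~np.eye(corr.shape[0], dtype=bool)]  # drop the diagonal 1s
    return var_mean, np.abs(off_diag).mean()

rng = np.random.default_rng(0)
X = rng.random((200, 150))       # stand-in for 150 extracted features
v, c = feature_stats(X)
print(round(v, 3), round(c, 3))
```

Low mean correlation alongside high mean variance is the pattern the table reports for the CFRCF-reduced features, i.e. the reduced features are more decorrelated and more discriminative than the raw contextual features.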
Fig. 9 Correlation among the original MP features (100, 1000 and 10,000 features) and the correlation among 100 MP features extracted by CFRCF (the bottom-right subplot) in the CT dataset.
Fig. 10 Correlation among the original MP features (100, 1000 and 10,000 features) and the correlation among 100 MP features extracted by CFRCF (the bottom-right subplot) in the X-ray dataset.