Literature DB >> 35219186

Attention-based 3D CNN with residual connections for efficient ECG-based COVID-19 detection.

Nebras Sobahi, Abdulkadir Sengur, Ru-San Tan, U Rajendra Acharya.

Abstract

BACKGROUND: The world has been suffering from the COVID-19 pandemic since 2019, and more than 5 million people have died. COVID-19 can cause pneumonia, which is diagnosable on chest X-ray and computed tomography (CT) scans. COVID-19 also causes clinical and subclinical cardiovascular injury that may be detected on electrocardiography (ECG), which is easily accessible.
METHOD: For ECG-based COVID-19 detection, we developed a novel attention-based 3D convolutional neural network (CNN) model with residual connections (RC). The deep learning (DL) approach was developed using 12-lead ECG printouts obtained from 250 normal subjects, 250 patients with COVID-19, and 250 with abnormal heartbeat. For binary classification, the COVID-19 and normal classes were considered; for multiclass classification, all three classes. The ECGs were preprocessed into standard ECG lead segments that were channeled into 12-dimensional volumes as input to the network model. Our model comprised 19 layers, including three 3D convolution, three batch normalization, three rectified linear unit, two dropout, two addition (for the residual connections), one attention, and one fully connected layer. The RC were used to improve gradient flow through the network, and the attention layer connected the second residual connection to the fully connected layer through the batch normalization layer.
RESULTS: A publicly available dataset was used in this work. We obtained average accuracies of 99.0% and 92.0% for binary and multiclass classification, respectively, using ten-fold cross-validation. Our proposed model is ready to be tested on a larger ECG database.
Copyright © 2022 Elsevier Ltd. All rights reserved.


Keywords:  3D CNN; Attention mechanism; COVID-19 detection; ECG; Residual connections

Year:  2022        PMID: 35219186      PMCID: PMC8858432          DOI: 10.1016/j.compbiomed.2022.105335

Source DB:  PubMed          Journal:  Comput Biol Med        ISSN: 0010-4825            Impact factor:   4.589


Introduction

Since 2019, the world has been afflicted by the COVID-19 pandemic [1]. More than 5 million people have died as a result of the virus [2], which can cause complications in the lungs and other organs. COVID-19 pneumonia may be diagnosed on chest X-rays and CT scans, while echocardiography and electrocardiography (ECG) may unveil clinical or subclinical cardiac involvement. Several studies using artificial intelligence (AI) for COVID-19 detection via chest X-rays and CT scans have been published in the literature. Toğaçar et al. [3] presented a deep learning (DL) approach for diagnosing COVID-19 in which chest X-ray images were preprocessed with fuzzy color, and features collected using MobileNet2 and SqueezeNet were fed to a support vector machine (SVM) for classification, attaining 99.72% accuracy. For COVID-19 detection on chest X-ray images, Ismael et al. [4] compared machine learning (various texture features extracted from chest X-ray images and classified with SVM classifiers) and DL (end-to-end learning and transfer learning) approaches, finding that deep features with an SVM classifier yielded 94.7% accuracy, while binarized statistical image features (BSIF) with an SVM classifier yielded 90.5%. Toraman et al. [5] used a convolutional CapsNet to detect COVID-19 on chest X-ray images; the method was fast and accurate, yielding 97.24% and 84.22% accuracy for binary and multiclass classification, respectively. To diagnose COVID-19, Ozturk et al. [6] used chest X-ray images as input to a DL network; their DarkCovidNet model, with a real-time classifier, attained 98.08% and 87.02% accuracy for binary and multiclass classification, respectively. Ismael et al.
[7] decomposed chest X-ray images with various multiresolution approaches (Shearlet, Wavelet, and Contourlet transforms) and utilized entropy and normalized energy features for detection of COVID-19; feeding Shearlet features to the well-known extreme learning machine (ELM) classifier attained 99.29% accuracy. Karakanis et al. [8] developed a model that constructed synthetic images to increase the number of chest X-ray images for efficient COVID-19 detection. The synthetic images were used to train the model, and two DL models, a lightweight model and ResNet8, were used for the binary and multiclass classification tasks, attaining 98.7% and 98.3% accuracy, respectively. In addition to the respiratory system, COVID-19 infection may affect the cardiovascular system [9]. Several cardiovascular abnormalities can be seen in COVID-19 patients [10,11] that may manifest as cardiac arrhythmia, conduction problems, ECG abnormalities, myocarditis, and pericarditis [12]. ST changes on ECG offer clues to COVID-19 diagnosis and may denote subclinical or clinical cardiovascular injury [13]. Indeed, there has been burgeoning interest in detecting COVID-19 on ECG with and without incorporating AI. Ozdemir et al. [14] applied hexaxial feature mapping to 12-lead ECG recordings and attained 96.2% accuracy for COVID-19 detection. Wang et al. [15] used univariate and multivariate regression models to correlate serum indexes, as well as ST-T changes and atrial fibrillation on ECG, with COVID-19-positive status, and found ECG ST-T changes to be significantly correlated with COVID-19 (p < 0.001). Pavri et al. [16] studied the ECGs of 75 admitted COVID-19 patients acquired before and during the index hospitalization; in 49.3%, they found paradoxical PR-interval prolongation or lack of physiological PR shortening with increased heart rate, which connoted impaired atrioventricular conduction.
This was associated with increased mortality and the need for mechanical ventilation. Angeli et al. [17] studied the ECGs of 50 COVID-19 patients and reported that 30% had ST-T abnormalities and 30% left ventricular hypertrophy; in the course of the hospital stay, anomalies such as atrial fibrillation, tachy-brady syndrome, and acute pericarditis were observed. Li et al. [18] analyzed the ECGs of 113 COVID-19 patients, 50 of whom had died; the presence of ventricular arrhythmia and sinus tachycardia were independent risk factors for death from COVID-19. The ECG is cheap, widely accessible, and holds promise as a screening tool for COVID-19 diagnosis and prognostication. Of the works surveyed above, many still relied on expert interpretation and did not exploit AI for automated diagnosis. In this work, we developed an AI ECG-based COVID-19 diagnostic model that employs a novel, shallow but efficient attention-based 3D CNN with RC. Segmented 12-lead ECG images were channeled into 12-dimensional volumes and input to the model, which comprised 19 layers and was trained in an end-to-end manner. We also separately explored various texture image analysis methods to extract features that were then fed to standard classifiers, and compared the results of these approaches with our novel model. In our experiments, we used 12-lead ECG printouts collected from normal and COVID-19 cases that were preprocessed to trim non-informative parts of the images and section each image into the standard I, II, III, V1, V2, V3, V4, V5, V6, aVL, aVR, and aVF segments. Each segment represented an image of one of the 12 standard ECG leads in the subject, and all 12 segments were analyzed as a unit. The performance metrics for diagnosis of COVID-19 were reported as accuracy, sensitivity, specificity, and F1 scores.
It is important to note that while we have retained the ability to discern the discriminative utility of individual ECG leads for COVID-19 diagnosis, the spatial relationships among the ECG lead positions were not preserved in the model. The rest of the paper is organized as follows. In Section 2, the proposed approach is introduced, which includes subsections on the dataset, preprocessing, and attention mechanism. Experimental works and results are detailed in Section 3; discussions in Section 4; and conclusions in Section 5.

Proposed approach

The work is divided into three stages: dataset collection; data preprocessing and data segmentation; and proposed model (Fig. 1 ). Details are given in the following subsections.
Fig. 1

Workflow of the proposed approach.


Dataset

In this study, a publicly available dataset was used [19] that comprised ECG images from 1937 unique subjects: 250 COVID-19, 859 normal, 77 myocardial infarction, 203 prior myocardial infarction, and 548 abnormal heartbeat. For our primary aim of distinguishing COVID-19 from normal, we studied the 250 COVID-19 and only 250 of the 859 normal ECG images to avoid dataset imbalance. For the secondary aim of multiclass classification, we used the above plus 250 ECG images from the abnormal heartbeat group, which was the only one of the remaining three groups with more than 250 patients. The ECG data had been acquired using an EDAN SE-3 series 3-channel ECG machine at a sampling frequency of 500 Hz, and processed using either a 0.67–25 Hz band-pass filter (BPF) or a 0.5–100 Hz BPF with a 50 Hz notch filter. Printouts of the 12 standard ECG leads (Fig. 2) were processed for analysis, and these images were fed as input to the developed model. For binary classification, ECG images from the COVID-19 and normal groups were used; for multiclass classification, ECG images from the abnormal heartbeat, COVID-19, and normal groups.
Fig. 2

Typical standard 12-lead ECG signals of a normal subject.


Preprocessing

The ECG printouts were filtered to eliminate noise and segmented to isolate the individual regions of the 12-lead ECG image, so that each segment could be depicted as a black curve contrasted on a white background. Background gridlines and binary noise on the ECG printouts were removed by applying thresholding operations based on the green (G) channel of the ECG images [14] and morphological operations [14], respectively. Because the spatial resolution of the ECG printouts differed among subjects, rectangular non-overlapping windows of different sizes, selected based on the dimensions of the region of interest of the individual ECG lead on the printout, were used in the segmentation: 112 × 112, 189 × 189, and 315 × 315 for COVID-19 subjects, and 112 × 112 for normal subjects [14]. These had to be manually located and placed on the 12-lead ECG printout image (Fig. 3). All ECG segments were resized to 100 × 100 for downstream input to the network. Fig. 4 shows 100 × 100 samples of ECG images for the I, II, III, and V1 leads of normal and COVID-19 cases.
Fig. 3

ECG leads at regions of interest were manually segmented using non-overlapping blocks of different sizes.

Fig. 4

Typical ECG images (100 × 100 samples) of normal and COVID-19 classes.

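As a rough illustration, the thresholding and resizing steps above can be sketched as follows (the threshold value, nearest-neighbour resize, and synthetic test image are assumptions for this sketch; the morphological noise removal of [14] is omitted):

```python
import numpy as np

def preprocess_lead(segment_rgb, thresh=180, out_size=(100, 100)):
    """Sketch of the preprocessing: threshold the green channel so the
    trace is black on white, then nearest-neighbour resize to 100 x 100."""
    g = segment_rgb[..., 1].astype(np.uint8)
    # Dark trace pixels -> 0 (black); light background/gridlines -> 255 (white).
    binary = np.where(g < thresh, 0, 255).astype(np.uint8)
    h, w = binary.shape
    rows = np.arange(out_size[0]) * h // out_size[0]
    cols = np.arange(out_size[1]) * w // out_size[1]
    return binary[rows][:, cols]

# Toy 112 x 112 "printout": white page with one dark horizontal trace.
seg = np.full((112, 112, 3), 255, np.uint8)
seg[56, :, :] = 40
lead = preprocess_lead(seg)
print(lead.shape)  # (100, 100)
```

A production version would instead load each manually placed window from the printout image and apply the morphological cleanup described in [14] before resizing.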

Attention-based 3D CNN with RC model

The attention-based 3D CNN with RC model is a shallow network comprising 19 layers: a 3D image input layer, three 3D convolution layers, three batch normalization layers, three rectified linear unit (ReLu) layers, two dropout layers, two addition layers (for the RC), one Sigmoid layer, and one Elementwise Multiplication layer, as well as the fully connected, softmax, and classification layers located after the last ReLu layer (Fig. 5).
Fig. 5

The architecture of the proposed attention-based 3D CNN with RC model.

There were two RC. The first connected the ReLu1 layer output to the second batch normalization layer, i.e., the outputs of the second 3D convolution and ReLu1 layers were added element-wise. Similarly, the second element-wise addition was applied to the outputs of the third 3D convolution and second ReLu layers to form the second residual connection. The attention mechanism was used to weight each input element by its relevance so that relevant elements could make a significant contribution to the merged output. Following the second residual connection came the attention layer, which comprised the Sigmoid and Elementwise Multiplication layers. The attention mechanism is depicted in Fig. 6. Let x_i denote the ith layer's output feature map, and let g be a gating vector obtained from a coarser scale that determines the focus region for each pixel [20]. The output x̂_i was obtained via element-wise multiplication, as given in equation (1):

x̂_i = α_i ⊙ x_i,   (1)

where α_i is the attention coefficient.
Fig. 6

Illustration of the attention layer composed of sigmoid and Elementwise Multiplication layers.

The attention coefficient α_i was obtained from Eq. (2):

α_i = σ(W_x x_i + W_g g + b_x + b_g),   (2)

where σ is the sigmoid function, b_x and b_g are the bias terms, and W_x and W_g are the linear transformations obtained using 1 × 1 × 1 convolutions. The weights of the attention module were initialized at random and fine-tuned during the deep architecture's end-to-end training. The RC bypassed the main branch's convolutional units, and the outputs of the residual connections and the convolutional units were added element by element. Had the sizes of the activations differed, the residual connections would also have required 1-by-1 convolutional layers. With RC, parameter gradients can flow freely from the output layer to the network's earlier layers, enabling the training of deeper networks. The stochastic gradient descent with momentum (SGDM) optimizer was used to train the end-to-end CNN model. Table 1 gives detailed descriptions of the layers, activations, and learnable weights of the proposed attention-based 3D CNN with RC architecture.
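A minimal NumPy sketch of the sigmoid-and-multiply attention gate of Eqs. (1) and (2) may help; scalar weights w_x and w_g stand in for the 1 × 1 × 1 convolutions, and all shapes and values are illustrative:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def attention_gate(x, g, w_x, w_g, b):
    """Additive attention gate: relevance weights in (0, 1) from Eq. (2),
    then the elementwise multiplication of Eq. (1)."""
    alpha = sigmoid(x * w_x + g * w_g + b)  # Eq. (2)
    return alpha * x                        # Eq. (1)

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 4, 2))  # feature map from the main branch
g = rng.standard_normal((4, 4, 2))  # gating signal from a coarser scale
out = attention_gate(x, g, w_x=0.5, w_g=0.5, b=0.0)
print(out.shape)  # (4, 4, 2)
```

Because every attention coefficient lies strictly between 0 and 1, the gate can only attenuate features, never amplify them, which is how irrelevant regions are suppressed.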
Table 1

Detailed overview of the proposed deep network model.

Name | Type | Activations | Learnables
image_3D_input | 3-D Image Input | 100 × 100 × 12 × 1 | –
conv3d1 | 3D Convolution | 98 × 98 × 10 × 8 | Weights 5 × 5 × 5 × 1 × 8; Bias 1 × 1 × 1 × 1 × 8
batchnorm_1 | Batch Normalization | 98 × 98 × 10 × 8 | Offset 5 × 5 × 5 × 1 × 8; Scale 1 × 1 × 1 × 1 × 8
relu1 | ReLu | 98 × 98 × 10 × 8 | –
drop1 | Dropout | 98 × 98 × 10 × 8 | –
conv3d2 | 3D Convolution | 98 × 98 × 10 × 8 | Weights 5 × 5 × 5 × 1 × 8; Bias 1 × 1 × 1 × 1 × 8
add11 | Addition | 98 × 98 × 10 × 8 | –
batchnorm_2 | Batch Normalization | 98 × 98 × 10 × 8 | Offset 5 × 5 × 5 × 1 × 8; Scale 1 × 1 × 1 × 1 × 8
relu2 | ReLu | 98 × 98 × 10 × 8 | –
drop2 | Dropout | 98 × 98 × 10 × 8 | –
conv3d3 | 3D Convolution | 98 × 98 × 10 × 8 | Weights 5 × 5 × 5 × 1 × 8; Bias 1 × 1 × 1 × 1 × 8
add12 | Addition | 98 × 98 × 10 × 8 | –
sigmoid1_1 | Sigmoid Layer | 98 × 98 × 10 × 8 | –
elemMux_1 | Elementwise Multiplication | 98 × 98 × 10 × 8 | –
batchnorm_3 | Batch Normalization | 98 × 98 × 10 × 8 | Offset 5 × 5 × 5 × 1 × 8; Scale 1 × 1 × 1 × 1 × 8
relu3 | ReLu | 98 × 98 × 10 × 8 | –
fc1 | Fully Connected | 1 × 1 × 1 × 2 | Weights 2 × 768320; Bias 2 × 1
softmax | Softmax | 1 × 1 × 1 × 2 | –
classoutput | Classification Output | 1 × 1 × 1 × 2 | –
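As a sanity check on the activation sizes in Table 1, the 98 × 98 × 10 maps follow from a stride-1 'valid' convolution over the 100 × 100 × 12 input, consistent with a 3 × 3 × 3 effective kernel (our inference; the later layers keep 98 × 98 × 10, implying 'same'-style padding there):

```python
def conv3d_valid_shape(in_shape, kernel):
    """Output spatial shape of a 'valid' (no padding, stride-1) 3D convolution."""
    return tuple(i - k + 1 for i, k in zip(in_shape, kernel))

# (100-3+1, 100-3+1, 12-3+1) reproduces the first-layer activations.
print(conv3d_valid_shape((100, 100, 12), (3, 3, 3)))  # (98, 98, 10)
```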

Experimental works and results

The proposed attention-based 3D CNN model with RC was implemented in a MATLAB environment. The 'MiniBatch' size and 'MaxEpochs' were set to 12 and 50, respectively. During training, the initial learning rate was set to 0.001 and SGDM optimization was used. The experiments were conducted based on 10-fold cross-validation. Because of the limited number of samples in the dataset, no separate validation set was used. The proposed method was evaluated using standard performance metrics: accuracy, sensitivity, specificity, and F1 score [[21], [22]]. The training progress of the developed deep network is shown in Fig. 7. Training and test accuracy scores were around 60% at the beginning of training. After the 10th iteration, accuracy scores for both the training and test datasets increased to 100%, which was maintained until the end of the training process. The loss value for the training dataset initially exceeded 7 but subsequently decreased to around 1 and kept decreasing after the 10th iteration. The loss value for the test dataset dropped to around 0 by the 10th iteration and remained there until the end of the training process.
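The 10-fold protocol can be sketched with a plain NumPy splitter (illustrative code, not the authors' MATLAB implementation):

```python
import numpy as np

def kfold_indices(n_samples, k=10, seed=0):
    """Plain k-fold splitter matching the paper's protocol: no separate
    validation set, and every sample is tested exactly once."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, test

# 500 samples (250 normal + 250 COVID-19), as in the binary experiment.
covered = []
for train, test in kfold_indices(500, k=10):
    assert len(train) + len(test) == 500
    covered.extend(test.tolist())
print(len(set(covered)))  # 500: each sample appears in exactly one test fold
```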
Fig. 7

Training and test progress of the developed deep network model.

Table 2 summarizes the performance metrics at each fold of 10-fold cross-validation. Performance metrics of 100% were obtained for Fold1, Fold6, Fold7, Fold9, and Fold10. An average accuracy of 99.0%, sensitivity of 99.6%, specificity of 98.4%, and F1 score of 99.01% were obtained using the 10-fold cross-validation strategy.
Table 2

Performance metrics of the proposed method with 10-fold cross-validation.

Metric | Fold1 | Fold2 | Fold3 | Fold4 | Fold5 | Fold6 | Fold7 | Fold8 | Fold9 | Fold10 | Average ± SD
Accuracy (%) | 100 | 98 | 98 | 98 | 98 | 100 | 100 | 98 | 100 | 100 | 99.0 ± 1.05
Sensitivity (%) | 100 | 100 | 96 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 99.6 ± 1.26
Specificity (%) | 100 | 96 | 100 | 96 | 96 | 100 | 100 | 96 | 100 | 100 | 98.4 ± 2.06
F1-score (%) | 100 | 98.04 | 97.96 | 98.04 | 98.04 | 100 | 100 | 98.04 | 100 | 100 | 99.01 ± 1.04
The cumulative confusion matrix is given in Fig. 8, which shows that only one COVID-19 and four normal samples were misclassified.
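As a quick cross-check, the averages in Table 2 follow directly from the cumulative counts in Fig. 8, taking COVID-19 as the positive class (a sketch using the standard metric definitions):

```python
def metrics(tp, fn, tn, fp):
    """Standard binary classification metrics from confusion-matrix counts."""
    acc  = (tp + tn) / (tp + tn + fp + fn)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    f1   = 2 * tp / (2 * tp + fp + fn)
    return acc, sens, spec, f1

# Fig. 8: 1 COVID-19 and 4 normal samples misclassified out of 250 each.
acc, sens, spec, f1 = metrics(tp=249, fn=1, tn=246, fp=4)
print(round(acc * 100, 1), round(sens * 100, 1),
      round(spec * 100, 1), round(f1 * 100, 2))
# 99.0 99.6 98.4 99.01
```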
Fig. 8

The obtained cumulative confusion matrix for the proposed method.

To address reservations about training a deep model on relatively small datasets, we employed two strategies to increase the size of the training dataset: use of the full imbalanced dataset, and data augmentation. In the first set of experiments, we applied the proposed method to an imbalanced dataset comprising 250 COVID-19 and all 859 normal ECG images. With 10-fold cross-validation, the average accuracy, sensitivity, specificity, and F1-score were 98.92%, 99.31%, 99.10%, and 99.19%, respectively (Table 3).
Table 3

Performance metrics of the proposed method on the imbalanced dataset with 10-fold cross-validation strategy.

Metric | Fold1 | Fold2 | Fold3 | Fold4 | Fold5 | Fold6 | Fold7 | Fold8 | Fold9 | Fold10 | Average ± SD
Accuracy (%) | 99.10 | 99.10 | 100 | 100 | 93.69 | 100 | 100 | 100 | 97.30 | 100 | 98.92 ± 0.0203
Sensitivity (%) | 96.55 | 96.55 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 99.31 ± 1.4546
Specificity (%) | 100 | 100 | 100 | 100 | 93.69 | 100 | 100 | 100 | 97.30 | 100 | 99.10 ± 2.0813
F1-score (%) | 98.25 | 98.25 | 100 | 100 | 96.74 | 100 | 100 | 100 | 98.63 | 100 | 99.19 ± 1.1557
In the second set of experiments, we used a variational autoencoder (VAE) [25] to augment the number of ECG images. The VAE was chosen instead of a generative adversarial network (GAN) [26] because of the low number of training samples. For each ECG sample image (250 COVID-19 and 250 normal), four new images were generated (1000 COVID-19 and 1000 normal). The experiments were then conducted on 1250 COVID-19 and 1250 normal ECG images using 10-fold cross-validation, which yielded excellent performance (Table 4).
Table 4

Performance metrics of the proposed method on the augmented dataset with 10-fold cross-validation strategy.

Metric | Fold1 | Fold2 | Fold3 | Fold4 | Fold5 | Fold6 | Fold7 | Fold8 | Fold9 | Fold10 | Average ± SD
Accuracy (%) | 100 | 100 | 98.80 | 100 | 100 | 99.20 | 100 | 99.60 | 100 | 100 | 99.76 ± 0.43
Sensitivity (%) | 100 | 100 | 99.20 | 100 | 100 | 100 | 100 | 100 | 100 | 100 | 99.92 ± 0.25
Specificity (%) | 100 | 100 | 98.40 | 100 | 100 | 98.20 | 100 | 99.20 | 100 | 100 | 99.50 ± 0.72
F1-score (%) | 100 | 100 | 98.80 | 100 | 100 | 99.21 | 100 | 99.60 | 100 | 100 | 99.76 ± 0.43
We also performed an additional sensitivity analysis using a subject-based approach, i.e., the ECG image of one subject was used for testing and the rest for training. As the dataset contained 500 samples (250 normal and 250 COVID-19 cases), the code was run 500 times to derive the average performance metrics. The average accuracy, sensitivity, and specificity were 97.4%, 98.0%, and 96.8%, respectively. Fig. 9 shows the confusion matrix for this subject-based analysis: only 8 normal and 5 COVID-19 samples were misclassified. These results support the robustness of the results obtained using standard 10-fold cross-validation.
Fig. 9

Confusion matrix obtained using subject-based validation.

A multiclass scenario was also considered in this study, in which 250 ECG images from each of the normal, COVID-19, and abnormal heartbeat groups were used in experiments with 10-fold cross-validation. The number of samples for each class was set at 250 to eliminate the class imbalance problem. Average accuracy, sensitivity, specificity, and F1-score values were 92.00%, 92.00%, 95.99%, and 92.03%, respectively (Table 5).
Table 5

Performance metrics of the proposed method for multiclass classification with a 10-fold cross-validation strategy.

Metric | Fold1 | Fold2 | Fold3 | Fold4 | Fold5 | Fold6 | Fold7 | Fold8 | Fold9 | Fold10 | Average ± SD
Accuracy (%) | 94.67 | 90.67 | 94.67 | 90.67 | 89.33 | 93.33 | 86.67 | 96.00 | 93.33 | 90.67 | 92.00 ± 2.88
Sensitivity (%) | 94.67 | 90.67 | 94.67 | 90.67 | 89.33 | 93.33 | 86.67 | 96.00 | 93.33 | 90.67 | 92.00 ± 2.88
Specificity (%) | 97.33 | 95.33 | 97.33 | 95.33 | 94.67 | 96.67 | 93.33 | 98.00 | 96.67 | 95.33 | 95.99 ± 1.44
F1-score (%) | 94.69 | 90.73 | 94.68 | 90.73 | 89.38 | 93.33 | 86.71 | 96.00 | 93.33 | 90.73 | 92.03 ± 2.86

Discussions

In this paper, we proposed a novel attention-based 3D CNN model with RC for ECG-based COVID-19 diagnosis. While we could have used 1D 12-lead ECG signals in our model, we chose to convert the 12-lead ECG image into 12 separate ECG segments based on the standard ECG leads, which were channeled into 12-dimensional volumes as input to the model for analysis. By so doing, we retained the ability to discern the discriminative utility of individual ECG leads for COVID-19 diagnosis, although the spatial relationships among the ECG lead positions, which may be relevant for manual interpretation by experts, were not preserved in the model. In an end-to-end DL model architecture, feature extraction (and, where applicable, concatenation and selection operations), as well as classification, needs to be fully embedded within the model. Our multichannel ECG data may increase the complexity of the classification system. To circumvent this, we opted to construct 4D data, in which the 12-lead ECG image segments were placed into the channels of the data, and feature extraction and classification were carried out via a compact attention-based 3D CNN architecture with RC. Regarding the computational demands of our model, the training and test time for one fold was 9 min and 11 s, based on an input volume of 100 × 100 × 12 × 1 and running the codes on a computer with an M4000 GPU and 8 GB RAM. If the size of the ECG segments was increased to 128 × 128, the system gave an 'out of memory' error; hence, we resized all ECG lead segments to 100 × 100. The proposed 3D CNN architecture would require high-performance computers for high-dimensional input volumes. As our model has a novel architecture, we compared its performance with combinations of its deconstructed elements, i.e., a simple 3D CNN, an attention-based 3D CNN, and a 3D CNN with RC (Fig. 10).
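The 4D input construction described above can be sketched as follows (a NumPy stand-in for the MATLAB pipeline; the shape follows Table 1):

```python
import numpy as np

def stack_leads(lead_images):
    """Stack the 12 segmented lead images (each 100 x 100) along the channel
    axis to form the 100 x 100 x 12 x 1 input volume used by the model."""
    vol = np.stack(lead_images, axis=2)[..., np.newaxis]
    assert vol.shape == (100, 100, 12, 1)
    return vol

# Twelve placeholder lead segments in place of real preprocessed ECG images.
leads = [np.zeros((100, 100), np.uint8) for _ in range(12)]
print(stack_leads(leads).shape)  # (100, 100, 12, 1)
```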
Fig. 10

Various 3D CNN models used in this work: (a) Simple 3D CNN model, (b) Attention-based 3D CNN, and (c) 3D CNN with RC.

The performance of our model was compared with the aforementioned combinations using 10-fold cross-validation. The average accuracy rates obtained using the different 3D CNN models are summarized in Table 6 and Fig. 11.
Fig. 11

Summary of accuracies obtained (Table 6) using 3D CNN models in this work.

Table 6

Average accuracy obtained using various 3D CNN model combinations with a ten-fold cross-validation strategy.

Method | Average accuracy (%)
3D CNN | 94.0
Attention-based 3D CNN | 96.0
3D CNN with RC | 98.0
Attention-based 3D CNN with RC | 99.0
It can be noted from the table that our attention-based 3D CNN model with RC yielded the highest accuracy. Fig. 12 shows the confusion matrices obtained over 10-fold cross-validation for the various 3D CNN models: 30, 20, 10, and 5 samples were misclassified using the simple 3D CNN, attention-based 3D CNN, 3D CNN with RC, and our proposed attention-based 3D CNN with RC, respectively.
Fig. 12

Confusion matrices obtained for various models: (a) Simple 3D CNN model, (b) Attention-based 3D CNN, (c) 3D CNN with RC, and (d) Attention-based 3D CNN with RC.

In Table 7, we also compared the performance of our proposed method with the recently published hexaxial mapping method [14] and the method of Attallah et al. [23]. In Ref. [14], the authors used hexaxial mapping and gray-level co-occurrence matrix features to form color images from the ECG leads; these images, coupled with the AlexNet model and the authors' own CNN model, attained average accuracies of 93.6% and 96.2%, respectively, with 5-fold cross-validation. In Ref. [23], the authors used five deep learning models of distinct structural design, called ECG-BioNet. Two levels of feature extraction from the different layers of each deep learning model were carried out; the features extracted in the higher layers of the deep models were fused using the discrete wavelet transform (DWT), after which they were integrated with the lower layers' features for an effective representation of the inputs. Furthermore, Ref. [24] studied six deep learning algorithms for diagnosing COVID-19 in binary and multiclass problems. A comparison of our proposed method with state-of-the-art methods using the same database for binary and multiclass classification is shown in Table 7. Of note, while we used ECG images from the abnormal heartbeat class as the third group for the multiclass problem, the third group in Refs. [23,24] was composed of patients from three groups: abnormal heartbeat, myocardial infarction, and prior myocardial infarction (the exact proportions were not mentioned in those manuscripts). Hence, our method's 92% accuracy may not be directly comparable with the slightly lower values in Refs. [23,24].
Table 7

Comparison of our proposed method with the state-of-the-art method using the same database.

Method | Cross-validation | Number of samples | Accuracy (%)
Binary classification:
AlexNet architecture using hexaxial mapping [14] | 5-fold cross-validation | Normal = 250, COVID-19 = 250 | 93.6
Hexaxial mapping with CNN [14] | 5-fold cross-validation | Normal = 250, COVID-19 = 250 | 96.2
Attallah et al. [23] | 10-fold cross-validation | Normal = 250, COVID-19 = 250 | 98.6
Rahman et al. [24] | 10-fold cross-validation | Normal = 250, COVID-19 = 250 | 98.6
Proposed method | 10-fold cross-validation | Normal = 250, COVID-19 = 250 | 99.0
Proposed method | Subject-based | Normal = 250, COVID-19 = 250 | 97.4
Multiclass classification:
Attallah et al. [23] | 10-fold cross-validation | Normal = 250, COVID-19 = 250, Cardiac = 250 | 91.7
Rahman et al. [24] | 10-fold cross-validation | Normal = 250, COVID-19 = 250, Cardiac = 250 | 90.8
Proposed method | 10-fold cross-validation | Normal = 250, COVID-19 = 250, Abnormal heartbeat = 250 | 92.0
The main salient features of our developed model are given below:
- A novel shallow deep model based on 3D CNN, attention, and RC, with two RC and one attention layer, was developed using 12-lead ECG images for COVID-19 detection.
- Our proposed model obtained the highest classification accuracy of 99% with ten-fold cross-validation.
- The method concatenated the 12-lead ECG images into a volume, enabling the model to efficiently learn all features for the discrimination of COVID-19 from normal samples; in other words, an ensemble mechanism operated effectively on the volume data.

The limitations of our work are summarized below:
- The complexity of the proposed model increased with the size of the input images.
- The limited number of samples in the dataset precluded tuning the parameters of the proposed method on a validation set.

Conclusions

In this study, a new attention-based 3D CNN with RC approach was proposed for COVID-19 detection using ECG printouts. The proposed architecture is a shallow network that demands neither prolonged training nor a large number of training samples. The model yielded superior average accuracy rates of 99.0% and 92.0% for binary and multiclass classification, respectively, compared with published DL approaches (Table 7). The main limitations of this work are the manual placement of windows to isolate the regions of interest, which requires technical experience, and the use of a small database to develop the model. In the future, we plan to automate the window placement and use a large, diverse database to develop the model. In addition, more deep learning approaches will be investigated for the proposed COVID-19 detection task [[27], [28], [29]].

Declaration of competing interest

None declared.