Literature DB >> 33564873

Deep learning and the electrocardiogram: review of the current state-of-the-art.

Sulaiman Somani1, Adam J Russak1,2, Felix Richter1, Shan Zhao1,3, Akhil Vaid1, Fayzan Chaudhry1,4, Jessica K De Freitas1,4, Nidhi Naik1, Riccardo Miotto1,4, Girish N Nadkarni1,2,5, Jagat Narula6,7, Edgar Argulian6,7, Benjamin S Glicksberg1,4.

Abstract

In the recent decade, deep learning, a subset of artificial intelligence and machine learning, has been used to identify patterns in big healthcare datasets for disease phenotyping, event predictions, and complex decision making. Public datasets for electrocardiograms (ECGs) have existed since the 1980s and have been used for very specific tasks in cardiology, such as arrhythmia, ischemia, and cardiomyopathy detection. Recently, private institutions have begun curating large ECG databases that are orders of magnitude larger than the public databases for ingestion by deep learning models. These efforts have demonstrated not only improved performance and generalizability in these aforementioned tasks but also application to novel clinical scenarios. This review focuses on orienting the clinician towards fundamental tenets of deep learning, state-of-the-art prior to its use for ECG analysis, and current applications of deep learning on ECGs, as well as their limitations and future areas of improvement.
© The Author(s) 2021. Published by Oxford University Press on behalf of the European Society of Cardiology.

Keywords:  Artificial intelligence; Big data; Cardiovascular medicine; Electrocardiogram; Deep learning

Year:  2021        PMID: 33564873      PMCID: PMC8350862          DOI: 10.1093/europace/euaa377

Source DB:  PubMed          Journal:  Europace        ISSN: 1099-5129            Impact factor:   5.214


Introduction

The field of deep learning (DL), which has seen a dramatic rise in the past decade, is a form of data-driven modelling that serves to identify patterns in data and/or make predictions. It has made substantial impacts on multiple aspects of modern life, from allowing the human voice to execute commands on smartphones to hyperpersonalizing advertisements. In the healthcare space, DL has been leveraged to predict diabetic retinopathy from fundoscopic images, diagnose melanoma from pictures of skin lesions, and segment the ventricle from a cardiac MRI, the last of which was recently approved by the FDA, among countless other examples. Given the vast array of imaging modalities (e.g. CT, MRI, echocardiogram) present in cardiology, DL has also been utilized extensively on cardiovascular data to address key clinical issues. Though not formally an imaging modality, electrocardiograms (ECGs) may be considered different channels (i.e. leads) of one-dimensional images (i.e. signal intensity in volts over time). While other reviews have extensively reported the technical details of various DL applications or have focused on machine learning (ML) applications for ECG analysis, a focus on developing an intuitive understanding for the clinician, as well as a clinical perspective on the impact of these advances, remains lacking. Additionally, the original research articles showcased in these publications over-represent small open-source datasets, which are marred by concerns about external validity. Moreover, many recent publications applying DL to ECGs in large, privately curated datasets to solve novel problems remain unaddressed by any review. This review will first aim to establish a foundation of knowledge for DL, with an emphasis on explaining why it is well suited to many ECG-related analyses.
Subsequently, we will provide an overview of how ECGs can be represented as a data form for DL, with brief coverage on openly available and private datasets. The Application section will build on this knowledge base and explore original DL research on ECGs that focuses on tasks in five domains: arrhythmias, cardiomyopathies, myocardial ischaemia, valvulopathy, and non-cardiac areas of use. This review will conclude with a recapitulation of the current state, limitations, promising endeavours, and recommendations for future clinical and research practice.

On artificial intelligence, machine learning, and deep learning

While a thorough discussion of the details of artificial intelligence (AI) is beyond the scope of this paper, the field and its recent advances will be refreshed for the reader’s benefit. Interested readers are encouraged to explore seminal articles that more exhaustively cover the knowledge essential for appraising and undertaking original research. Simplistically, AI refers to the idea of a computer model that makes decisions using a priori information and improves its performance with experience (i.e. more data). Clinically related tasks may involve detecting cancerous nodules on CT scans, identifying clusters of disease phenotypes, or optimizing treatment regimens in patients over time. Given its broad definition, AI is necessarily classified into multiple subsets, notably ML and, more recently, DL, which is itself a subset of ML.

Briefly, both ML and DL seek to use data, rather than a fully empirical set of human-generated rules, to solve a problem. Take, for example, the simple task of converting a temperature from Celsius to Fahrenheit. The empirical approach is to explicitly write a program that takes a temperature in °C as input and converts it into its equivalent in °F by multiplying the input by 1.8 and adding 32. If we suppose that this conversion equation were not known, one could instead use linear regression, a simple linear model common to both statistics and ML, to offer the computer an initial guess of a representative equation, Temp(°F) = m × Temp(°C) + b. One supplies a starting guess for the unknown parameters (here, m and b) that represent this information (also called the ‘model’), a table of temperatures in °C (the ‘features’) with their corresponding values in °F (the ‘labels’), and a set of instructions for fitting this data to the underlying equation (the ‘optimization’) by minimizing the prediction error (the ‘loss’ or ‘cost function’), and finally executes this instruction set to continually update the parameters until the data fit the underlying equation (the ‘training’). Though simplistically represented, each parenthetical term above identifies one of the most integral and defining components of an AI algorithm; tuned appropriately, these components give rise to novel techniques and entire subspecialties of data-driven AI.

Additionally, while much of probability and statistics is used to mathematically derive and establish the basis for many ML and DL models, statistical models, which are generally parametric, tend to prioritize inference: understanding a dataset’s features and their impact on the outcome of interest. These models tend to be simpler and do not capture non-linearity as well as ML or DL models. In equivalent supervised tasks, ML and DL models instead prioritize optimizing outcome prediction, often by building more complex model representations. The main drawback, however, is that interpreting the model’s learned parameters becomes significantly harder than in more statistical frameworks.

Nonetheless, there are nuances between ML and DL that set them apart and are worth discussing. Predominantly, DL separates itself from its parent and predecessor, ML, by its underlying architecture (which in turn shapes other facets of the pipeline). Deep learning models are composed of many simple linear models (‘nodes’) arranged in series (each series termed a ‘layer’, the number and depth of which contribute eponymously to these models being referred to as ‘deep’) with intervening non-linearities that encourage more complex representations of information (Figure ). This hierarchical structure encourages learning simple representations at each layer that build up to complex concepts.
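To make the terminology above concrete, the Celsius-to-Fahrenheit example can be written as a tiny ‘training’ loop. This is a hypothetical, minimal sketch in plain Python (no DL library), not code from any of the reviewed studies:

```python
# Learning Temp(F) = m * Temp(C) + b from example pairs ('features'/'labels')
# via gradient descent, instead of hard-coding m = 1.8 and b = 32.
features = [-40.0, 0.0, 20.0, 37.0, 100.0]         # temperatures in Celsius
labels = [c * 1.8 + 32 for c in features]          # corresponding Fahrenheit

m, b = 0.0, 0.0                                    # starting guess ('model' parameters)
lr = 1e-4                                          # learning rate

for step in range(200_000):                        # 'training'
    # gradients of the mean-squared-error 'loss' with respect to m and b
    grad_m = sum(2 * (m * c + b - f) * c for c, f in zip(features, labels)) / len(features)
    grad_b = sum(2 * (m * c + b - f) for c, f in zip(features, labels)) / len(features)
    m -= lr * grad_m                               # 'optimization' update
    b -= lr * grad_b

print(round(m, 2), round(b, 2))                    # converges towards 1.8 and 32
```

The loop never sees the conversion formula; it recovers m ≈ 1.8 and b ≈ 32 purely from the feature–label pairs, which is the essence of the data-driven approach described above.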
In the most intuitive example, image recognition tasks, as work by Olshausen et al. and others has shown, this amounts to each layer (e.g. convolutional, discussed below) in the series learning simple entities (e.g. lines, circles) that build up into more sophisticated representations (e.g. beaks, feathers, eyes).

Figure. Understanding important layer types: two common layer types used in deep learning pipelines for image processing are fully connected layers (top), which function simply as many linear regression models with a non-linear activation function that increases the informational capacity of the model, and convolutional layers (bottom), which are composed of many ‘kernels’ that learn particular patterns to detect (small gradient boxes) and scan across an input signal wherever those patterns may be present. In this example, the kernels from top to bottom represent the shape of an R-S wave, a P-wave, and a T-P wave segment, and their relative strengths of detection (high: yellow; low: blue) are shown for the input ECG signal (magenta). The resulting signals localize these key kernel patterns, helping the deep learning model learn both the presence and the relationships of such features in the input signal. ECGs, electrocardiograms.

By designing models with increased capacity, DL inherently reduces the need for extensive, manual feature engineering on datasets that are not natively compatible (e.g. raw ECG waveforms, variable-length sequences) with typical ML models. For example, Narula et al. demonstrate the use of an ML algorithm to distinguish physiologic hypertrophy from hypertrophic cardiomyopathy (HCM) using information such as LV volume and wall strain derived from speckle-tracking echocardiogram data.
Simplistically speaking, however, DL, by virtue of its greater capacity to perform complex tasks like computer vision and knowledge representation, may obviate the need for such manual feature derivation through its ability to process raw echocardiogram video data and automatically learn important features (which may or may not include, or be derived from, the aforementioned features) in order to perform the classification step. It is worth noting that these engineered features may also be used for training DL models, but DL models operating on such features and other structured, tabular data (e.g. patient demographics, lab values) have largely been unable to demonstrate an improvement over comparable statistical or ML frameworks, where the data complexity is not high enough to give deep models an advantage over well-performing shallow models. Critically, the reason to relinquish a priori feature engineering may not be apparent to the reader. For example, with respect to the ECG, frameworks for its interpretation (e.g. rate, rhythm, axis, intervals, ventricles) already exist to classify and localize various cardiac diseases. However, despite the relative robustness of these systems, it would be naïve to discount the possible existence of other morphologies indiscernible to the human eye, either locally or as relationships between beats, given the complexity of the cardiac conduction system. In signal processing and imaging, there are many underived features in the raw waveforms and pixels, respectively, that the high-fidelity automatic feature engineering of DL may take advantage of. Such patterns, though not fully characterized, plausibly explain the encouraging results of Attia et al. in predicting paroxysmal atrial fibrillation (AF) in patients from a benign, normal sinus rhythm ECG.
However, the cost of this luxury of capturing complex data representations and improved predictive performance is often the aforementioned loss of model interpretability, earning the technique its reputation as a ‘black box’. Though methods have been developed to gain more insight into the parameters learned by these models, another notable pitfall of high-capacity models is overfitting, which is typically caused by a model having more capacity than the relevant information present in the data, and the task, require. Overfitting permits the model to learn inappropriate aspects of the data, giving the false impression of performing well while generalizing poorly to other datasets. Typically, this issue arises when high-capacity models are used for prediction on small datasets, a slippery slope that is easy to descend when trying to improve a model’s performance. Overfitting may also occur in response to biases present in the dataset, notably when data acquisition is limited to a single site or manufacturer or restricted to a subset of the general population. To avoid such pitfalls, it is essential to consider the quality of the dataset, which, if poor enough, may never be compensated for by any degree of model adjustment. Best practices dictate the use of a training set (usually 60–80% of a given dataset, varying with data availability and outcome prevalence) for the model to learn the parameters of a given network configuration, a validation set (anywhere from 10% to 20% of the dataset) to learn the best configuration for the model (i.e. the size and number of layers, the type of non-linear activations, etc.), and a test set (usually 10–20% of the dataset) to report the final model’s performance. Commonly reported metrics for assessing model performance include precision or positive predictive value (PPV), recall (sensitivity), specificity, area under the receiver operator characteristic curve, i.e.
AUC-ROC (which reflects the model’s ability to distinguish between task outcomes), and the F1-statistic (which measures model performance especially under class imbalance, when one outcome or characteristic is significantly overrepresented in the dataset). While the AUC-ROC, also known as the c-statistic, tends to be the most heavily reported and investigated value, it is important to consider all metrics during appraisal, since each is sensitive to the system’s inherent limitations (e.g. class imbalance). Finally, we conclude with an overview and intuitive description of the most common DL architectures encountered during the literature retrieval process. By far, convolutional neural networks (CNNs) are the most common architecture used for analysing ECGs. At the heart of these networks is the convolution operation, a classical technique in signal processing for localizing key features and reducing noise. Convolution refers to the act of taking a small pattern (a so-called ‘kernel’) and identifying where in the input that pattern arises (Figure ), akin to a sliding window. The resulting ‘heat map’ of activity identifies where such patterns exist in the image, which can then be used to localize important features, retain global information through successive layers, and remove artefacts deemed unnecessary by the neural network during training. For example, one of the simplest convolutional kernels functions as an edge detector by responding to horizontal or vertical changes in a signal. Combinations of these simple edge detectors, in parallel and in series, allow a CNN to learn how edges combine to form more complex shapes, like the number 9. This generic operation allows sophisticated architectures to be built (e.g. AlexNet, GoogLeNet, DenseNet, ResNet) that achieve state-of-the-art performance on standard image competition datasets (e.g.
ImageNet) and serve as inspiration for the development of other models. While CNNs are well suited for fixed-length spatial data, recurrent neural networks (RNNs) approach problems represented as fixed- or variable-length sequences (e.g. sentences, signals) and characterize the temporal and spatial relationships in the data. The core node in this architecture operates in a loop: for each element in the sequence, it transforms that element into an output and a hidden representation, the latter of which serves as an additional input when processing the next element in the sequence. In this way, the architecture maintains a memory of the important parts of the sequence and updates its output with that information. Further improvements on this basic design include bi-directional RNNs, gated recurrent units (GRUs), long short-term memory (LSTM) networks, and attention/transformer networks, which address the shortcomings of a naïve RNN and achieve state-of-the-art performance in speech recognition, neural (language) translation, and music generation. As is evident, the classical tasks for which these networks were devised do not readily seem analogous to ECG analysis, given the ECG’s cyclic format (i.e. heartbeats) and its spatial and temporal duality. It is therefore worthwhile to discuss the ECG from a data perspective and how it maintains a high level of compatibility with DL across different types of architectures.
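To build intuition for the kernel-scanning operation described above, here is a minimal, hypothetical sketch of a 1D convolution with a simple edge-detecting kernel; the toy signal and kernel are illustrative, not taken from any reviewed study:

```python
# Slide a small 'kernel' across a 1D signal; each output value is the dot
# product of the kernel with the window beneath it, producing the 'heat map'
# of pattern activity described in the text.
def convolve1d(signal, kernel):
    width = len(kernel)
    return [
        sum(k * s for k, s in zip(kernel, signal[i:i + width]))
        for i in range(len(signal) - width + 1)
    ]

# The kernel [-1, 1] is an edge detector: it responds strongly wherever the
# signal changes sharply, e.g. at the steep upstroke of an R-wave-like spike.
signal = [0, 0, 0, 5, 0, 0, 0, 1, 1, 0]   # a sharp spike, then a gentle bump
edges = convolve1d(signal, [-1, 1])

print(edges)  # large magnitudes flag the spike's steep up- and down-strokes
```

A trained CNN learns many such kernels simultaneously, with deeper layers combining these simple detectors into more sophisticated pattern recognizers.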

Electrocardiograms as data

Historically, heartbeat classification and identification of the P-QRS-T segments were the first data analysis tasks to be performed, and they were achieved with signal processing approaches. These ECGs, originally time series of signal intensity, were decomposed into wavelike components with Fourier transformation, Hermite techniques, and wavelet transformations. This may be considered a form of feature extraction, since these transformations make important features, such as irregularity in rhythm or rhythm frequency, more discernible for downstream models. Such wavelet-based convolutional techniques have achieved 93% accuracy on the MIT-BIH arrhythmia database. However, ML and DL models have generally achieved better performance with a promise of better generalization and have been favoured since. In that light, for data-driven model development, it becomes important to identify the best way to represent the signal for the task being solved (Figure ). The ECG signal may be represented in a variety of fashions, each of which may be amenable to a DL pipeline. First, the ECG may be subsampled into individual heartbeats of fixed length, which can generate hundreds to thousands of samples per ECG from which features may be derived and used in a more traditional DL network, such as a fully connected neural network. Second, it can be represented as a 2D boolean (zeros or ones) image instead of a 1D signal, which is amenable to diagnosing conditions from a fixed-length ECG strip and is highly compatible with more traditional image-based CNN architectures. The signal may be one-dimensional or multi-dimensional, depending on the number of leads used, allowing more information to be captured. Finally, the ECG may be represented as a sequence of beats, each linked to the next in time, and treated as a time series that may be analysed by an RNN-type framework.
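Two of these re-representations can be sketched with a toy trace standing in for a real single-lead ECG; this is a hypothetical illustration (the signal values and R-peak indices are invented for the example):

```python
# A toy 1D trace and the (assumed) sample indices of its detected R-peaks.
signal = [0.0, 0.1, 1.0, 0.2, 0.0, 0.1, 0.9, 0.2, 0.0, 0.1, 1.1, 0.3]
r_peaks = [2, 6, 10]

# 1) Fixed-length beat windows centred on each R-peak (one sample per side
#    here); each window becomes an independent training sample.
half = 1
beats = [signal[p - half:p + half + 1] for p in r_peaks]

# 2) A 2D boolean 'image' of the strip: rows are amplitude bins (top row =
#    highest amplitudes), columns are time; a 1 marks the trace in that bin.
n_bins = 4
def bin_of(v, lo=0.0, hi=1.2):
    return min(n_bins - 1, int((v - lo) / (hi - lo) * n_bins))

image = [[1 if bin_of(v) == (n_bins - 1 - row) else 0 for v in signal]
         for row in range(n_bins)]

print(beats[0])   # the first beat window
print(image[0])   # top amplitude row: 1s only where the R-peaks occur
```

Real pipelines would of course operate on sampled voltages at hundreds of hertz and use a proper peak detector, but the reshaping logic is the same.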
Figure. Supervised deep learning pipeline: this figure shows what a simple deep learning pipeline for ECG analysis may look like. First, ECGs recorded from patients may be stored in an electronic health record (EHR) system that can be queried for their retrieval (Panel 1). While user-readable formats may be generated when clinicians query the EHR to view a patient’s ECG, these ECGs are stored as sequences of numbers with accompanying header information (e.g. patient medical record number, date of ECG acquisition) in an easily queryable data structure. Next, at the time of analysis, stored patient ECGs may be queried selectively to construct a dataset that is amenable to a DL model (i.e. matrix format) for training and evaluation and relevant to the application of interest (Panel 2). Third, ECGs must be pre-processed to remove noise and baseline variation; they may then be re-represented as one-dimensional signals, as pixelated images, in the Fourier space, or as wavelets (Panel 3). Finally, the dataset may be split into training, validation, and testing sets and used to help a deep neural network learn to predict a particular outcome of interest (Panel 4). ECGs, electrocardiograms.

The type of representation chosen for ECG analysis will ultimately depend on the dataset available. A list of the most common freely available datasets encountered in the literature search is shown in Table . The MIT-BIH AF database was the earliest to be released, containing 25 two-lead ECGs, each ∼10 h long. As other databases followed from the same institution (MIT-BIH), the low number of unique patient ECGs was compensated for by their length, which was subsampled to generate thousands of shorter ECGs centred around each beat and motivated the early research endeavours attempting to perfect beat classification.
The Computing in Cardiology Challenge datasets, by introducing much larger datasets, set the stage for novel task definitions (ranging from AF classification and ECG abnormality detection to ECG quality assessment and sleep arousal classification). Additionally, though less clean and without extensive annotations for ML or DL tasks, the MIMIC database gained popularity as well, offering >67 000 ECGs from ICU patients. The past half-decade, however, has also seen a growth in institutional datasets (Table ), which have surpassed the number of annotated ECGs in these open databases by orders of magnitude. While few institutions have published evidence of such databases, the retrospective collection of ECG data has allowed more cohort-based questions to be asked, many of which are discussed in the sections below.

Table. Publicly available ECG datasets: this table lists all publicly available ECG datasets that were the focal point and source of ECG-based data-driven modelling prior to the new, large, privately curated datasets. ECGs, electrocardiograms. AFib, AVB, LBB, NSR, PAC, PVC, RBB, STD, and STE.

Table. Applications of ECGs using deep learning: this table highlights the 31 applications found during the literature search for ECG analysis, with information about the dataset source, sample size (by unique ECGs and unique patients) for training and testing, the task at hand, and the neural network architecture used. Because these studies do not use the same metrics or validation protocols to evaluate each model’s performance, and because the authors firmly believe that comparing models is tenuous without greater context than this table can provide, these measures have been omitted from the table. CNN, convolutional neural network; ECGs, electrocardiograms; LSTM, long short-term memory; RNN, recurrent neural network.
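Whichever dataset is used, the train/validation/test protocol described earlier in this review can be sketched in a few lines; this is a hypothetical minimal example with toy (ECG, label) pairs standing in for real records:

```python
import random

# An 80/10/10 split of toy (ecg_id, label) pairs. Real studies should also
# ensure that ECGs from the same patient never straddle the split boundary.
random.seed(0)
dataset = [(f"ecg_{i}", i % 2) for i in range(1000)]   # toy samples with labels
random.shuffle(dataset)                                 # shuffle before splitting

n = len(dataset)
train = dataset[: int(0.8 * n)]                # learn the model parameters
valid = dataset[int(0.8 * n): int(0.9 * n)]    # choose the model configuration
test = dataset[int(0.9 * n):]                  # report final performance once

print(len(train), len(valid), len(test))
```

Shuffling before splitting avoids ordering biases (e.g. by acquisition date), and the test set is touched only once, for the final reported metrics.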

Applications

This review identified 31 original research papers addressing applications of DL to ECG analysis, starting from a PubMed query for [(‘deep learning’ OR ‘machine learning’ OR ‘artificial intelligence’) AND (‘electrocardiogram’ OR ‘ECG’ OR ‘ecg’ OR ‘electrocardiograph’)] between 1 April 2015 and 15 May 2020 (Figure ). Since many of the original research articles performed beat classification using the open-source datasets and were exhaustively addressed in prior reviews, only papers utilizing >1000 unique ECGs (including both training and test data) were included.

Figure. Paper selection process: consort diagram demonstrating the selection criteria used in retrieving the literature evaluated in this review. The number of articles corresponding to different application categories is also shown.

Arrhythmias

Conduction system abnormalities are the most natural cardiac disorders to tackle with ECGs. Motivated by a relatively high adult population prevalence of around 3%, significant work has been devoted to diagnosing AF, the most common arrhythmia, with fewer ML works on diagnosing other aberrant waveforms (e.g. ventricular tachyarrhythmias). The problem of AF identification by ECG has been the subject of many research endeavours encompassing all stripes of AI, including signal processing, ML, and DL, the last of which is detailed in Table . In what may be the most unique but clinically relevant application, Attia et al. used DL to predict paroxysmal AF from a patient’s first clinically benign (i.e. normal sinus rhythm) ECG, knowing that these patients were ultimately diagnosed with AF at least 30 days after this benign ECG. Using a CNN architecture with residual blocks, which allow deeper models to be trained more efficiently, the authors used 454 789 ECGs from 126 526 patients for training and achieved promising performance. While the study design may suffer from heavy selection bias in failing to address patients with ultimately undiagnosed AF, and offers no negative predictive value (NPV) despite suggesting the model’s utility as a screening test, the true value of this work lies in its innovative use of ECG data and in entertaining a possible adjuvant role for DL alongside CHA2DS2-VASc in recommending anticoagulation for patients with etiologically cryptogenic stroke and, more generally, in assessing the risk of stroke secondary to underlying AF. DL models on ECGs have also been shown to perform at the level of medical professionals. Using only a single ECG lead, Hannun et al. curated a dataset of 91 232 ECGs from 53 549 patients in an ambulatory setting.
At the cost of a small test set, the authors benchmarked the model’s encouraging performance by having expert cardiologists manually annotate all 328 test set ECGs; the experts performed worse than the model in detecting all arrhythmias except junctional rhythm and ventricular tachycardia. At a larger scale, Ribeiro et al. demonstrate end-to-end training on the largest ECG database found in this review, comprising 1 558 415 ECGs from a tele-ECG service in southeast Brazil, used to train a CNN with residual connections to diagnose various arrhythmias, such as AVB Type I, RBBB, LBBB, sinus tachycardia and bradycardia, and AF. Somewhat similar to Hannun et al., the performance of this model, judged by its PPV, sensitivity, specificity, and AUC, was marginally better than that of a cohort of medical trainees (residents and medical students). Extending this multi-class classification further, Smith et al. refined the ECG classification problem to triaging ECGs in the emergency department as normal, abnormal, or emergent, subtyped by etiology (e.g. ventricular rhythm emergency vs. significant AV conduction abnormality), in a single-centre study in Minnesota, USA. They compared the performance of a pre-trained DL model from an industrial partner (Cardiologs Technologies) against the conventional, on-board algorithms that detect these abnormalities on the ECG machines themselves (Mortara/Veritas). On a cohort of 1500 randomly sampled ECGs from that year, the DL model showed greater specificity and accuracy in triaging ECGs and, despite a marginal loss in sensitivity, demonstrated potential for reducing false alarms by ∼50%. Recently, van de Leur et al.
also developed a model to triage ECGs, but using a dataset orders of magnitude larger and additionally incorporated a gradient-based ‘saliency feature mapping’, which leverages how the output of a model changes with small changes to different regions of the input signal, to identify important features investigated by the model for different types of presentations. Similar to the models developed by Smith et al., these models retain high specificity (0.88 to 0.98 for different classes) despite low sensitivity, highlighting their use in rapid escalation of care for those flagged by the model. Beyond these private datasets, there were three open datasets that met the inclusion criteria for database size: Computing in Cardiology (CINC) 2017, CINC 2015, and CPSC2018 (later merged into the CINC 2020). In the CINC 2017 competition, which provided contestants with a training set of 8 528 single-lead ECGs for diagnosis of AF vs. NSR, other arrhythmias, and noise, the winner of the competition used an LSTM stacked with an XGBoost classifier (a tree-based ML algorithm). Oster et al. helped externally validate the second-place winner of this competition on 450 four-lead ECGs from the UK Biobank. As expected, the ML algorithm did not generalize well to this novel dataset (F1-score 58.9%); however, a DL model (CNN + LSTM) that was reported after the challenge concluded demonstrated close to a 30% improvement (F1-score 74.1%). 
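The gradient-based saliency idea mentioned above (how much the model’s output changes with small changes to different regions of the input) can be approximated with finite differences. This is a hypothetical sketch: the ‘model’ below is a toy scoring function standing in for a trained network, keyed on one input region for illustration:

```python
# Toy stand-in 'model': its output depends strongly on sample index 3,
# weakly on everything else (a real saliency analysis would use a trained
# network and compute exact gradients via backpropagation).
def model(signal):
    return 3.0 * signal[3] + 0.1 * sum(signal)

def saliency(signal, eps=1e-3):
    base = model(signal)
    scores = []
    for i in range(len(signal)):
        perturbed = list(signal)
        perturbed[i] += eps                      # nudge one input sample
        scores.append(abs(model(perturbed) - base) / eps)
    return scores

sal = saliency([0.0, 0.2, 0.1, 1.0, 0.1, 0.0])
print(sal)  # the region the model relies on stands out
```

Plotting such scores over the input ECG yields the ‘saliency feature map’ that highlights which waveform regions drive each prediction.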
In another unique application, a deep CNN trained on AliveCor ECG data, the source of the CINC 2017 challenge dataset, was deployed on a single-lead recorder system (KardiaBand, Apple Watch) to continuously monitor for AF in 24 patients. When compared with annotated reports from an insertable cardiac monitor (ICM), the model achieved encouraging performance (episode sensitivity 97.5% and duration sensitivity 97.7%) in these 24 patients, highlighting the utility of DL in creating an inexpensive, non-invasive approach to AF surveillance and management. For the CPSC2018 challenge, Cai et al. added data from additional sources (hospital, ambulatory ECG monitoring device) and trained a DenseNet-inspired CNN to reach state-of-the-art performance on this multi-centre test set, with an AUC of 0.994 and a sensitivity of 99.1% for the three-label classification task (AF, normal, other arrhythmias). Furthermore, the authors explored the parameter weights of the first convolutional layer of their network and found that the model learned, as expected from the premise of DL models, low-level features like peaks, troughs, and upward/downward slopes in the signal, suggesting the model’s efforts to remove baseline shifts and identify key landmarks (i.e. P-waves) for diagnosis. Ultimately, tackling arrhythmias is the most classical of pattern recognition problems around the ECG. While their diagnosis has been addressed heavily, few works have investigated the direct role of these models in patient management, and to our knowledge, only a few have assessed which characteristics of the ECG are significant for diagnosis. Further work may be undertaken to integrate and assess the role of these DL solutions in direct clinical care, in screening and diagnosis of less prevalent disease states (e.g.
congenital long QT syndrome), in more accurately diagnosing arrhythmias, like complex atrioventricular block and wide-complex QRS tachyarrhythmia, which may be difficult to discern clinically, and in providing insights to predicting outcomes after interventional procedures (e.g. AF ablation).

Valvulopathy

While the ECG lacks the sensitivity to diagnose valve disease within traditional clinical frameworks, subtle structural changes in response to long-standing valvular disease may be discovered by a DL model and used to diagnose these pathologies. Indeed, Kwon et al. demonstrate the use of an ensemble model, combining a CNN classifier operating on raw 12-lead ECG signals with a fully connected network that incorporates demographic information and numeric ECG-derived features (HR, QT interval, QRS duration, QTc, etc.), for classification of severe aortic stenosis (AS) (valve area <1.5 cm2 or mean pressure gradient ≥20 mm Hg, as confirmed by echocardiography). Notably, the authors validated this model on 10 865 patients from a secondary hospital centre, with an encouraging AUC of 0.884. The authors also performed a saliency analysis to identify the features on the ECG most heavily used for AS prediction, identifying the model’s focus on the T-wave in V1–V4, which has been linked with delayed repolarization from AS-related ventricular hypertrophy. However, the specificity of diagnosing AS relative to other cardiomyopathies was not evaluated in this article, an important drawback given that the model may instead be learning to distinguish non-specific structural changes secondary to AS rather than AS itself. With the same motivation, Kwon et al. replicated the above study in patients with significant mitral regurgitation (MR) (regurgitant orifice area ≥0.2 cm2, regurgitation volume ≥30 mL, regurgitation fraction ≥30%, and MR grade II–IV). Here they instead opted for a CNN-only network with raw ECG data as the input, trained on 56 670 ECGs from 24 202 patients in one hospital system. The external validation test set comprised 10 865 ECGs from another hospital, on which the model had high sensitivity and NPV at the expense of low specificity and PPV, suggesting its applicability as a screening tool for ruling out MR.
A final saliency analysis was notable for the model’s focus on P-wave flattening, which can be explained physiologically as secondary to a more distributive atrial depolarization as a result of atrial stretching from long-standing MR, as well as T-wave abnormalities, which could be prioritized in patients with AF (and thus an absent P-wave) secondary to MR. For patients without MR, the algorithm weighed heavily on the QRS complex, suggesting that the absence of QRS widening is sensitive for eliminating MR.
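Saliency analyses like the ones reported here are often approximated by occlusion: mask one window of the signal at a time and measure how much the model's output drops. A minimal, illustrative sketch in Python, where `toy_model`, the window size, and the signal are hypothetical stand-ins rather than the published network:

```python
import numpy as np

def occlusion_saliency(model, ecg, window=40):
    """Occlusion saliency for a 1-D ECG: zero out a sliding window and
    record how much the model's output drops. Larger drops mean the
    occluded region mattered more to the prediction."""
    base = model(ecg)
    saliency = np.zeros_like(ecg, dtype=float)
    for start in range(0, len(ecg) - window + 1, window):
        masked = ecg.copy()
        masked[start:start + window] = 0.0          # occlude one window
        saliency[start:start + window] = base - model(masked)
    return saliency

# Toy stand-in "model": responds only to amplitude in samples 200-260
# (e.g. where a T-wave might sit) -- purely illustrative.
def toy_model(x):
    return float(np.abs(x[200:260]).mean())

rng = np.random.default_rng(0)
ecg = rng.normal(0, 0.05, 500)
ecg[200:260] += 0.8                                  # simulated T-wave bump
sal = occlusion_saliency(toy_model, ecg)
print(int(np.argmax(sal)))                           # peak saliency near sample 200
```

Regions whose occlusion causes the largest drop are the ones the model leans on, analogous to the T-wave focus in V1–V4 reported for AS.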

Cardiomyopathy

With respect to cardiomyopathies, both HCM and LV systolic dysfunction have been the focus of multiple research groups. In a unique study combining elements of DL and ML, Tison et al. trained a modified CNN architecture (U-Net) on publicly available and institutional data to automate ECG segment classification (e.g. P wave, PR segment, QRS complex). Rather than opting for an end-to-end DL architecture, the authors then generated a feature vector from the DL model and fed it into a more classical ML algorithm on a set of 35 466 ECGs to predict the presence of pulmonary hypertension, HCM, cardiac amyloidosis, and mitral valve prolapse, achieving encouraging AUROCs, as low as 0.78 for MVP prediction and notably 0.91 for HCM detection. For HCM, Ko et al. at the Mayo Clinic trained a CNN on 12-lead ECGs from ∼47K patients to diagnose the disease. Remarkably, their model achieved an extremely high AUC of 0.96 on the test set, and though it suffered from a relatively low PPV of 31%, concomitantly strong NPV and sensitivity suggest its use as a screening tool in clinically suspected patients. A secondary analysis showed that the model responded to a patient who underwent septal myectomy by lowering its diagnostic probability of HCM from 72% before the operation to 2.5% after. Furthermore, the model retained its high AUC in a subgroup of patients with left ventricular hypertrophy (LVH), demonstrating its ability to distinguish true HCM (disease) from non-HCM LVH (physiologic). Further demonstrating the adaptability of DL architectures to different problems, Kwon et al. extended their AS classification architecture to detect LVH. Training their ensemble classifier, which leverages both raw ECG waveforms in a CNN and structured patient data, on 35 694 ECGs from 12 648 patients, they achieved a respectable AUC of 0.87 on a test set from another hospital centre.
The model was benchmarked against cardiologists assessing for LVH using the Sokolow–Lyon criteria and, operating at the same specificity, outperformed their sensitivity by 177%. A saliency analysis revealed that the model focused particularly heavily on the QRS complex during an 'easy' LVH diagnosis, in line with clinical criteria, but concentrated on P-wave morphology in V1–V3 and the T-wave in I and aVR during more difficult cases, for which clinical criteria are generally absent. On a different use case, Attia et al. were the first to report the use of DL to predict low EF (<35%), training a simple CNN on a cohort of 35 970 patients and achieving an AUC of 0.93 on a test set of 52 870 patients. Of significance, the model's performance remained agnostic to age and sex, unlike BNP, which is sensitive to these patient factors and has been proposed as a marker for low EF despite its lower AUC (0.60). A follow-up study included an additional 6008 patients who had ECGs for non-cardiac clinical indications but had echocardiograms within a year of the ECG indicative of systolic dysfunction. With a high AUC on this external validation set (0.918), these results are encouraging and suggest that, in combination with a BNP level >150, the model and lab test could be excellent candidates for systolic dysfunction screening. Noseworthy et al. further assessed this model's robustness by investigating its performance across race and ethnic groups. Notwithstanding the challenges of binning patients into a social construct such as race, the authors demonstrated the model's invariance in predicting LVEF across various races and ethnicities, retaining AUCs >0.93 for each group.
Additionally, the model demonstrated some inherent ability to predict race from an ECG (AUCs 0.76–0.84), though this may be falsely elevated given severe class imbalances (over-representation of non-Hispanic whites) in the training set. Kwon et al. greatly extended this demonstration for prediction of reduced EF (EF <40% and EF <50% as the primary and secondary study outcomes, respectively) by adding to their CNN a fully connected neural network trained on both patient-level demographic and ECG-derived data from 13 486 patients. The authors report encouraging performance (AUC = 0.889 and 0.850 for the primary and secondary outcomes on the external validation set) on internal and external validation sets of ∼10 000 ECGs. It is worth noting that logistic regression (LR) and random forest (RF), two fundamental ML techniques, performed only marginally worse than the DL model (AUC = 0.853 and 0.847 for LR and RF, respectively, P < 0.001), which may highlight the limited advantage of DL models over statistical or ML techniques on tabular data. By perturbing input values for different features and analysing the impact on the model's AUC, the authors found that the most salient features for the DL model were, surprisingly, in agreement with those identified by logistic regression (e.g. HR, T-wave axis, QRS duration, sex, age), suggesting that the DL model's value lies in representing a more complex, non-linear interplay between these variables rather than a simply linearly weighted one. Future directions include utilizing DL with ECG for early identification, understanding, or differentiation of cardiomyopathies that are clinically less well understood, such as heart failure with preserved EF (HFpEF) or cardiac amyloidosis.
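The perturbation analysis described above follows the general recipe of permutation-style feature importance: scramble one input feature at a time and measure the resulting drop in AUC. A toy sketch on synthetic tabular data, with a fixed linear function standing in for the trained model (all names and values are illustrative):

```python
import numpy as np

def auc(scores, labels):
    """Rank-based AUC: probability a random positive outscores a random negative."""
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    wins = (pos[:, None] > neg[None, :]).sum() + 0.5 * (pos[:, None] == neg[None, :]).sum()
    return wins / (len(pos) * len(neg))

def permutation_importance(predict, X, y, rng):
    """Shuffle one feature column at a time and record the AUC drop."""
    base = auc(predict(X), y)
    drops = []
    for j in range(X.shape[1]):
        Xp = X.copy()
        rng.shuffle(Xp[:, j])                    # destroy feature j's signal
        drops.append(base - auc(predict(Xp), y))
    return drops

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 3))                      # e.g. [QRS duration, HR, noise]
y = (X[:, 0] + 0.3 * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)
predict = lambda X: X[:, 0] + 0.3 * X[:, 1]      # stand-in for a trained model
drops = permutation_importance(predict, X, y, rng)
print([round(d, 3) for d in drops])              # feature 0 matters most, feature 2 not at all
```

Features whose permutation costs the most AUC are the most salient, which is how agreement between the DL model and logistic regression can be assessed on the same footing.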

Ischaemia

Though myocardial ischaemia is one of the most classical areas of cardiovascular research, the literature search revealed only one paper investigating this domain with ECGs and DL. Tadesse et al. used a popular framework known as transfer learning, in which a model trained on one task (e.g. classifying real-world objects in photos) is partially re-trained on a completely new, but structurally similar, dataset to solve another task. By transforming the ECGs into the Fourier space (which simply changes the representation of an ECG from signal intensity vs. time to signal intensity vs. wave frequency) and spatially stacking all 12 leads together (to form a 2D image), they trained a pre-existing, state-of-the-art image classification model, GoogLeNet, on an openly available Chinese ECG Challenge dataset and a privately curated dataset of ∼17 000 ECGs from patients in Southern China with MI (STEMI and NSTEMI), attaining a respectable accuracy of 86% on the private dataset. However, the model performed notably worse, with an accuracy of 49%, on the Challenge dataset. Furthermore, despite highlighting an interesting technical method for performing DL on the ECG, the authors did not report sensitivity, specificity, or AUC analyses, leaving room for another research effort to establish a precedent for the use of DL on ECGs in ischaemic cardiac disease. Future directions may involve detection of subclinical CAD along, or prior to, the ischaemic heart disease spectrum (e.g. stable angina, unstable angina).
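The Fourier-space transformation and lead stacking described above can be sketched in a few lines; the sampling rate, signal content, and shapes below are illustrative assumptions, not the authors' preprocessing pipeline:

```python
import numpy as np

fs = 500                                    # assumed sampling rate, Hz
t = np.arange(10 * fs) / fs                 # a 10-second, 12-lead synthetic "ECG"
rng = np.random.default_rng(0)
ecg = np.sin(2 * np.pi * 1.2 * t) + 0.05 * rng.normal(size=(12, t.size))

# Move each lead into the Fourier space: intensity vs. time becomes
# intensity vs. frequency (magnitude spectrum of a real-valued signal).
spectra = np.abs(np.fft.rfft(ecg, axis=1))

# Stack all 12 per-lead spectra into a single 2-D "image" that an
# image-classification network (e.g. GoogLeNet) can ingest.
image = spectra                             # shape: (12 leads, frequency bins)
print(image.shape)                          # (12, 2501)
```

With a frequency resolution of fs/N = 0.1 Hz per bin here, the dominant 1.2 Hz component lands in bin 12, illustrating how periodic structure in the trace becomes a spatial feature in the stacked image.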

Extracardiac

Outside the immediate realm of cardiological disease, though certainly not without an impact on the heart, DL has been applied to ECGs in two major areas: identifying electrolyte abnormalities and prognosticating health status. Physiologically, deviations from baseline in either electrolytes or mental illness (e.g. anxiety) have been reported to exert short-term and long-term effects on cardiac structure and function, which further encourages the use of ECGs to identify the underlying disease state. The sensitivity of diagnosing hyperkalaemia from ECGs, though classically characterized by peaked T-waves, PR prolongation, and QRS prolongation, remains low (34–43%). With this in mind, Galloway et al. conducted a multi-centre study on patients from various Mayo Clinic sites in the US to identify hyperkalaemia in chronic kidney disease patients using 2- and 4-lead ECGs. Despite low specificity for hyperkalaemia, their model achieved respectable accuracies and sensitivities on external validation sets, suggesting a role for ECGs in hyperkalaemia screening. Lin et al. extended this work to predict either hypo- or hyperkalaemia, using a single-centre database of 66 321 ECGs from all patients (irrespective of kidney disease), and attained better sensitivity, specificity, and accuracy on their test set when benchmarked against emergency physicians and cardiologists. Unlike the Mayo Clinic model, this model retained high specificity (0.92) at the expense of low sensitivity (0.67), which is more akin to a diagnostic tool than a screening one. Notably, the authors additionally performed a saliency analysis, which showed a greater focus on the ST segment in cases of hyperkalaemia that were more difficult to identify clinically (i.e. low sensitivity and high inter-rater variability).
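The screening-vs.-diagnostic distinction drawn above comes down to where the decision threshold sits on the model's ROC curve. A toy sketch with synthetic scores (not the published models) shows the trade-off:

```python
import numpy as np

rng = np.random.default_rng(2)
y = np.concatenate([np.ones(300), np.zeros(700)])          # 30% prevalence
scores = np.concatenate([rng.normal(0.7, 0.15, 300),       # diseased
                         rng.normal(0.3, 0.15, 700)])      # healthy

def sens_spec(th):
    pred = scores >= th
    sens = pred[y == 1].mean()              # true-positive rate
    spec = (~pred[y == 0]).mean()           # true-negative rate
    return round(sens, 2), round(spec, 2)

# A low threshold favours sensitivity (rule-out / screening use);
# a high threshold favours specificity (rule-in / diagnostic use).
print(sens_spec(0.35))   # high sensitivity, modest specificity
print(sens_spec(0.65))   # modest sensitivity, high specificity
```

The same trained network can therefore be deployed as a screen (like the Mayo hyperkalaemia model) or as a diagnostic aid (like Lin et al.'s) simply by moving the operating point.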
In addition to hyper-/hypokalaemia, other electrolytes such as magnesium and calcium could be assessed similarly, notably to predict, in real time, the likelihood of impending arrhythmias such as torsades de pointes. Beyond prediction of clinical disease and lab values reflective of disease severity, ECGs, as biometric data points over time, have the potential to capture measures of overall health as well. For example, an elderly individual maintaining a prime state of health is often said to have a 'young heart'. This inspires the idea of an 'ECG age' distinct from chronological age, addressed in another piece by Attia et al., which sought to predict patient age from the ECG. Subgroup analysis revealed that the cases with the largest prediction error had significantly more instances of systolic dysfunction, hypertension, and CAD, whereas individuals predicted more accurately (i.e. with less error) had fewer cardiovascular events at follow-up. Though this information should not be overinterpreted, since the error could capture both the severity of cardiac disease (e.g. a 'higher' heart age) and random error in model training, these results support the belief that the ECG may serve as a composite biomarker for tracking general health over time. Corroborating this possible role, Raghunath et al. report prediction of 1-year mortality from age, sex, and baseline ECGs using a convolutional framework, with a hazard ratio of 9.5 between the two predicted dead/alive groups, reinforcing the prognostic role of the ECG in a patient's global health. The authors also employed gradient-based class activation mapping to assess feature importance and noted that the model discerned ST-elevations in certain patients as notable contributors to predicted 1-year mortality.
However, given that these ECGs were retrieved in a hospital setting, care must be taken not to apply this model, which is prone to heavy selection bias, to the general population.
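Operationally, the ECG-age concept reduces to a simple rule on the gap between model-predicted and chronological age; the threshold and patient values below are purely hypothetical:

```python
# Toy illustration of the 'ECG age' gap: a model's predicted age minus
# chronological age, where a large positive gap flags possible pathology.
patients = [
    {"id": "A", "age": 64, "ecg_age": 66},   # small gap: reassuring
    {"id": "B", "age": 58, "ecg_age": 74},   # large gap: flag for follow-up
]
GAP_THRESHOLD = 8                             # illustrative cut-off, in years

flagged = [p["id"] for p in patients
           if p["ecg_age"] - p["age"] > GAP_THRESHOLD]
print(flagged)                                # ['B']
```

Tracking this gap serially is one way an ECG could act as the composite biomarker of general health described above.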

Conclusions

When applied to large datasets that contain hidden but valuable relationships, DL has delivered groundbreaking performance. ECGs, laden with information-rich spatial and temporal views of the cardiac conduction system, have proved amenable to having these hidden associations with cardiovascular pathologies (arrhythmias, cardiomyopathies, valvulopathies, and ischaemia) unravelled, as demonstrated by the original research articles covered in this review. Their future role is apparent as well: multiple clinical trials have been created to prospectively collect ECG data, not only to understand more about the respective heart disease of interest but also to validate existing DL models on these newly collected datasets in the form of randomized controlled trials. Nevertheless, difficulties in data access and model sharing, as well as the limited flexibility of pre-existing IT infrastructures, are barriers that must be addressed before these algorithms can be deployed to other hospital systems. Despite its promise, the shortcomings of these endeavours are readily apparent in the incongruence between model design, model validation, and model interpretation. For example, utilizing DL for feature extraction and performing ML on those features in series is an interesting idea in concept, but carries the peril of not abiding by the fundamental hierarchical tenets of DL. Similarly, rigorous practices to ensure appropriate validation of the model are of crucial importance. Because most datasets thus far have been curated from a single centre, models run the risk of overfitting and generalizing poorly to other hospital systems and datasets, which may use different machines with slight variations in underlying noise that the model cannot readily filter out. By extension, adversarial (i.e.
simulated noise) training could take advantage of generative adversarial networks (GANs), DL models in which a generator learns to produce samples that a discriminator cannot distinguish from true dataset inputs; training with such subtle but realistic noisy artefacts has made great strides in improving model robustness. Additionally, no central framework exists for comparing the performance of these various models across institutions. An open framework permitting the exchange of ideas, datasets, and pre-trained model weights is not a trivial undertaking, but it could foster collaboration between what are at present institutional silos of development. While every original research article covered in this paper offers encouraging results for the value of DL in interpreting ECGs, only a handful offer insight into the model's learned representation of the ECG for the respective task. Without explaining, in an interpretable way, what these DL models are sensing on the ECG to perform their specific task, developers of these tools run a strong risk of souring clinicians, who need to understand how these models work before entrusting them to augment their practice, on adopting them. Methods to open the 'black box' of DL have been elucidated in detail elsewhere, offering more than a handful of techniques to evaluate both input feature importance and layer-wise information retention. Such techniques may not only make the reduction of these algorithms to clinical practice more palatable but may also offer hypotheses on the pathophysiology of disease that improve its understanding.
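As a concrete illustration of the adversarial-noise theme (closer to the gradient-based attacks studied on ECG models than to a full GAN), a fast-gradient-style perturbation on a toy logistic model can be sketched as follows; the weights and inputs are random stand-ins, not any published network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy logistic "classifier" over an ECG-like input vector.
rng = np.random.default_rng(3)
w = rng.normal(size=100)
x = rng.normal(size=100)
y = 1.0                                     # true label

p = sigmoid(w @ x)
# For a logistic model, the gradient of the cross-entropy loss w.r.t.
# the input is (p - y) * w; a fast-gradient perturbation nudges each
# sample by eps in the direction that increases the loss.
eps = 0.05
x_adv = x + eps * np.sign((p - y) * w)

p_adv = sigmoid(w @ x_adv)
print(bool(p_adv < p))                      # the small structured nudge
                                            # lowers the model's confidence
# Adversarial training would add (x_adv, y) pairs back into the
# training set so the model learns to resist such structured noise.
```

The same loop run during training, with GAN-generated rather than gradient-crafted artefacts, is what the robustness-through-adversarial-training argument above envisions.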
Additionally, the trials and tribulations of model selection are not apparent in the methodologies of many papers, which does not instil confidence in the rigour of model development that is otherwise heavily and rightfully emphasized by the computer science community. The question to be asked is not whether DL can solve a task, but which DL method can best tackle the task, and why. Adherence to these suggested principles of research reporting may create cohesion in the field by making models and datasets more compatible with one another, which could in turn foster improved collaboration between research groups. For example, in diagnosing valvulopathies, it is difficult to know from the current findings how much of the model depends on the effect of continued altered flow mechanics creating subclinical perturbations in the ECG signal vs. long-standing changes to the heart, which may or may not be specific to that pathology. Benchmarking classifiers that predict related cardiomyopathies, or augmenting the original dataset with data from patients with non-valvular cardiomyopathy, could help improve the robustness of these seminal works. In conclusion, though the emerging literature evaluating the role of DL in ECG analysis has shown great promise, with continued improvement, generalization, refinement, and standardization of methods and data to overcome the short-term barriers to clinical adoption, DL offers a novel way of diagnosing and managing heart disease. The concurrent development of wearable technologies and accessible platforms for deploying pre-trained DL models offers a unique and scalable opportunity to screen for and intervene early in different cardiovascular disease states.

Conflict of interest: S.S. is a co-founder of and owns equity in Monogram Orthopedics. G.N.N. has received consulting fees from AstraZeneca, Reata, BioVie, and GLG Consulting; has received financial compensation as a scientific board member and advisor to RenalytixAI; and owns equity in RenalytixAI and Pensieve Health as a co-founder. All remaining authors have declared no conflicts of interest.
Table 1

Publicly available ECG datasets

Name | Year | Number of leads | Number of ECGs | ECG length | Labels
MIMIC-III | 2017 | Variable | 67 830 | Variable | None
Computing in Cardiology 2017 | 2017 | 1 | 12 186 | 30 s | Atrial fibrillation classification
Computing in Cardiology 2020 | 2020 | 12 | 6887 | 30 s | ECG abnormalities^a
Computing in Cardiology 2011 | 2011 | 12 | 2000 | 10 s | ECG quality
Computing in Cardiology 2018 | 2018 | 1 | 1985 | Hours | Sleep arousal classification
Computing in Cardiology 2015 | 2015 | 2 | 1250 | 5 min | False arrhythmia classification
Chinese Cardiovascular Disease Database | 2010 | 12 | 1000 | 10 s | Beat classification, ECG abnormalities
Computing in Cardiology 2014 | 2014 | 1 | 700 | 10 min | QRS beat classification
PTB diagnostic ECG | 1995 | 16 | 549 | 2 min | Diagnosis (MI, CHF, BBB, arrhythmia, HCM, VHD, normal)
SHAREE | 2015 | 3 | 139 | 24 h | Adverse vascular event prediction
Long-term ST DB | 2003 | 2 | 86 | 21–24 h | ST-segment events
MIT-BIH supraventricular arrhythmia | 1990 | 2 | 78 | 30 min | Beat classification, ECG abnormalities^a
St. Petersburg INCART DB | 2008 | 12 | 75 | 30 min | Beat labelling
MIT-BIH arrhythmia DB | 2001 | 2 | 48 | 30 min | Beat classification, ECG abnormalities^a
MIT-BIH ST change DB | 1999 | 2 | 28 | Variable | Beat labelling
MIT-BIH atrial fibrillation DB | 1983 | 2 | 25 | 10 h | Rhythm annotation (AFib, AFlutter, AV junctional rhythm, N)
Sudden cardiac death DB | 1989 | 2 | 23 | ∼24 h | VF
MIT-BIH malignant ventricular ectopy DB | 1986 | 2 | 22 | 30 min | SVT, VF, VFib
MIT-BIH normal sinus rhythm DB | 1999 | 2 | 18 | Long-term | Beat labelling
BIDMC CHF DB | 1986 | 2 | 15 | 20 h | Beat classification
MIT-BIH arrhythmia database P-wave annotations | 2018 | 2 | 12 | 30 min | P-wave labels

This table lists the publicly available ECG datasets that were the focal point and source of ECG-based data-driven modelling prior to the new, large, privately curated datasets.

ECGs, electrocardiograms.

^a ECG abnormality labels: AFib, AVB, LBB, NSR, PAC, PVC, RBB, STD, and STE.

Table 2

Applications of ECGs using deep learning

Citation | Category | Prediction task | Dataset | Number of ECGs | Number of patients | Architecture
Parvaneh et al.13 (2018) | Arrhythmias | Atrial fibrillation | CINC 2017 | 12 186 | 12 186 | CNN + RNN
Xiong et al.77 (2018) | Arrhythmias | Arrhythmia | CINC 2017 | 12 186 | 12 186 | CNN
Ribeiro et al.43 (2019) | Arrhythmias | Arrhythmia | Telehealth network of Minas Gerais | 1 558 415 | 1 558 415 | Ensemble (CNN, DNN)
Attia et al.26 | Arrhythmias | Paroxysmal AF | Mayo Clinic | 649 931 | 180 922 | CNN + GBM
Wang et al.78 (2019) | Arrhythmias | Arrhythmia | CCDB | 193 690 | 193 690 | CNN
Hannun et al.42 | Arrhythmias | Arrhythmia | iRhythm | 91 232 | 53 549 | CNN
Brisk et al.79 (2019) | Arrhythmias | Arrhythmia | CINC 2017 | 12 186 | 12 186 | CNN
Wasserlauf et al.49 | Arrhythmias | Atrial fibrillation | CINC 2017 | 7500 | 7500 | CNN + LSTM + SVM
Ivanovic et al.80 (2019) | Arrhythmias | Atrial fibrillation | Serbia | 1097 | 1097 | CNN
Smith et al.44 | Arrhythmias | Arrhythmia | Cardiolog | 1473 | 1473 | CNN
Mousavi et al.80 (2020) | Arrhythmias | Arrhythmia | CINC 2015 | 1250 | 1250 | CNN (DDDN)
Van de Leur et al.45 | Arrhythmias | Arrhythmia triage in the ED | University Medical Center Utrecht | 336 835 | 142 040 | Residual CNN
Oster et al.81 (2020) | Arrhythmias | Atrial fibrillation | UK Biobank | 77 202 | 75 778 | CNN
Wang et al.27 | Arrhythmias | Arrhythmia | Tianchi competition | 20 036 | 20 036 | CNN/HMM + GBM
Chen et al.82 (2020) | Arrhythmias | Arrhythmia | CPSC2018 | 6877 | 6877 | CNN + GBM
Cai et al.50 | Arrhythmias | Atrial fibrillation | Chinese PLA General Hospital, wearable ECGs, CPSC2018 | 16 557 | 11 994 | CNN
Tison et al.54 | Cardiomyopathy | Heart failure, PAH, MVP | UCSF | 36 186 | 36 186 | Ensemble (CNN, DNN)
Kwon et al.61 | Cardiomyopathy | Heart failure | Mediplex Sejong Hospital | 55 163 | 22 765 | CNN
Attia et al.59 | Cardiomyopathy | Heart failure | Mayo Clinic | 3874 | 3874 | CNN + LSTM + SVM
Attia et al.57 | Cardiomyopathy | Heart failure | Mayo Clinic | 97 829 | 97 829 | CNN
Kwon et al.56 | Cardiomyopathy | Left ventricular hypertrophy | Sejong General Hospital, Mediplex Sejong Hospital; Korea | 21 286 | 21 286 | CNN
Yoon et al.83 (2019) | Extracardiac | Noise detection | Ajou University Hospital; Korea | 3000 | 3000 | CNN
Ko et al.55 | Cardiomyopathy | Hypertrophic cardiomyopathy | Mayo Clinic | 67 001 | 67 001 | CNN + RNN
Attia et al.67 | Extracardiac | Age, sex | Mayo Clinic | 774 783 | 774 783 | CNN
Galloway et al.65 | Extracardiac | Hyperkalaemia | Mayo Clinic | 1 638 546 | 449 380 | CNN
Lin et al.66 | Extracardiac | Hyperkalaemia | Tri-Service General Hospital; Taiwan | 66 321 | 40 180 | CNN
Wang et al.27 | Extracardiac | Pre-diabetes | Beijing, China | 2914 | 2914 | CNN
Noseworthy et al.60 | Extracardiac | Racial bias | Mayo Clinic | 97 829 | 97 829 | CNN
Raghunath et al.68 | Extracardiac | Mortality | Geisinger Hospital System | 1 338 576 | 422 311 | CNN
Kwon et al.53 | Extracardiac | Pulmonary hypertension | Sejong General Hospital, Mediplex Sejong Hospital; Korea | 59 844 | 23 376 | CNN
Han et al.75 | Extracardiac | Noise, adversarial attack | CINC 2017 | 12 186 | 12 186 | CNN
Tadesse et al.62 | Ischaemia | Myocardial infarction (STEMI, NSTEMI) | GGH | 21 241 | 21 241 | CNN
Kwon et al.52 | Valvulopathy | Aortic stenosis | Sejong General Hospital, Mediplex Sejong Hospital; Korea | 39 371 | 39 371 | CNN
Kwon et al.53 | Valvulopathy | Mitral regurgitation | Sejong General Hospital, Mediplex Sejong Hospital; Korea | 70 709 | 38 241 | CNN + RNN

This table highlights the 31 applications found during the literature search for ECG analysis, with information about the dataset source, sample size (by unique ECGs and unique patients) present for training and testing, task at hand, and neural network architecture used. Because these studies do not use the same metrics or the same validation protocol to evaluate each model’s performance and because the authors firmly believe that comparison of models is tenuous without greater context beyond what this table can provide, these measures have been omitted from being reported in the table.

CNN, convolutional neural network; DNN, deep neural network; ECGs, electrocardiograms; GBM, gradient boosting machine; HMM, hidden Markov model; LSTM, long short-term memory; RNN, recurrent neural network; SVM, support vector machine.

References (62 in total)

1.  Secular trends in incidence of atrial fibrillation in Olmsted County, Minnesota, 1980 to 2000, and implications on the projections for future prevalence.

Authors:  Yoko Miyasaka; Marion E Barnes; Bernard J Gersh; Stephen S Cha; Kent R Bailey; Walter P Abhayaratna; James B Seward; Teresa S M Tsang
Journal:  Circulation       Date:  2006-07-03       Impact factor: 29.690

Review 2.  Deep learning.

Authors:  Yann LeCun; Yoshua Bengio; Geoffrey Hinton
Journal:  Nature       Date:  2015-05-28       Impact factor: 49.962

3.  Evaluation of Risk Prediction Models of Atrial Fibrillation (from the Multi-Ethnic Study of Atherosclerosis [MESA]).

Authors:  Joshua D Bundy; Susan R Heckbert; Lin Y Chen; Donald M Lloyd-Jones; Philip Greenland
Journal:  Am J Cardiol       Date:  2019-10-10       Impact factor: 2.778

4.  Artificial intelligence for early prediction of pulmonary hypertension using electrocardiography.

Authors:  Joon-Myoung Kwon; Kyung-Hee Kim; Jose Medina-Inojosa; Ki-Hyun Jeon; Jinsik Park; Byung-Hee Oh
Journal:  J Heart Lung Transplant       Date:  2020-04-23       Impact factor: 10.247

5.  Prospective validation of a deep learning electrocardiogram algorithm for the detection of left ventricular systolic dysfunction.

Authors:  Zachi I Attia; Suraj Kapa; Xiaoxi Yao; Francisco Lopez-Jimenez; Tarun L Mohan; Patricia A Pellikka; Rickey E Carter; Nilay D Shah; Paul A Friedman; Peter A Noseworthy
Journal:  J Cardiovasc Electrophysiol       Date:  2019-03-10

6.  Deep learning models for electrocardiograms are susceptible to adversarial attack.

Authors:  Xintian Han; Yuxuan Hu; Luca Foschini; Larry Chinitz; Lior Jankelson; Rajesh Ranganath
Journal:  Nat Med       Date:  2020-03-09       Impact factor: 53.440

7.  Identification of patients with atrial fibrillation: a big data exploratory analysis of the UK Biobank.

Authors:  Julien Oster; Jemma C Hopewell; Klemen Ziberna; Rohan Wijesurendra; Christian F Camm; Barbara Casadei; Lionel Tarassenko
Journal:  Physiol Meas       Date:  2020-03-06       Impact factor: 2.833

8.  Deep Learning Approach for Highly Specific Atrial Fibrillation and Flutter Detection based on RR Intervals.

Authors:  Marija D Ivanovic; Vladimir Atanasoski; Alexei Shvilkin; Ljupco Hadzievski; Aleksandra Maluckov
Journal:  Conf Proc IEEE Eng Med Biol Soc       Date:  2019-07

Review 9.  Machine learning in the electrocardiogram.

Authors:  Ana Mincholé; Julià Camps; Aurore Lyon; Blanca Rodríguez
Journal:  J Electrocardiol       Date:  2019-08-08       Impact factor: 1.438

10.  Deep Learning-Based Electrocardiogram Signal Noise Detection and Screening Model.

Authors:  Dukyong Yoon; Hong Seok Lim; Kyoungwon Jung; Tae Young Kim; Sukhoon Lee
Journal:  Healthc Inform Res       Date:  2019-07-31