
Performance of a Convolutional Neural Network and Explainability Technique for 12-Lead Electrocardiogram Interpretation.

J Weston Hughes1, Jeffrey E Olgin2,3, Robert Avram2,3, Sean A Abreau2,3, Taylor Sittler4, Kaahan Radia1, Henry Hsia2, Tomos Walters2, Byron Lee2, Joseph E Gonzalez1, Geoffrey H Tison1,2,3,5.   

Abstract

Importance: Millions of clinicians rely daily on automated preliminary electrocardiogram (ECG) interpretation. Critical comparisons of machine learning-based automated analysis against clinically accepted standards of care are lacking.

Objective: To use readily available 12-lead ECG data to train, and apply an explainability technique to, a convolutional neural network (CNN) that achieves high performance against clinical standards of care.

Design, Setting, and Participants: This cross-sectional study was conducted using data from January 1, 2003, to December 31, 2018. Data were obtained in a commonly available 12-lead ECG format from a single-center tertiary care institution. All patients aged 18 years or older who received ECGs at the University of California, San Francisco, were included, yielding a total of 365 009 patients. Data were analyzed from January 1, 2019, to March 2, 2021.

Exposures: A CNN was trained to predict the presence of 38 diagnostic classes in 5 categories from 12-lead ECG data. A CNN explainability technique called LIME (Local Interpretable Model-agnostic Explanations) was used to visualize the ECG segments contributing to CNN diagnoses.

Main Outcomes and Measures: Area under the receiver operating characteristic curve (AUC), sensitivity, and specificity were calculated for the CNN in the holdout test data set against cardiologist clinical diagnoses. For a second validation, 3 electrophysiologists provided consensus committee diagnoses, against which the performance of the CNN, the cardiologist clinical diagnoses, and MUSE (GE Healthcare) automated analysis was compared using the F1 score; AUC, sensitivity, and specificity were also calculated for the CNN against the consensus committee.
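The abstract names LIME as the explainability technique but gives no implementation details. Below is a minimal, self-contained sketch of the underlying idea applied to a 1-D signal, under simplifying assumptions: contiguous segments serve as the interpretable features, masked segments are replaced by the signal mean, and each segment's surrogate coefficient is estimated as a difference of means (which coincides with an ordinary-least-squares surrogate fit under independent Bernoulli(0.5) masks). All names and parameters here are illustrative, not the paper's.

```python
import random

def lime_segment_importance(signal, model, n_segments=8, n_samples=500, seed=0):
    """Simplified LIME-style attribution for a 1-D signal classifier.

    Contiguous segments are randomly masked (replaced by the signal mean),
    the model is re-queried on each perturbed copy, and each segment's
    importance is the mean model output when the segment is kept minus the
    mean output when it is masked.
    """
    rng = random.Random(seed)
    n = len(signal)
    baseline = sum(signal) / n
    bounds = [(i * n // n_segments, (i + 1) * n // n_segments)
              for i in range(n_segments)]
    kept = [[] for _ in range(n_segments)]      # scores with segment present
    masked = [[] for _ in range(n_segments)]    # scores with segment masked
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in range(n_segments)]
        perturbed = list(signal)
        for seg, (lo, hi) in enumerate(bounds):
            if not mask[seg]:
                perturbed[lo:hi] = [baseline] * (hi - lo)
        score = model(perturbed)
        for seg in range(n_segments):
            (kept if mask[seg] else masked)[seg].append(score)
    def mean(xs):
        return sum(xs) / len(xs) if xs else 0.0
    return [mean(kept[s]) - mean(masked[s]) for s in range(n_segments)]

# Toy usage: a "model" that only looks at samples 30-40 of an 80-sample trace.
spike = [0.0] * 80
spike[35] = 1.0
scores = lime_segment_importance(spike, lambda x: max(x[30:40]))
# The largest attribution falls on segment 3, which covers samples 30-40.
```

A full LIME implementation would fit a kernel-weighted linear surrogate over the sampled masks instead of the difference-of-means shortcut, but the intuition is the same: segments whose removal changes the prediction most receive the highest attribution.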
Results: A total of 992 748 ECGs from 365 009 adult patients (mean [SD] age, 56.2 [17.6] years; 183 600 women [50.3%]; and 175 277 White patients [48.0%]) were included in the analysis. In 91 440 test data set ECGs, the CNN demonstrated an AUC of at least 0.960 for 32 of 38 classes (84.2%). Against the consensus committee diagnoses, the CNN had higher frequency-weighted mean F1 scores than both cardiologists and MUSE in all 5 categories (CNN frequency-weighted F1 score for rhythm, 0.812; conduction, 0.729; chamber diagnosis, 0.598; infarct, 0.674; and other diagnosis, 0.875). For 32 of 38 classes (84.2%), the CNN had AUCs of at least 0.910 and demonstrated F1 scores comparable to, and sensitivity higher than, those of cardiologists, except for atrial fibrillation (CNN F1 score, 0.847 vs cardiologist F1 score, 0.881), junctional rhythm (0.526 vs 0.727), premature ventricular complex (0.786 vs 0.800), and Wolff-Parkinson-White (0.800 vs 0.842). Compared with MUSE, the CNN had higher F1 scores for all classes except supraventricular tachycardia (CNN F1 score, 0.696 vs MUSE F1 score, 0.714). The LIME technique highlighted physiologically relevant ECG segments.

Conclusions and Relevance: The results of this cross-sectional study suggest that readily available ECG data can be used to train a CNN algorithm to performance comparable to that of clinical cardiologists, exceeding the performance of MUSE automated analysis for most diagnoses, with some exceptions. The LIME explainability technique applied to CNNs highlights physiologically relevant ECG segments that contribute to the CNN's diagnoses.
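The frequency-weighted mean F1 scores reported above weight each diagnostic class's F1 by that class's frequency (support) in the reference labels. A minimal sketch, assuming that standard definition (the paper's exact weighting scheme is not spelled out in the abstract); the class labels used below are illustrative:

```python
from collections import Counter

def f1_score(y_true, y_pred, positive):
    """Binary F1 for one diagnostic class, treating `positive` as the target."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

def frequency_weighted_f1(y_true, y_pred, classes):
    """Mean of per-class F1 scores, each weighted by the class's share
    of the reference labels."""
    support = Counter(y_true)
    total = sum(support[c] for c in classes)
    return sum(support[c] / total * f1_score(y_true, y_pred, c) for c in classes)

# Toy example: AF appears 2/5 of the time (F1 = 0.5), NSR 3/5 (F1 = 2/3).
y_true = ["AF", "AF", "NSR", "NSR", "NSR"]
y_pred = ["AF", "NSR", "NSR", "NSR", "AF"]
frequency_weighted_f1(y_true, y_pred, ["AF", "NSR"])  # ≈ 0.4*0.5 + 0.6*(2/3) = 0.6
```

Weighting by support keeps rare classes from dominating the category-level average, which matters here because the 38 diagnostic classes have very different prevalences.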

Year:  2021        PMID: 34347007      PMCID: PMC8340011          DOI: 10.1001/jamacardio.2021.2746

Source DB:  PubMed          Journal:  JAMA Cardiol            Impact factor:   30.154


Review:  3 in total

1.  The role of machine learning applications in diagnosing and assessing critical and non-critical CHD: a scoping review.

Authors:  Stephanie M Helman; Elizabeth A Herrup; Adam B Christopher; Salah S Al-Zaiti
Journal:  Cardiol Young       Date:  2021-11-02       Impact factor: 1.093

2.  A Retrospective Clinical Evaluation of an Artificial Intelligence Screening Method for Early Detection of STEMI in the Emergency Department.

Authors:  Dongsung Kim; Ji Eun Hwang; Youngjin Cho; Hyoung-Won Cho; Wonjae Lee; Ji Hyun Lee; Il-Young Oh; Sumin Baek; Eunkyoung Lee; Joonghee Kim
Journal:  J Korean Med Sci       Date:  2022-03-14       Impact factor: 2.153

3.  Automated multilabel diagnosis on electrocardiographic images and signals.

Authors:  Veer Sangha; Bobak J Mortazavi; Adrian D Haimovich; Antônio H Ribeiro; Cynthia A Brandt; Daniel L Jacoby; Wade L Schulz; Harlan M Krumholz; Antonio Luiz P Ribeiro; Rohan Khera
Journal:  Nat Commun       Date:  2022-03-24       Impact factor: 14.919

