Zijian Ding1, Guijin Wang2, Huazhong Yang1, Ping Zhang3,4, Dapeng Fu5, Zhen Yang6, Xinkang Wang7, Xia Wang8, Zhourui Xia9, Chiming Zhang10, Wenjie Cai11, Binhang Yuan12, Dongya Jia13, Bo Chen14, Chengbin Huang15, Jing Zhang16, Yi Li17, Shan Yang18, Runnan He19.
Abstract
Computerized interpretation of the electrocardiogram (ECG) plays an important role in daily cardiovascular healthcare. However, inaccurate interpretations lead to misdiagnoses and delay proper treatment. In this work, we built a high-quality Chinese 12-lead resting ECG dataset with 15,357 records and called for a community effort to improve the performance of computerized ECG interpretation through the China ECG AI Contest 2019. The dataset covers most types of ECG interpretations, including the normal type, 8 common abnormal types, and an "other" type that comprises both uncommon abnormal and noise signals. Based on the Contest, we systematically assessed and analyzed a set of top-performing methods, most of which are deep neural networks, examining both their commonalities and their individual characteristics. This study establishes benchmarks for computerized interpretation of the 12-lead resting electrocardiogram and provides insights for the development of new methods. Graphical Abstract: A community effort to assess and improve computerized interpretation of the 12-lead resting electrocardiogram.
Keywords: Computerized interpretation of electrocardiogram; Deep neural networks; Electrocardiogram; Model assessment
Year: 2021 PMID: 34677739 PMCID: PMC8724189 DOI: 10.1007/s11517-021-02420-z
Source DB: PubMed Journal: Med Biol Eng Comput ISSN: 0140-0118 Impact factor: 2.602
Fig. 1 Multi-label clinical interpretations of all ECG records in the CEAC dataset. (a) shows the multi-label relationships between each pair of clinical interpretations. (b) shows that almost 15 percent of all records contain more than one interpretation
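Because roughly 15 percent of records carry more than one interpretation, each record's label is naturally a multi-hot vector rather than a single class. A minimal sketch of this encoding (the class names below are illustrative stand-ins, not the dataset's exact label set):

```python
import numpy as np

# Hypothetical label vocabulary: normal + 8 common abnormal types.
# These names are illustrative placeholders, not the CEAC dataset's
# actual interpretation list.
CLASSES = ["Normal", "AF", "I-AVB", "LBBB", "RBBB", "PAC", "PVC", "STD", "STE"]

def to_multi_hot(interpretations):
    """Encode a record's (possibly multiple) interpretations as a multi-hot vector."""
    y = np.zeros(len(CLASSES), dtype=np.float32)
    for name in interpretations:
        y[CLASSES.index(name)] = 1.0
    return y

# A record with two co-occurring interpretations yields two 1s
y = to_multi_hot(["AF", "PVC"])
```

A multi-hot target like this pairs with an element-wise (per-class) loss, rather than the softmax cross entropy used for single-label problems.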
Fig. 2 Basic statistics of all records in the CEAC dataset. (a) shows the age distribution for each clinical interpretation; (b) shows the distribution of record lengths. The error bars are percentiles
Summary of the 11 top-performing benchmark methods. All methods are ranked by their F1 scores. The network structures are summarized, and their characteristics, such as data augmentation and transfer learning, are indicated
1FL refers to focal loss. 2CE refers to cross entropy
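The footnotes distinguish focal loss from plain cross entropy as training objectives; focal loss down-weights easy examples so training concentrates on hard, misclassified records. A minimal NumPy sketch of binary focal loss for multi-hot labels (the `gamma` and `alpha` defaults below are the values commonly used in the literature, not any team's reported settings):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def binary_focal_loss(logits, targets, gamma=2.0, alpha=0.25):
    """Binary focal loss for multi-hot ECG labels.

    With gamma=0 and alpha=0.5 this reduces to (half of) plain binary
    cross entropy; larger gamma suppresses the loss of well-classified
    examples. Defaults are common literature values, not the contest's.
    """
    p = sigmoid(logits)
    p_t = np.where(targets == 1, p, 1 - p)          # prob. of the true label
    alpha_t = np.where(targets == 1, alpha, 1 - alpha)
    ce = -np.log(np.clip(p_t, 1e-12, 1.0))          # element-wise cross entropy
    return np.mean(alpha_t * (1 - p_t) ** gamma * ce)
```

Since the modulating factor `(1 - p_t) ** gamma` is at most 1, the focal loss is always bounded above by the (alpha-weighted) cross entropy it modifies.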
Fig. 3 Assessing the F1 scores of the top 11 methods. (a) shows the three interpretations with the highest average scores, (b) the three with moderate scores, and (c) the three with the lowest
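The per-interpretation comparison above rests on F1 scores computed class by class. A small sketch of per-class F1 for multi-hot predictions, from which a macro average can be taken (the Contest's exact averaging scheme is not restated here):

```python
import numpy as np

def per_class_f1(y_true, y_pred):
    """Per-class F1 for multi-hot labels; rows are records, columns are classes.

    F1 = 2*TP / (2*TP + FP + FN) per class. A macro-F1 is the mean of
    these values. This is a generic sketch of the metric family, not the
    Contest's exact scoring code.
    """
    tp = np.sum(y_true * y_pred, axis=0)
    fp = np.sum((1 - y_true) * y_pred, axis=0)
    fn = np.sum(y_true * (1 - y_pred), axis=0)
    return 2 * tp / np.maximum(2 * tp + fp + fn, 1e-12)
```

Per-class F1 makes the figure's contrast visible: a method can score near 1.0 on a well-represented interpretation while scoring much lower on a rare one, which an overall accuracy number would hide.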
Fig. 4 The commonalities of all top-performing deep neural networks. CNN layers combined with RNN layers and attention modules can achieve good performance
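The shared CNN + RNN + attention pattern can be sketched as a small PyTorch model. All layer sizes, the 9-class output head, and the exact module arrangement below are illustrative assumptions, not any team's actual configuration:

```python
import torch
import torch.nn as nn

class ECGNet(nn.Module):
    """Minimal sketch of the CNN + RNN + attention pattern shared by the
    top-performing methods. Hyperparameters are illustrative only."""

    def __init__(self, n_leads=12, n_classes=9, hidden=64):
        super().__init__()
        # CNN front end: local waveform morphology features per lead
        self.cnn = nn.Sequential(
            nn.Conv1d(n_leads, 32, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(32), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3),
            nn.BatchNorm1d(64), nn.ReLU(),
        )
        # Bidirectional RNN: rhythm and beat-to-beat context over time
        self.rnn = nn.GRU(64, hidden, batch_first=True, bidirectional=True)
        # Attention pooling: weight time steps by diagnostic relevance
        self.attn = nn.Linear(2 * hidden, 1)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, leads, samples)
        h = self.cnn(x).transpose(1, 2)         # (batch, time, channels)
        h, _ = self.rnn(h)                      # (batch, time, 2*hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention weights over time
        pooled = (w * h).sum(dim=1)             # weighted temporal summary
        return self.head(pooled)                # multi-label logits
```

The logits feed an element-wise loss (e.g. per-class cross entropy or focal loss), matching the multi-label nature of the interpretations.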
Fig. 5 Incorporating external information is one way to alleviate the overfitting problem common to deep neural networks. Some top-performing methods learn either from other datasets or from expert knowledge
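One common way to incorporate knowledge from another dataset is transfer learning: reuse a network pretrained elsewhere, freeze its feature layers, and retrain only a new output head. A hedged sketch (the `head` attribute name is an assumption for illustration, not any team's actual API):

```python
import torch.nn as nn

def adapt_pretrained(model, n_classes):
    """Transfer-learning sketch: freeze a pretrained network's features and
    attach a fresh, trainable classification head for the target labels.

    Assumes the model exposes its final classifier as `model.head`
    (an illustrative convention, not a standard attribute).
    """
    for p in model.parameters():
        p.requires_grad = False                   # keep pretrained features fixed
    in_features = model.head.in_features
    model.head = nn.Linear(in_features, n_classes)  # new head trains from scratch
    return model
```

Only the new head's parameters receive gradients, so the scarce target-domain labels adjust far fewer weights, which is the overfitting relief the figure describes.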