| Literature DB >> 35720677 |
Pavidra Sivanandarajah1,2, Huiyi Wu1, Nikesh Bajaj1, Sadia Khan2, Fu Siong Ng1,2.
Abstract
Atrial fibrillation (AF) is the most common arrhythmia and causes significant morbidity and mortality. Early identification of AF may lead to early treatment of AF and may thus prevent AF-related strokes and complications. However, there is no current formal, cost-effective strategy for population screening for AF. In this review, we give a brief overview of targeted screening for AF, AF risk score models used for screening and describe the different screening tools. We then go on to extensively discuss the potential applications of machine learning in AF screening.
Keywords: Artificial intelligence; Atrial fibrillation; Electronic health records; Machine learning; Screening
Year: 2022 PMID: 35720677 PMCID: PMC9204790 DOI: 10.1016/j.cvdhj.2022.04.001
Source DB: PubMed Journal: Cardiovasc Digit Health J ISSN: 2666-6936
Risk score models for atrial fibrillation detection
| Risk score model | Factors | Internal validation | External validation | Prediction |
|---|---|---|---|---|
| FRAMINGHAM | Age, blood pressure, hypertension, body mass index, PR interval, murmur, age of heart failure | C-statistic 0.78 | C-statistic 0.65–0.73 | 10-year risk of AF |
| ARIC | Age, race, height, systolic blood pressure, hypertension medication, smoking, murmur, left atrial enlargement, LVH, diabetes mellitus, heart failure, age of coronary heart disease | C-statistic 0.78 | - | 10-year risk of AF |
| CHARGE-AF | Age, race, height, weight, systolic blood pressure, diastolic blood pressure, murmur, hypertension medication, diabetes mellitus, heart failure, myocardial infarction, LVH on ECG, PR interval | C-statistic 0.765 | C-statistic 0.66–0.81 | 5-year risk of AF |
| CHA2DS2-VASc | Congestive heart failure, hypertension, age, diabetes mellitus, vascular disease, stroke, sex | - | FHS: 0.71 | Stroke risk |
| HATCH | Heart failure, age, previous transient ischemic attack, chronic obstructive pulmonary disease, hypertension | - | Taiwan cohort: 0.7711 | Progression of paroxysmal AF to persistent AF |
| Suita | Age, systolic hypertension, weight, alcohol intake, smoking, other arrhythmia, coronary artery disease, cardiac murmur | C-statistic 0.749 | - | 10-year risk of AF |
| Japanese simple risk score | Age, sex, waist circumference, diastolic blood pressure, alcohol intake | C-statistic 0.77 | - | 7-year risk of AF |
| C2HEST | Coronary artery disease, chronic obstructive pulmonary disease, hypertension, elderly, systolic heart failure, thyroid disease | C-statistic 0.75 | Korean cohort: 0.65 | AF risk |
| Taiwan AF score | Age, sex, hypertension, heart failure, coronary artery disease, end-stage renal disease, alcoholism | AUROC 0.862 at 1 year, 0.755 at 16 years | - | 1- to 16-year risk of AF |
| SAAFE | Age, height, weight, congestive heart failure, coronary artery disease, chronic obstructive pulmonary disease, cardiac arrest, coronary artery stenting, stroke, diabetes, kidney transplant | C-statistic 0.785/0.804 | ARIC 0.766 | Incident AF risk |
| Electronic health record risk score | Sex, age, race, smoking, height, weight, blood pressure, cardiovascular and cardiometabolic factors | C-statistic 0.77 | C-statistic 0.808 | 5-year risk of AF |
AF = atrial fibrillation; AUROC = area under the receiver operating characteristic curve; ECG = electrocardiogram; LVH = left ventricular hypertrophy.
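As a concrete illustration of how a point-based clinical score in the table above is computed, the widely published CHA2DS2-VASc weights (1 point each for congestive heart failure, hypertension, diabetes, vascular disease, female sex, and age 65–74; 2 points each for age ≥75 years and prior stroke/TIA) can be sketched as a simple function. This is an illustrative sketch, not code from the review; the function name and argument names are our own.

```python
def cha2ds2_vasc(age, female, chf, hypertension, diabetes,
                 stroke_or_tia, vascular_disease):
    """Standard CHA2DS2-VASc point weights (illustrative helper)."""
    score = 2 if age >= 75 else (1 if 65 <= age <= 74 else 0)
    score += 1 if female else 0
    score += 1 if chf else 0
    score += 1 if hypertension else 0
    score += 1 if diabetes else 0
    score += 2 if stroke_or_tia else 0   # prior stroke/TIA carries 2 points
    score += 1 if vascular_disease else 0
    return score

# Example: 72-year-old woman with hypertension and diabetes
print(cha2ds2_vasc(72, True, False, True, True, False, False))  # → 4
```

Note that higher scores identify patients at greater stroke risk, which is why CHA2DS2-VASc appears in the table with "Stroke risk" rather than incident AF as its prediction target.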
Devices to detect and screen for atrial fibrillation
| Rhythm modality | Reference | Results |
|---|---|---|
| Handheld/smartphone single-lead ECG recorder | William et al 2018 | Algorithm sensitivity 0.966 |
| Handheld/smartphone single-lead ECG recorder | Vaes et al 2014 | Sensitivity 0.94 (0.87–0.98) |
| Smartwatch single-lead ECG recorder | Wasserlauf et al 2019 | Sensitivity 0.975 when compared to implantable cardiac monitor |
| Smartphone PPG | Brasier et al 2019 | Sensitivity 0.915 |
| Smartphone PPG | Proesmans et al 2019 | Sensitivity 0.96 |
| Smartphone PPG | Poh et al 2018 | Sensitivity 1 |
| Smartphone PPG (iPhone 4S) | McManus et al 2013 | Sensitivity 0.962 |
| Smartphone PPG (iPhone 4S) | Lee et al 2013 | Sensitivity 0.7461–0.9763 |
| Smartwatch PPG | Bashar et al 2019 | Sensitivity 0.982 |
| Smartwatch PPG | Dörr et al 2019 | Sensitivity 0.937 (0.898–0.964) |
| Smartwatch PPG | Bumgarner et al 2018 | Sensitivity 0.93 (0.86–0.99) |
| ECG patch monitor | Okubo et al 2021 | Sensitivity 0.98 |
| Blood pressure monitor | Wiesel et al 2009 | Sensitivity 0.95 |
ECG = electrocardiogram; PPG = photoplethysmography.
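The device sensitivities tabulated above are derived by comparing each device's AF calls against a reference diagnosis (typically a physician-read ECG) and counting true/false positives and negatives. A minimal sketch of that calculation, using made-up illustrative counts rather than data from any cited study:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)   # proportion of true AF cases detected
    specificity = tn / (tn + fp)   # proportion of non-AF cases correctly cleared
    return sensitivity, specificity

# Illustrative counts: 96 AF recordings flagged, 4 missed;
# 190 non-AF recordings cleared, 10 falsely flagged.
sens, spec = sensitivity_specificity(96, 4, 190, 10)
print(round(sens, 2), round(spec, 2))  # → 0.96 0.95
```

For screening, high sensitivity matters most (missing AF defeats the purpose), while specificity determines how many healthy users are sent for unnecessary confirmatory ECGs.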
Examples of machine learning and deep learning models
| Machine learning model | Variants | Principle |
|---|---|---|
| Logistic regression | - | Uses a linear combination of variables with a logistic function for classification |
| Decision trees / random forest | - | Uses a binary tree–based approach |
| Naïve Bayes | - | Uses a probabilistic approach based on Bayes' theorem and assumes that all the input variables are independent |
| Linear discriminant analysis | - | A dimensionality reduction method for classification. It looks for linear combinations of variables that best explain the data and separates 2 or more classes of objects |
| Support vector machines | - | Uses a kernel-based hyperplane that separates the data points with maximum margin |
| K-nearest neighbor | - | Uses the distance (eg, Euclidean) of a given data point to the K nearest known data points (training set) to predict the class/value. It is based on the assumption that data points in near proximity have similar values |
| Multilayer perceptron | - | Fully connected neural networks |
| Deep neural network | - | Originated from the multilayer perceptron; uses different architectures and methods of weight sharing |
| Convolutional neural networks | ResNet, U-Net | Uses shared weights to exploit translation invariance via convolution. ResNet uses skip connections; the U-Net architecture is used to label the pixels of an image and is widely used for image segmentation |
| Recurrent neural networks | GRU, LSTM, Transformers | Uses the temporal dynamic behavior of data with recursive operations. GRU and LSTM mechanisms efficiently carry useful information over long time spans. Transformers also process temporal input data using an attention mechanism, which allows them to handle very large temporal datasets |
| Generative models | Auto-encoders, GANs | Learn the distribution of a given dataset, which can be used to reduce the dimensionality of datasets (eg, auto-encoders) or to generate new data (GANs) |
GRU = gated recurrent units; LSTM = long short-term memory; GANs = generative adversarial networks.
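To make the first row of the table concrete: logistic regression passes a linear combination of the input variables through the logistic (sigmoid) function to produce a probability, which is then thresholded for classification. A minimal sketch with hand-picked weights (not a trained AF model; the feature values and weights are purely illustrative):

```python
import math

def logistic_predict(features, weights, bias):
    """Linear combination of variables mapped through the logistic function."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    p = 1.0 / (1.0 + math.exp(-z))   # probability of the positive class
    return p, int(p >= 0.5)          # classify at a 0.5 threshold

# Illustrative: two features with hand-picked weights and bias
p, label = logistic_predict([1.0, 2.0], [0.8, -0.3], 0.1)
print(round(p, 3), label)  # → 0.574 1
```

In practice the weights are learned from labeled data by maximizing the likelihood, and the same sigmoid-of-a-linear-score structure underlies the final layer of the deep neural networks listed further down the table.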
Figure 1Summary of machine learning (ML) for atrial fibrillation (AF) screening. There are 2 main categories: AF risk prediction and automated AF diagnosis. AF risk prediction involves the application of ML to electronic health records, normal sinus rhythm ambulatory electrocardiograms (ECGs), and normal sinus rhythm 12-lead ECGs to identify the target group for further screening. For automated AF diagnosis, ML is applied to different rhythm monitoring modalities to increase efficiency and the speed of AF diagnosis. BCG = ballistocardiography; EHR = electronic health records; NSR = normal sinus rhythm; PPG = photoplethysmography.