Ariyo Oluwasanmi, Muhammad Umar Aftab, Edward Baagyere, Zhiguang Qin, Muhammad Ahmad, Manuel Mazzara.
Abstract
Today, accurate, automated abnormality diagnosis and identification have become of paramount importance, as they are involved in many critical and life-saving scenarios. To this end, we propose three artificial-intelligence models that apply deep learning algorithms to analyze and detect anomalies in human heartbeat signals. The first is an attention autoencoder that maps input data to a lower-dimensional latent representation with maximum feature retention, paired with a reconstruction decoder that minimizes remodeling loss; an attention module embedded at the autoencoder's bottleneck learns the salient activations of the encoded distribution. Additionally, a variational autoencoder (VAE) and a long short-term memory (LSTM) network are designed to learn the Gaussian distribution of the generative reconstruction and to analyze sequential time-series data, respectively. All three models showed an outstanding ability to detect anomalies in the five-thousand-signal electrocardiogram (ECG5000) dataset, reaching 99% accuracy and a 99.3% precision score in distinguishing healthy heartbeats from those of patients with severe congestive heart failure.
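The attention autoencoder described above can be sketched as a single forward pass: encode the beat to a latent code, re-weight the latent units with a softmax attention module at the bottleneck, then decode and measure the remodeling (reconstruction) loss. The following is a minimal numpy sketch; the linear encoder/decoder, the latent size of 32, and the exact softmax attention form are illustrative assumptions, not the paper's reported architecture (only the 140-sample ECG5000 beat length comes from the dataset itself).

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(x, W_enc):
    # Map the input beat to a lower-dimensional latent representation.
    return np.tanh(x @ W_enc)

def attention(z, W_att):
    # Bottleneck attention: score each latent unit, softmax-normalise the
    # scores, and re-weight the latent code by its salient activations.
    scores = z @ W_att                                        # (batch, latent)
    weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    return z * weights

def decoder(z, W_dec):
    # Reconstruct the original signal length from the attended latent code.
    return z @ W_dec

# ECG5000 beats are 140 samples long; latent size 32 is an assumption.
x = rng.normal(size=(8, 140))
W_enc = rng.normal(scale=0.1, size=(140, 32))
W_att = rng.normal(scale=0.1, size=(32, 32))
W_dec = rng.normal(scale=0.1, size=(32, 140))

z = attention(encoder(x, W_enc), W_att)
x_hat = decoder(z, W_dec)
recon_error = np.mean((x - x_hat) ** 2, axis=1)  # per-beat remodeling loss
print(x_hat.shape, recon_error.shape)            # (8, 140) (8,)
```

In a trained model the three weight matrices would be learned jointly by minimizing `recon_error`; here they are random and serve only to show the shapes and data flow.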
Keywords: anomaly detection; attention module; autoencoder; long short-term memory (LSTM); variational autoencoder (VAE)
Year: 2021 PMID: 35009666 PMCID: PMC8747546 DOI: 10.3390/s22010123
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Figure 1. Temporal distribution of normal and abnormal heartbeats from the ECG5000 dataset. Left column: normal heartbeat; middle column: anomalous heartbeat; right column: combination of normal and anomalous heartbeats.
Figure 2. Spatial distribution of data-point outliers and latent representation visualization.
Figure 3. Architecture of the autoencoder model with the encoder, attention module, and decoder.
Figure 4. Architectural procedure for training the autoencoder model.
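Autoencoder-based anomaly detection of the kind trained here typically works by fitting the model on normal beats only, so anomalous beats reconstruct poorly and can be flagged by thresholding the reconstruction error. The sketch below uses synthetic error values and a mean-plus-two-standard-deviations threshold; both are illustrative assumptions, not the paper's exact decision rule.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-beat reconstruction errors: an autoencoder trained only
# on normal beats reconstructs them well (low error) and anomalous,
# unseen beats poorly (high error).
normal_err = rng.normal(loc=0.05, scale=0.01, size=500)
anomal_err = rng.normal(loc=0.30, scale=0.05, size=100)

# Threshold derived from the normal (training) error distribution;
# mean + 2*std is one common choice among several.
threshold = normal_err.mean() + 2 * normal_err.std()

def is_anomaly(err, thr):
    # A beat is flagged anomalous when its error exceeds the threshold.
    return err > thr

flags = is_anomaly(np.concatenate([normal_err, anomal_err]), threshold)
print(f"threshold={threshold:.3f}, beats flagged={flags.sum()}")
```

With well-separated error distributions, essentially all anomalous beats exceed the threshold while only the upper tail of the normal beats is misflagged, which is what the precision/recall trade-off in the results tables measures.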
Figure 5. LSTM framework displaying the hidden state of the time steps.
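The per-time-step hidden states shown in Figure 5 come from the standard LSTM recurrence: at each sample of the beat, input, forget, and output gates update a cell state and emit a hidden state. A minimal numpy sketch of that recurrence follows; the hidden size of 16 and random weights are assumptions for illustration (the 140 time steps match the ECG5000 beat length).

```python
import numpy as np

rng = np.random.default_rng(2)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h, c, W, U, b):
    # One LSTM time step: compute all four gate pre-activations at once,
    # then update the cell state and hidden state.
    z = x_t @ W + h @ U + b                 # (4 * hidden,)
    i, f, o, g = np.split(z, 4)
    i, f, o, g = sigmoid(i), sigmoid(f), sigmoid(o), np.tanh(g)
    c = f * c + i * g                       # new cell state
    h = o * np.tanh(c)                      # new hidden state
    return h, c

hidden, n_steps = 16, 140                   # 140 samples per ECG5000 beat
W = rng.normal(scale=0.1, size=(1, 4 * hidden))
U = rng.normal(scale=0.1, size=(hidden, 4 * hidden))
b = np.zeros(4 * hidden)

h, c = np.zeros(hidden), np.zeros(hidden)
signal = rng.normal(size=(n_steps, 1))
hidden_states = []
for t in range(n_steps):
    h, c = lstm_step(signal[t], h, c, W, U, b)
    hidden_states.append(h)
print(np.stack(hidden_states).shape)        # one hidden state per time step
```

A classifier head on the final hidden state (or on all of them) would then predict normal vs. anomalous, as in the confusion matrix of Figure 6.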
Comparison of performance results on the ECG5000 test set.

| Model | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|
| Hierarchical | 0.955 | 0.946 | 0.958 | 0.946 |
| Spectral | 0.958 | 0.951 | 0.947 | 0.947 |
| Val-thresh | 0.968 | - | - | 0.957 |
| VRAE + Wasserstein | 0.951 | - | - | 0.946 |
| VRAE + k-Means | 0.959 | - | - | 0.952 |
| VAE | 0.952 | 0.925 | 0.984 | 0.954 |
| AE-Without-Attention | 0.970 | 0.955 | 0.988 | 0.971 |
| CAT-AE | 0.972 | 0.956 | 0.992 | 0.974 |
| LSTM | 0.990 | 0.989 | 0.993 | 0.991 |
Figure 6. Confusion matrix of the LSTM-predicted result on the test set.
Comparison of performance results on the ECG5000 validation set.

| Model | Accuracy | Precision | Recall | F1-score |
|---|---|---|---|---|
| VAE | 0.948 | 0.932 | 0.978 | 0.954 |
| AE-Without-Attention | 0.956 | 0.950 | 0.970 | 0.960 |
| CAT-AE | 0.958 | 0.946 | 0.977 | 0.946 |
| LSTM | 0.984 | 0.998 | 0.973 | 0.986 |