| Literature DB >> 35408045 |
Muhammad Umer1,2, Saima Sadiq1, Hanen Karamti3, Walid Karamti4,5, Rizwan Majeed6, Michele Nappi7.
Abstract
The prediction of heart failure survival is a challenging task that helps medical professionals make the right decisions about patients. Expertise and experience are required to care for heart failure patients. Machine learning models can help with understanding the symptoms of cardiac disease; however, manual feature engineering is challenging and requires expertise to select the appropriate technique. This study proposes a smart healthcare framework using Internet-of-Things (IoT) and cloud technologies that improves heart failure patients' survival prediction without manual feature engineering. The smart IoT-based framework monitors patients on the basis of real-time data and provides timely, effective, and quality healthcare services to heart failure patients. The study also investigates deep learning models for classifying heart failure patients as alive or deceased. The framework employs IoT-based sensors to obtain signals and send them to a cloud web server for processing. These signals are further processed by deep learning models to determine the state of patients. Patients' health records and processing results are shared with a medical professional, who provides emergency help if required. The dataset used in this study contains 13 features and was obtained from the UCI repository, known as Heart Failure Clinical Records. The experimental results revealed that the CNN model is superior to the other deep learning and machine learning models, with an accuracy of 0.9289.
Keywords: IoT; deep learning; heart disease; patient’s mortality prediction; smart healthcare
Year: 2022 PMID: 35408045 PMCID: PMC9003513 DOI: 10.3390/s22072431
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
Overview of the existing studies from the literature.
| Reference | Method | Data | Findings | Limitation |
|---|---|---|---|---|
| [ | Neural Network (NN), SVM, Fuzzy Genetic, CART, and Random Forest | Heart failure database of 136 records from 90 patients | CART proved the most appropriate for evaluating heart failure severity and its type | The proposed model did not generalize well due to the small sample size. Accuracy is quite low in severity assessment. |
| [ | Naïve Bayes, KNN, Decision Tree, and Random Forest | Heart disease patient dataset consisting of 303 instances obtained from UCI | The paper estimates the likelihood of heart disease in patients. | The authors considered 14 of 76 attributes; results could be improved by applying feature selection. |
| [ | Naïve Bayes, KNN, Decision Tree, Random Forest, and the HRFLM model (a combination of Random Forest and a Linear Method) | Four datasets (Cleveland, Hungary, Switzerland, and the VA Long Beach) | The proposed hybrid model predicted heart disease better than the individual machine learning models. | More model combinations, along with feature selection, need to be explored. |
| [ | Multiple Kernel Learning and Adaptive Neuro Fuzzy Inference System | KEGG Metabolic Reaction Network dataset | Experiments followed a two-fold approach to classify patients as heart-diseased or healthy. | A very small number of features or parameters was considered in the experiment. |
| [ | Fisher ranking method, generalized discriminant analysis (GDA) | NSR-CAD and | The authors proposed a noninvasive approach using GDA to automatically detect coronary artery disease from heart rate variability signals. | Models need to be trained on a large dataset of heart rate variability signals for generalizability. |
| [ | Ensemble of Random Forest, Gradient Boosting Machine, and XGBoost | Z-Alizadeh Sani, Statlog, Cleveland, and Hungarian datasets | Used an ensemble model to detect coronary artery disease. | A stacked ensemble of three models also increased the complexity and cost of the model. |
| [ | Ensemble of BiLSTM, BiGRU, and CNN | Heart disease dataset | An ensemble learning framework using deep models was applied to handle the imbalanced heart disease dataset. | The proposed approach has not been tested on a benchmark dataset. |
Figure 1. Smart Healthcare Framework to monitor heart failure patients.
Figure 2. IoT-based sensors to monitor patient health.
Dataset Specifications.
| Sr No. | Attributes | Description | Range | Measured In |
|---|---|---|---|---|
| 1 | Time | Follow-up period | 4–285 | Days |
| 2 | Event (target) | Whether the patient died during the follow-up period | 0,1 | Boolean |
| 3 | Gender | Man or woman | 0,1 | Binary |
| 4 | Smoking | If the patient smokes | 0,1 | Boolean |
| 5 | Diabetes | If the patient has diabetes | 0,1 | Boolean |
| 6 | B.P | If the patient has high blood pressure | 0,1 | Boolean |
| 7 | Anaemia | Decrease in red blood cell or haemoglobin | 0,1 | Boolean |
| 8 | Age | Age of the patient | 40–95 | Years |
| 9 | Ejection fraction | Percentage of blood leaving the heart at each contraction | 14–80 | Percentage |
| 10 | Sodium | Level of sodium in the blood | 114–148 | mEq/L |
| 11 | Creatinine | Level of creatinine in the blood | 0.50–9.40 | mg/dL |
| 12 | Platelets | Platelets in blood | 25.01–850.00 | kiloplatelets/mL |
| 13 | CPK (creatine phosphokinase) | Level of the CPK enzyme in the blood | 23–7861 | mcg/L |
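The continuous attributes above span very different ranges (e.g. CPK up to 7861 mcg/L versus serum creatinine below 10 mg/dL), so neural models typically see them rescaled. The sketch below min-max scales a record using the value ranges from the specification table; the scaling step itself is an illustrative assumption, since the paper does not state which normalization was applied.

```python
# Min-max scaling of the continuous attributes, using the value ranges
# listed in the dataset specification table. The feature names are
# shorthand for this sketch; the scaling choice is an assumption.

FEATURE_RANGES = {
    "age": (40.0, 95.0),                # years
    "ejection_fraction": (14.0, 80.0),  # percent
    "serum_sodium": (114.0, 148.0),     # mEq/L
    "serum_creatinine": (0.50, 9.40),   # mg/dL
    "platelets": (25.01, 850.00),       # kiloplatelets/mL
    "cpk": (23.0, 7861.0),              # mcg/L
}

def min_max_scale(record):
    """Map each continuous feature into [0, 1] using the table ranges."""
    scaled = {}
    for name, value in record.items():
        lo, hi = FEATURE_RANGES[name]
        scaled[name] = (value - lo) / (hi - lo)
    return scaled

# One hypothetical patient record (values within the table ranges):
patient = {"age": 60.0, "ejection_fraction": 38.0, "serum_sodium": 137.0,
           "serum_creatinine": 1.1, "platelets": 262.0, "cpk": 250.0}
scaled = min_max_scale(patient)
```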
Figure 3. CNN-based Deep Learning Architecture.
Figure 4. Layered architecture of the proposed CNN neural network.
Summary of hyper-parameter values for CNN.
| Parameter | Value |
|---|---|
| Embedding dimension | 300 |
| Batch size | 256 |
| Pooling | 2 × 2 |
| No. of filters | 5 × 64 |
| Max_Sequence_length | 130 |
| Epochs | 25 |
| Optimizer | Adam |
| Function | Binary cross entropy |
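To make the hyper-parameter table concrete, the numpy sketch below shows what the convolution and pooling stages compute on one 13-feature record. One plausible reading of "No. of filters: 5 × 64" is 64 filters of kernel size 5, and "2 × 2" pooling is taken here as a pooling width of 2 along the feature axis; the exact layer wiring is an assumption, not the paper's published architecture.

```python
import numpy as np

# A single valid (no-padding) 1-D convolution with ReLU, followed by
# non-overlapping width-2 max pooling. Illustrative only: 64 filters
# of size 5 is one reading of the hyper-parameter table.

def conv1d_relu(x, kernels):
    """x: (length,), kernels: (n_filters, k) -> (n_filters, length - k + 1)."""
    n_filters, k = kernels.shape
    out_len = x.shape[0] - k + 1
    out = np.empty((n_filters, out_len))
    for i in range(out_len):
        out[:, i] = kernels @ x[i:i + k]  # dot product at each position
    return np.maximum(out, 0.0)           # ReLU activation

def max_pool(feat, width=2):
    """Non-overlapping max pooling along the time axis."""
    n, length = feat.shape
    length -= length % width              # drop the trailing remainder
    return feat[:, :length].reshape(n, -1, width).max(axis=2)

rng = np.random.default_rng(0)
x = rng.standard_normal(13)               # one 13-feature record
kernels = rng.standard_normal((64, 5))    # 64 filters of size 5
pooled = max_pool(conv1d_relu(x, kernels))  # shape: (64, 4)
```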
Performance Evaluation Parameters.
| Sr No. | Metric | Formula |
|---|---|---|
| 1 | Accuracy | (TP + TN) / (TP + TN + FP + FN) |
| 2 | Precision | TP / (TP + FP) |
| 3 | Recall | TP / (TP + FN) |
| 4 | F-score | 2 × (Precision × Recall) / (Precision + Recall) |
TP = True Positive, TN = True Negative, FP = False Positive, FN = False Negative.
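The four metrics can be written out directly from the TP/TN/FP/FN counts defined above:

```python
# Standard classification metrics from confusion-matrix counts,
# as defined in the evaluation-parameters table.

def evaluation_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f_score = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_score

# Example with made-up counts (not taken from the paper):
acc, prec, rec, f1 = evaluation_metrics(tp=40, tn=40, fp=10, fn=10)
```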
Performance comparison of the proposed CNN deep neural network model with RF & ETC [16].
| Model | Accuracy | Precision | Recall | F-Score |
|---|---|---|---|---|
| CNN | 0.9289 | 0.94 | 0.94 | 0.94 |
| MLP | 0.9201 | 0.93 | 0.92 | 0.93 |
| RNN | 0.9001 | 0.88 | 0.90 | 0.89 |
| LSTM | 0.9169 | 0.92 | 0.92 | 0.92 |
| RF without SMOTE [16] | 0.8889 | 0.89 | 0.89 | 0.89 |
| ETC with SMOTE [16] | 0.9262 | 0.93 | 0.93 | 0.93 |
Accuracy comparison of deep learning models with machine learning models using significant features after applying SMOTE.
| Models | Accuracy |
|---|---|
| DT [ | 0.8778 |
| AdaBoost [ | 0.8852 |
| LR [ | 0.8442 |
| SGD [ | 0.5491 |
| RF [ | 0.9188 |
| GBM [ | 0.8852 |
| ETC [ | 0.9262 |
| GNB [ | 0.7540 |
| SVM [ | 0.7622 |
| RNN | 0.9001 |
| LSTM | 0.9169 |
| MLP | 0.9201 |
| CNN | 0.9289 |
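Several of the baselines above are evaluated after SMOTE oversampling. The core SMOTE idea is to synthesize new minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. The following is a minimal illustrative re-implementation in numpy, not the imbalanced-learn code typically used in practice:

```python
import numpy as np

# Minimal SMOTE-style oversampling sketch: each synthetic sample lies
# on the line segment between a random minority point and one of its
# k nearest minority neighbours.

def smote(minority, n_new, k=3, seed=0):
    """minority: (n, d) array of minority-class rows -> (n_new, d) array."""
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(minority))
        x = minority[i]
        d = np.linalg.norm(minority - x, axis=1)   # distances to all rows
        neighbours = np.argsort(d)[1:k + 1]        # skip the point itself
        nb = minority[rng.choice(neighbours)]
        gap = rng.random()                         # interpolation factor in [0, 1)
        synthetic.append(x + gap * (nb - x))
    return np.array(synthetic)

# Toy 2-D minority class (corners of the unit square):
minority = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
new_points = smote(minority, n_new=6)
```

Because each synthetic point is a convex combination of two existing points, the new samples stay inside the minority class's region of feature space.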
Performance comparison of the proposed CNN deep neural network model with transfer learning models.
| Model | Accuracy | Precision | Recall | F-Score |
|---|---|---|---|---|
| VGG16 | 0.9129 | 0.90 | 0.92 | 0.91 |
| AlexNet | 0.9071 | 0.90 | 0.90 | 0.90 |
| CNN | 0.9289 | 0.94 | 0.94 | 0.94 |
Statistics for required training time for classifiers.
| Model | Training Time |
|---|---|
| Proposed approach | 35 min |
| VGG16 | 39 min |
| AlexNet | 47 min |
Ten-fold cross-validation results using the CNN deep model.
| Fold Number | Accuracy | Precision | Recall | F-Score |
|---|---|---|---|---|
| 1st-Fold | 0.915 | 0.916 | 0.921 | 0.911 |
| 2nd-Fold | 0.912 | 0.907 | 0.922 | 0.926 |
| 3rd-Fold | 0.911 | 0.923 | 0.923 | 0.934 |
| 4th-Fold | 0.918 | 0.907 | 0.949 | 0.935 |
| 5th-Fold | 0.904 | 0.911 | 0.948 | 0.933 |
| 6th-Fold | 0.916 | 0.926 | 0.947 | 0.932 |
| 7th-Fold | 0.924 | 0.907 | 0.916 | 0.941 |
| 8th-Fold | 0.914 | 0.914 | 0.945 | 0.937 |
| 9th-Fold | 0.902 | 0.924 | 0.954 | 0.918 |
| 10th-Fold | 0.947 | 0.945 | 0.957 | 0.949 |
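The ten-fold results above come from partitioning the dataset into ten roughly equal folds, each serving once as the test set. A plain-Python sketch of that index bookkeeping (assuming the standard 299-record UCI heart failure dataset):

```python
# Sketch of k-fold splitting: partition n record indices into k folds;
# each fold is the test set once, the rest form the training set.

def k_fold_indices(n, k=10):
    """Yield (train, test) index lists for k roughly equal folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

folds = list(k_fold_indices(299, 10))
```

In practice the records would be shuffled (and often stratified by the death-event label) before splitting; this sketch keeps the original order for clarity.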
Figure 5. Training and testing accuracy on the heart failure clinical records dataset.
Comparison between the related work and the proposed model.
| Work | Features > 5 | Monthly Patient Record | AI | IoT |
|---|---|---|---|---|
| [ | × | × | ✓ | ✓ |
| [ | × | × | ✓ | ✓ |
| [ | × | × | ✓ | ✓ |
| [ | × | × | ✓ | ✓ |
| [ | × | × | ✓ | ✓ |
| [ | × | × | × | ✓ |
| [ | × | × | × | ✓ |
| [ | × | × | × | ✓ |
| Proposed model | ✓ | ✓ | ✓ | ✓ |
Comparison of the proposed approach with the existing literature.
| Authors | Models | Accuracy |
|---|---|---|
| Parthiban et al. [ | Naïve Bayes | 0.74 |
| Kumar Dwivedi [ | Logistic regression | 0.85 |
| Vembandasamy et al. [ | Naïve Bayes | 0.86 |
| Shah et al. [ | K-NN | 0.90 |
| Proposed approach | CNN | 0.9289 |