| Literature DB >> 35087583 |
Rasha M Abd El-Aziz1, Rayan Alanazi1, Osama R Shahin1, Ahmed Elhadad1, Amr Abozeid1, Ahmed I Taloba1, Riyad Alshalabi2.
Abstract
In some emergency situations, patients must be observed and treated continually. Due to time constraints, however, visiting the hospital for such tasks is challenging; a remote healthcare monitoring system can accomplish this instead. The proposed system introduces an effective data science technique for an IoT-supported healthcare monitoring system that leverages the rapid adoption of cloud computing to improve data-processing efficiency and the accessibility of data in the cloud. Many IoT sensors are employed to collect real healthcare data, which are retained in the cloud for data science processing. In the Healthcare Monitoring-Data Science Technique (HM-DST), a modified data science technique, the Improved Pigeon Optimization (IPO) algorithm, is first employed to group the data stored in the cloud, which helps improve the prediction rate. Next, an optimal feature selection technique for feature extraction and selection is presented. A Backtracking Search-Based Deep Neural Network (BS-DNN) is then used to classify human health status. Finally, the proposed system's performance is evaluated on various real-time healthcare datasets and compared against existing smart healthcare monitoring systems.
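The abstract describes a three-stage pipeline: grouping cloud-stored sensor data (via IPO), feature selection, and classification (via BS-DNN). The paper's actual algorithms are not reproduced in this entry, so the sketch below uses plain k-means and variance-based feature selection purely as illustrative stand-ins for the IPO clustering and feature-selection stages; all function names and the toy data are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def cluster_cloud_data(X, k, iters=20):
    # Simple k-means used here only as a stand-in for the paper's
    # Improved Pigeon Optimization (IPO) grouping of cloud-stored data.
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest center.
        labels = np.argmin(((X[:, None, :] - centers) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def select_features(X, n):
    # Placeholder feature selection: keep the n highest-variance columns
    # (the paper's "optimum feature selection" method is not specified here).
    idx = np.argsort(X.var(axis=0))[::-1][:n]
    return X[:, idx]

# Toy "IoT sensor" readings: 100 samples x 6 vital-sign features.
X = rng.normal(size=(100, 6))
labels = cluster_cloud_data(X, k=3)
X_sel = select_features(X, n=4)
print(labels.shape, X_sel.shape)
```

The selected feature matrix `X_sel` would then feed the classification stage (the BS-DNN in the paper); a generic feed-forward classifier could be substituted in an experiment, but the backtracking-search training described in the abstract is specific to the paper.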
Year: 2022 PMID: 35087583 PMCID: PMC8789444 DOI: 10.1155/2022/7425846
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1IoT system components.
Figure 2Working process of the proposed HM-DST mechanism.
Algorithm 1Grouping of cloud data with IPO.
Figure 3Flow chart of the proposed model.
Total square error for various momentum and learning-rate settings across iteration counts.
| Momentum | Learning rate | 1000 iterations | 2000 iterations | 3000 iterations | 4000 iterations | 5000 iterations | 6000 iterations |
|---|---|---|---|---|---|---|---|
| 0.05 | 0.95 | 5.862 | 4.562 | 5.013 | 3.124 | 1.916 | 1.714 |
| 0.15 | 0.85 | 5.825 | 4.213 | 3.119 | 4.814 | 1.016 | 1.014 |
| 0.25 | 0.75 | 5.482 | 5.702 | 4.216 | 4.542 | 2.819 | 1.891 |
| 0.35 | 0.65 | 5.989 | 5.271 | 4.582 | 4.819 | 2.904 | 2.919 |
| 0.45 | 0.55 | 5.922 | 5.535 | 4.846 | 4.932 | 2.819 | 3.861 |
| 0.55 | 0.45 | 6.253 | 5.583 | 5.056 | 4.829 | 2.881 | 3.901 |
| 0.65 | 0.35 | 5.216 | 5.288 | 5.622 | 4.916 | 3.908 | 2.808 |
| 0.75 | 0.25 | 6.249 | 5.313 | 5.332 | 5.210 | 4.560 | 4.571 |
| 0.85 | 0.15 | 7.186 | 6.123 | 6.115 | 5.617 | 4.619 | 5.851 |
| 0.95 | 0.05 | 8.214 | 7.284 | 7.518 | 6.181 | 5.189 | 5.957 |
Comparison of the results of the proposed and existing classifiers.
| Metrics in % | BS-DNN | DNN | LR | MLP | KNN | ANN |
|---|---|---|---|---|---|---|
| Accuracy | 93.48 | 93.5 | 92.6 | 92.2 | 91.53 | 90.13 |
| Sensitivity | 93.26 | 91.22 | 89.73 | 89.35 | 89.28 | 89.22 |
| Mean absolute error | 83.53 | 82.16 | 81.06 | 79.99 | 79.96 | 79.06 |
|  | 97.32 | 96.52 | 96.32 | 96.10 | 96.90 | 94.41 |
| Precision | 94.21 | 92.91 | 91.9 | 92.6 | 92.4 | 89.26 |
| Specificity | 95.71 | 95.36 | 95.03 | 94.41 | 93.25 | 93.22 |
| Recall | 96.41 | 92.16 | 80.99 | 89.41 | 89.26 | 89.13 |
Figure 4Comparison of the performances of the proposed and existing classifiers.