Jorge Antonio Orozco Torres, Alejandro Medina Santiago, José Manuel Villegas Izaguirre, Monica Amador García, Alberto Delgado Hernández.
Abstract
This paper presents the development of a multilayer feed-forward neural network for the diagnosis of hypertension, based on a population study. Several physiological factors that are vital to determining the risk of hypertension were considered in the development of this architecture; a diagnostic system can offer an answer that is not easy to reach by conventional means. The results obtained reflect the health conditions affecting humanity today as a consequence of the social environment in which we live, e.g., economics, stress, smoking, alcoholism, drug addiction, obesity, diabetes, and physical inactivity, all of which can lead to hypertension. The neural network-based diagnostic system shows an effectiveness of 90%, generating high expectations for diagnosing the risk of hypertension from the analyzed physiological data.
Keywords: artery hypertension; backpropagation neuronal network; health diagnosis; public health; sustainability
Year: 2022 PMID: 35890963 PMCID: PMC9316039 DOI: 10.3390/s22145272
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Sequence of the procedure for taking the medical variables to be used.
Limits considered in each medically permissible measurement process.

| Measurement Type | Category | Limits |
|---|---|---|
| Glucose | No diabetes | Before meals: 70–110 mg/dL; after meals: less than 140 mg/dL |
| Glucose | With diabetes | Before meals: 80–130 mg/dL; after meals: less than 180 mg/dL |
| Blood pressure | Systolic (highest value) | 90 or lower: hypotension; 91–119: normal; 120–129: high; 130–139: stage 1 hypertension; 140 or higher: stage 2 hypertension; greater than 180: hypertensive crisis |
| Blood pressure | Diastolic (lowest value) | 60 or lower: hypotension; 61–79: normal; less than 80 (with elevated systolic): high; 80–89: stage 1 hypertension; 90 or greater: stage 2 hypertension; greater than 120: hypertensive crisis |
| Weight | | Variations in observation may be due to sex, age of the individual, and many other factors. |
| Body Mass Index (Quetelet index) | BMI = weight (kg) / height² (m²) | Values below 18 indicate low weight; 18–24.9: normal weight; 25–26.9: overweight; 27–40: varying degrees of obesity. |
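The BMI formula and cut-offs above translate directly into code. The following is a minimal sketch; the function names are illustrative, not from the paper:

```python
def bmi(weight_kg, height_m):
    """Quetelet index: weight in kg divided by height in metres squared."""
    return weight_kg / height_m ** 2

def bmi_category(value):
    """Classify a BMI value using the cut-offs from the table above."""
    if value < 18:
        return "low weight"
    if value < 25:
        return "normal weight"
    if value < 27:
        return "overweight"
    return "obesity"
```

For example, a 70 kg person 1.75 m tall has a BMI of about 22.9, which falls in the normal-weight band.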
Figure 2. Generic model of an artificial neuron.
Figure 3. Multilayer perceptron architecture with backpropagation algorithm.
Figure 4. Multilayer feed-forward neural network.
Sample of data analyzed in the project.
| Age | DxTx (glucose) | HR | RR | Temp °C | Systolic | Diastolic | Smoke | BMI |
|---|---|---|---|---|---|---|---|---|
| 20 | 118 | 75 | 21 | 37 | 122 | 86 | 1 | 34.6 |
| 21 | 174 | 84 | 15 | 36.7 | 94 | 76 | 0 | 24.7 |
| 22 | 124 | 86 | 18 | 36.7 | 112 | 85 | 0 | 30.7 |
| 22 | 160 | 88 | 18 | 36.9 | 111 | 78 | 1 | 23.4 |
| 19 | 112 | 58 | 17 | 36 | 112 | 75 | 0 | 22.7 |
| 23 | 125 | 63 | 13 | 37 | 115 | 82 | 0 | 28.8 |
| 21 | 127 | 78 | 15 | 35.6 | 135 | 87 | 0 | 25.3 |
| 18 | 103 | 77 | 18 | 36.4 | 130 | 107 | 1 | 19.4 |
| 24 | 147 | 79 | 17 | 36.2 | 116 | 78 | 0 | 19.7 |
Data classification.
| Ranking | Expected Value | Simulated Value |
|---|---|---|
| Class 1 | 0.25 | 0.2819 |
| Class 2 | 0.50 | 0.3721 |
| Class 3 | 0.75 | 0.7439 |
| Class 4 | 1 | 0.8682 |
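One simple way to turn the network's continuous output into a class label is nearest-target decoding against the expected values in the table above. The paper does not state its decoding rule, so this is an assumption:

```python
# Expected class target values from the table above (assumed encoding).
CLASS_TARGETS = {1: 0.25, 2: 0.50, 3: 0.75, 4: 1.00}

def decode_output(y):
    """Pick the class whose expected target value is closest to the raw output y."""
    return min(CLASS_TARGETS, key=lambda c: abs(CLASS_TARGETS[c] - y))
```

Under this rule the simulated value 0.2819 decodes to class 1 and 0.7439 decodes to class 3, matching the table; the values 0.3721 and 0.8682 sit between two targets, which illustrates why they are the weaker outputs in the table.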
Figure 5. (a) Training of the neural network; (b) state of the network training; (c) behavior of the network in its training with the regression method.
Figure 6. Diagnostic system pseudocode.
Figure 7. Age of the population analyzed.
Record of student information.
| No. | Sex | Age | DxTx (glucose) | HR | RR | Temp °C | Blood Pressure | Systolic | Diastolic | Smoke | BMI |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 1 | M | 20 | 118 | 75 | 21 | 37 | 122/86 | 122 | 86 | Yes | 34.6 |
| 2 | F | 21 | 174 | 84 | 15 | 36.7 | 94/76 | 94 | 76 | No | 24.7 |
| 3 | M | 22 | 124 | 86 | 18 | 36.7 | 112/85 | 112 | 85 | No | 30.7 |
| 4 | F | 22 | 160 | 88 | 18 | 36.9 | 111/78 | 111 | 78 | Yes | 23.4 |
| 5 | M | 19 | 112 | 58 | 17 | 36 | 112/75 | 112 | 75 | No | 22.7 |
| 6 | M | 23 | 125 | 63 | 13 | 37 | 115/82 | 115 | 82 | No | 28.8 |
| 7 | F | 21 | 127 | 78 | 15 | 35.6 | 135/87 | 135 | 87 | No | 25.3 |
| 8 | M | 18 | 103 | 77 | 18 | 36.4 | 130/107 | 130 | 107 | Yes | 19.4 |
| 9 | F | 24 | 147 | 79 | 17 | 36.2 | 116/78 | 116 | 78 | No | 19.7 |
| 10 | M | 21 | 130 | 66 | 19 | 36.3 | 129/66 | 129 | 66 | Yes | 19.7 |
| 11 | M | 20 | 116 | 98 | 17 | 32.6 | 140/118 | 140 | 118 | No | 35.9 |
| 12 | M | 22 | 122 | 91 | >12 | 36.7 | 127/92 | 127 | 92 | No | 27.7 |
Blood pressure categories, as defined by Binu, D. and Rajakumar, B. in [25].

| BLOOD PRESSURE CATEGORY | SYSTOLIC mm Hg | | DIASTOLIC mm Hg |
|---|---|---|---|
| NORMAL | LESS THAN 120 | AND | LESS THAN 80 |
| ELEVATED | 120–129 | AND | LESS THAN 80 |
| HIGH BLOOD PRESSURE (HYPERTENSION) STAGE 1 | 130–139 | OR | 80–89 |
| HIGH BLOOD PRESSURE (HYPERTENSION) STAGE 2 | 140 OR HIGHER | OR | 90 OR HIGHER |
| HYPERTENSIVE CRISIS | HIGHER THAN 180 | AND/OR | HIGHER THAN 120 |
Analyzed information.
| | | | | | | | | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 133 | F | N | N | 3 | 60 | 159 | 56 | 3 | 1 | 2 | 0 | 35 | 2 | 2 | 2 | 2 | 2 |
| 115 | M | N | Y | 1 | 55 | 107 | 65 | 1 | 1 | 2 | 0 | 17 | 2 | 2 | 1 | 3 | 2 |
| 140 | M | N | Y | 1 | 18 | 130 | 59 | 2 | 1 | 1 | 0 | 26 | 3 | 2 | 1 | 1 | 3 |
| 132 | M | Y | N | 2 | 19 | 230 | 57 | 3 | 2 | 3 | 1 | 49 | 3 | 3 | 1 | 1 | 2 |
| 133 | M | N | N | 2 | 58 | 201 | 74 | 2 | 1 | 3 | 0 | 25 | 2 | 2 | 1 | 2 | 3 |
| 138 | F | N | N | 3 | 55 | 166 | 167 | 2 | 1 | 1 | 1 | 25 | 2 | 1 | 3 | 2 | 3 |
| 133 | F | Y | N | 1 | 22 | 188 | 66 | 3 | 1 | 3 | 1 | 30 | 3 | 1 | 3 | 1 | 1 |
| 67 | F | Y | N | 3 | 52 | 123 | 67 | 1 | 1 | 2 | 0 | 19 | 2 | 3 | 2 | 3 | 2 |
| 138 | M | Y | N | 1 | 46 | 106 | 73 | 1 | 1 | 3 | 1 | 13 | 2 | 2 | 1 | 2 | 1 |
Blood pressure values.
| Class | Blood Pressure Category | Systolic Pressure |
|---|---|---|
| Clase_1 | Normal | Less than 120 |
| Clase_2 | High | 120–129 |
| Clase_3 | Stage 1 hypertension | 130–139 |
| Clase_4 | Stage 2 hypertension | 140 or higher |
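The class boundaries in this table can be sketched as a small labeling function. This is an illustration only; the function name is not from the paper:

```python
def systolic_class(systolic_mmHg):
    """Assign the class label used in the tables above from a systolic reading."""
    if systolic_mmHg < 120:
        return "Clase_1"   # normal
    if systolic_mmHg <= 129:
        return "Clase_2"   # high
    if systolic_mmHg <= 139:
        return "Clase_3"   # stage 1 hypertension
    return "Clase_4"       # stage 2 hypertension
```

For instance, student 7 in the record table (systolic 135) would fall into Clase_3.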
Figure 8. Age–DxTx graph.
Figure 9. Age–Heart Rate graph.
Figure 10. Age–Respiratory Rate graph.
Figure 11. Age–Temperature graph.
Figure 12. Age–BMI graph.
Correlation of variables, Class 1 (normal systolic pressure).

| Normal | Age | DxTx | HR | RR | Temp °C | BMI | Systolic |
|---|---|---|---|---|---|---|---|
| Age | 1 | | | | | | |
| DxTx | 0.042692 | 1 | | | | | |
| HR | 0.098773 | | 1 | | | | |
| RR | 0.075499 | −0.03204 | 0.083564 | 1 | | | |
| Temp °C | −0.01297 | −0.03423 | 0.085965 | 0.077646 | 1 | | |
| BMI | −0.02441 | 0.010178 | −0.00661 | | −0.02438 | 1 | |
| Systolic | −0.08638 | | −0.14349 | 0.094649 | −0.01691 | | 1 |
Values obtained from the ANN with trainlm/newff.
| net = newff(minmax(inputs),[14 50 20 4],{'tansig','logsig','logsig','purelin'},'trainlm') | | | |
|---|---|---|---|
| | Tests | Correct | Classification |
| Clase_1 | 5 | 3 | 60% |
| | 10 | 8 | 80% |
| | 15 | 13 | 87% |
| Clase_2 | 5 | 4 | 80% |
| | 10 | 8 | 80% |
| | 15 | 14 | 93% |
| Clase_3 | 5 | 3 | 60% |
| | 10 | 7 | 70% |
| | 15 | 6 | 40% |
| Clase_4 | 5 | 5 | 100% |
| | 10 | 10 | 100% |
| | 15 | 15 | 100% |
Values obtained from ANN with feedforwardnet/trainlm.
| net = feedforwardnet([10 25 15 4],{'tansig','tansig','tansig','tansig'},'trainlm') | | | |
|---|---|---|---|
| | Tests | Correct | Classification |
| Clase_1 | 5 | 5 | 100% |
| | 10 | 10 | 100% |
| | 15 | 15 | 100% |
| Clase_2 | 5 | 4 | 80% |
| | 10 | 6 | 60% |
| | 15 | 12 | 80% |
| Clase_3 | 5 | 4 | 80% |
| | 10 | 7 | 70% |
| | 15 | 12 | 80% |
| Clase_4 | 5 | 5 | 100% |
| | 10 | 10 | 100% |
| | 15 | 15 | 100% |
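As a rough non-MATLAB illustration, the forward pass of the second architecture (hidden layers of 10, 25, 15 and 4 neurons, all with tansig, i.e., tanh) can be sketched in NumPy. The weights below are random placeholders rather than the paper's trainlm-trained weights, and the 9-input layout (Age through BMI, as in the sample table) is an assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
layer_sizes = [9, 10, 25, 15, 4]  # assumed 9 inputs: Age, DxTx, HR, RR, Temp, Systolic, Diastolic, Smoke, BMI

# Random placeholder weights; trainlm (Levenberg-Marquardt) training is not reproduced here.
weights = [rng.standard_normal((n_out, n_in)) * 0.1
           for n_in, n_out in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

def forward(x):
    """Propagate one sample through every layer with tansig (tanh) activation."""
    a = np.asarray(x, dtype=float)
    for W, b in zip(weights, biases):
        a = np.tanh(W @ a + b)  # MATLAB's tansig is equivalent to tanh
    return a

out = forward([20, 118, 75, 21, 37, 122, 86, 1, 34.6])
```

The output is a 4-element vector, one value per class, bounded in (−1, 1) by the tanh activation.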
Comparative table of references [30,31,32,33].
| Machine Learning Method | Comments |
|---|---|
| Machine Learning SVM | This article mentions the use of SVM in combination with simple k, aiming to obtain a lower-order error and determine the tumour region by consolidating the inherent image structure progression [ |
| The Support Vector Machine (SVM) algorithm can be used for classification and regression problems. However, SVMs are quite popular for relatively complex types of small to medium-sized classification datasets. In this algorithm, the data points are separated by a hyperplane and the kernel determines the appearance of the hyperplane. If we plot multiple variables on a normal scatter plot, in many cases, that plot cannot separate two or more classes of data. The kernel of an SVM is an important element, which can convert low dimensional data into a higher dimensional space, ref. [ | |
| Cervical cancer can be diagnosed with the help of algorithms such as decision tree, logistic regression and support vector machine (SVM) [ | |
| Several machine learning classification algorithms have been used in Predictive Model Selection (PMS), namely, support vector machine (SVM), decision tree classifier (DTC), random forest (RF), logistic regression (LR), gradient boosting (GB), XGBoost, adaptive boosting (AB) and k-nearest neighbour (KNN). The authors refer to them theoretically but do not support their efficiency in their implementation [ | |
| The paper presented a skin cancer detection system using a support vector machine (SVM), which helps in early detection of skin cancer disease. They used traditional image processing and feature engineering methods for effective feature selection and support vector machine (SVM) algorithms for feature classification [ | |
| Performance evaluation was carried out using four different classifiers: decision tree (DT), k-nearest neighbour (KNN), boosted decision tree (BT), and SVM. The classification was performed using the relevance vector machine and SVM classifier, which achieved 92.4% [ | |
| Naive Bayes machine learning | This type of machine learning is not mentioned in any of the cited and suggested articles. |
| Machine Learning ANN | The authors conducted a survey-based study on cervical cancer detection, including a performance analysis to determine the accuracy of several distinctive types of architecture in an artificial neural network (ANN), where the ANN was used to identify cancerous, normal and abnormal cells. Ref. [ |
Comparative table of references [34,35,36,37].
| Machine Learning Method | Comments |
|---|---|
| Machine Learning SVM | SVM models are characterized by processing both linear and non-linear data. The model aims to draw decision boundaries between data points of different classes and separate them with the maximum margin [ |
| SVM slightly outperforms ANN in recognition using one dataset. The exact reason for this improvement is difficult to pinpoint and could simply be due to better parameter selection or the diverse and non-linear nature of the dataset, or both. It could also be due to the fact that SVM converges to a global minimum and allows for better noise tolerance. | |
| The support vector machine is a very popular supervised machine learning technique (with a predefined target variable) that can be used as a classifier and as a predictor [ | |
| SVM achieved the highest accuracy (0.9985). This part compares three machine learning algorithms, KNN, Bayesian and SVM, applied to the same objective [ | |
| - SVM can achieve better generalization ability in small sample classification tasks, and has been widely used in medicine. | |
| - Theoretically, SVM can achieve optimal classification. | |
| - SVM can be well applied to pattern recognition, time series prediction and regression estimation, among others [ | |
| Naive Bayes machine learning | Naive Bayes is a simple but effective classification technique based on Bayes' Theorem. It assumes independence between predictors, i.e., the attributes or features must be uncorrelated or unrelated to each other. Even if there is dependence, all these characteristics or attributes contribute independently to the likelihood. |
| | - In ensemble modeling, two or more related but different analytical models are used and their results are combined into a single score. An ensemble of SVM, KNN and ANN has been used to achieve an accuracy of 94.12%, via the majority-vote-based model demonstrated by Saba Bashir et al. [ |
| Machine Learning ANN | A comparison is made between the SVM and ANN methods implemented in pattern recognition, specifically in the detection of insects contaminating food [ |