Zahra Hoodbhoy1, Mohammad Noman2, Ayesha Shafique2, Ali Nasim2, Devyani Chowdhury3, Babar Hasan1.
Abstract
BACKGROUND: A major contributor to under-five mortality is the death of children in the 1st month of life. Intrapartum complications are one of the major causes of perinatal mortality. Fetal cardiotocograph (CTGs) can be used as a monitoring tool to identify high-risk women during labor. AIM: The objective of this study was to study the precision of machine learning algorithm techniques on CTG data in identifying high-risk fetuses.Entities:
Keywords: Fetal cardiotocography; machine learning; perinatal risk
Year: 2019 PMID: 31681548 PMCID: PMC6822315 DOI: 10.4103/ijabmr.IJABMR_370_18
Source DB: PubMed Journal: Int J Appl Basic Med Res ISSN: 2229-516X
Essential cardiotocogram attributes used in the models
| Variable symbol | Variable description |
|---|---|
| LB | Fetal heart rate baseline (beats per minute) |
| AC | Number of accelerations per second |
| FM | Number of fetal movements per second |
| UC | Number of uterine contractions per second |
| DL | Number of light decelerations per second |
| DS | Number of severe decelerations per second |
| DP | Number of prolonged decelerations per second |
| ASTV | Percentage of time with abnormal short-term variability |
| MSTV | Mean value of short-term variability |
| ALTV | Percentage of time with abnormal long-term variability |
| MLTV | Mean value of long-term variability |
| Width | Width of FHR histogram |
| Min | Minimum of FHR histogram |
| Max | Maximum of FHR histogram |
| Nmax | Number of histogram peaks |
| Nzeros | Number of histogram zeroes |
| Mode | Histogram mode |
| Median | Histogram median |
| Variance | Histogram variance |
| Tendency | Histogram tendency |
| NSP | Fetal state class code (N=Normal, S=Suspected, P=Pathological) |
FHR: Fetal heart rate
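The 21 attributes above form the feature set fed to each classifier, with NSP as the target label. A minimal sketch of that setup is below; since the study's CTG dataset is not reproduced here, it uses randomly generated stand-in data of the same shape (21 features, 3 classes), so the accuracy it reports is illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for the CTG table: 21 feature columns (LB, AC, FM, ...)
# and a 3-class NSP label (0=Normal, 1=Suspect, 2=Pathological).
X = rng.normal(size=(300, 21))
y = rng.integers(0, 3, size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Random forest was one of the best-performing models in the study's tables
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)
```

On the real CTG data the features would be read from the recordings rather than sampled randomly, but the fit/score workflow is the same.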
Comparison of machine learning models on training data
| ML model | Precision (N) | Precision (S) | Precision (P) | Recall (N) | Recall (S) | Recall (P) | F1 score (N) | F1 score (S) | F1 score (P) |
|---|---|---|---|---|---|---|---|---|---|
| MLP | 0.88 | 0.87 | 0.94 | 0.87 | 0.84 | 0.96 | 0.87 | 0.85 | 0.95 |
| XGBoost classifier | 0.99 | 0.96 | 0.996 | 0.97 | 0.987 | 0.992 | 0.976 | 0.975 | 0.994 |
| Decision tree | 0.998 | 1 | 1 | 1 | 0.998 | 1 | 0.999 | 0.999 | 1 |
| Random forest | 0.992 | 0.989 | 0.997 | 0.989 | 0.992 | 0.996 | 0.99 | 0.991 | 0.997 |
| Logistic regression | 0.87 | 0.77 | 0.88 | 0.84 | 0.79 | 0.88 | 0.86 | 0.79 | 0.88 |
| SVM linear kernel | 0.9 | 0.8 | 0.89 | 0.85 | 0.83 | 0.91 | 0.87 | 0.81 | 0.9 |
| SVM RBF kernel | 0.98 | 0.92 | 0.99 | 0.92 | 0.97 | 0.99 | 0.95 | 0.94 | 0.984 |
| KNN | 0.995 | 0.95 | 0.99 | 0.95 | 0.993 | 0.995 | 0.97 | 0.97 | 0.993 |
| Naïve Bayes | 0.88 | 0.66 | 0.86 | 0.76 | 0.88 | 0.68 | 0.82 | 0.75 | 0.76 |
| AdaBoost | 0.86 | 0.88 | 0.988 | 0.89 | 0.88 | 0.95 | 0.87 | 0.88 | 0.97 |
N: Normal state; S: Suspect state; P: Pathological state; MLP: Multilayer perceptron; SVM: Support vector machine; RBF: Radial basis function; KNN: K-nearest neighbors; ML: Machine learning
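The per-class precision, recall, and F1 values in these tables can be computed with scikit-learn's `precision_recall_fscore_support`, which returns one value per label. A small sketch with toy labels (illustrative only, not taken from the study):

```python
from sklearn.metrics import precision_recall_fscore_support

# Toy predictions for the three fetal states N/S/P
y_true = ["N", "N", "S", "P", "N", "S"]
y_pred = ["N", "S", "S", "P", "N", "S"]

# Arrays are ordered as N, S, P, matching the table columns
prec, rec, f1, support = precision_recall_fscore_support(
    y_true, y_pred, labels=["N", "S", "P"], zero_division=0
)
```

Here precision for N is 1.0 (both predicted N are correct) while recall for N is 2/3 (one of three true N was mislabeled S), mirroring how each table cell pairs a metric with a class.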
Comparison of machine learning models on testing data
| ML model | Precision (N) | Precision (S) | Precision (P) | Recall (N) | Recall (S) | Recall (P) | F1 score (N) | F1 score (S) | F1 score (P) |
|---|---|---|---|---|---|---|---|---|---|
| MLP | 0.96 | 0.52 | 0.7 | 0.85 | 0.72 | 0.89 | 0.9 | 0.6 | 0.77 |
| XGBoost classifier | 0.98 | 0.73 | 0.92 | 0.94 | 0.88 | 0.92 | 0.96 | 0.8 | 0.92 |
| Decision tree | 0.96 | 0.74 | 0.87 | 0.95 | 0.74 | 0.92 | 0.95 | 0.74 | 0.89 |
| Random forest | 0.96 | 0.73 | 0.86 | 0.95 | 0.78 | 0.88 | 0.95 | 0.75 | 0.87 |
| Logistic regression | 0.96 | 0.48 | 0.64 | 0.84 | 0.75 | 0.84 | 0.9 | 0.58 | 0.72 |
| SVM linear kernel | 0.97 | 0.49 | 0.68 | 0.84 | 0.79 | 0.88 | 0.9 | 0.6 | 0.76 |
| SVM RBF kernel | 0.98 | 0.62 | 0.84 | 0.91 | 0.82 | 0.88 | 0.94 | 0.7 | 0.86 |
| KNN | 0.96 | 0.6 | 0.82 | 0.9 | 0.76 | 0.87 | 0.93 | 0.66 | 0.84 |
| Naïve Bayes | 0.97 | 0.42 | 0.46 | 0.76 | 0.85 | 0.67 | 0.85 | 0.56 | 0.54 |
| AdaBoost | 0.96 | 0.58 | 0.88 | 0.89 | 0.81 | 0.87 | 0.92 | 0.67 | 0.87 |
N: Normal state; S: Suspect state; P: Pathological state; MLP: Multilayer perceptron; SVM: Support vector machine; RBF: Radial basis function; KNN: K-nearest neighbors; ML: Machine learning
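Comparing the two tables shows a clear train-test gap: the decision tree scores near 1.0 on training data but drops to 0.74-0.95 on testing data, a classic sign of overfitting. A minimal sketch of measuring that gap, again on synthetic stand-in data since the study's dataset is not reproduced here:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
# Synthetic stand-in: 21 CTG-style features, 3-class NSP label
X = rng.normal(size=(400, 21))
y = rng.integers(0, 3, size=400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# An unconstrained decision tree grows until it memorizes the training set
tree = DecisionTreeClassifier(random_state=1).fit(X_tr, y_tr)
train_acc = tree.score(X_tr, y_tr)
test_acc = tree.score(X_te, y_te)
gap = train_acc - test_acc  # large gap indicates overfitting
```

Reporting both tables, as this study does, is what makes the overfitting of the tree-based models visible.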
Figure 1: Overall accuracy of the different models on the training and testing data