D. Alamedine, M. Khalil, C. Marque.
Abstract
Abstract
Numerous linear and nonlinear features have been extracted from the electrohysterogram (EHG) in order to classify labor and pregnancy contractions. As a result, the number of available features is now very large. The goal of this study is to reduce the number of features by selecting only those relevant to the classification problem. This paper presents three feature subset selection methods that can be applied to choose the best subsets for classifying labor and pregnancy contractions: an algorithm using the Jeffrey divergence (JD) distance, a sequential forward selection (SFS) algorithm, and a binary particle swarm optimization (BPSO) algorithm. The last two methods are classifier-based and were tested with three types of classifiers. These methods allowed us to identify common features that are relevant for contraction classification.
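Sequential forward selection, one of the three methods compared, is a greedy wrapper loop: starting from an empty subset, it repeatedly adds the single feature that most reduces the classification error. The sketch below is a generic illustration of that loop, not the authors' implementation; `toy_error` is a made-up stand-in for a classifier's error rate.

```python
def sfs(features, error_fn, n_select):
    """Greedy sequential forward selection: at each step, add the single
    feature that most reduces the error of the current subset."""
    selected, remaining = [], list(features)
    best_err = error_fn(selected)
    while remaining and len(selected) < n_select:
        cand = min(remaining, key=lambda f: error_fn(selected + [f]))
        cand_err = error_fn(selected + [cand])
        if cand_err >= best_err:
            break  # no remaining feature reduces the error; stop early
        selected.append(cand)
        remaining.remove(cand)
        best_err = cand_err
    return selected

# Hypothetical error function: features 0 and 2 are the informative ones,
# and each extra feature carries a small complexity penalty.
def toy_error(subset):
    hits = len({0, 2} & set(subset))
    return 0.5 - 0.2 * hits + 0.01 * len(subset)

print(sfs(range(5), toy_error, n_select=3))  # → [0, 2]
```

In the paper this wrapper is driven by a real classifier (QDA, LDA, or KNN), so the selected subset depends on which classifier supplies `error_fn`.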
Year: 2013 PMID: 24454536 PMCID: PMC3884970 DOI: 10.1155/2013/485684
Source DB: PubMed Journal: Comput Math Methods Med ISSN: 1748-670X Impact factor: 2.238
Figure 1. (a) Position of the 16 monopolar electrodes [4]. (b) Vbi (i = 1–12) represent the 12 calculated bipolar signals.
Figure 2. Wavelet decomposition.
Mean ± standard deviation (STD) of the parameters and results of the Gaussianity test.
| Parameter | Mean ± STD (pregnancy) | Gaussian | Mean ± STD (labor) | Gaussian |
|---|---|---|---|---|
|  | 0.001 ± 0.01 | N | −0.0001 ± 0.01 | N |
|  | 5.47 ± 0.63 | Y | 5.33 ± 0.52 | Y |
|  | 1.17 ± 0.19 | Y | 1.23 ± 0.18 | Y |
|  | 0.61 ± 0.30 | Y | 0.71 ± 0.29 | Y |
|  | 0.0084 ± 0.0076 | N | 0.0118 ± 0.0114 | Y |
|  | 0.03 ± 0.02 | N | 0.04 ± 0.03 | Y |
|  | 0.07 ± 0.04 | N | 0.13 ± 0.09 | Y |
|  | 0.27 ± 0.11 | N | 0.30 ± 0.10 | Y |
|  | 0.48 ± 0.11 | Y | 0.41 ± 0.13 | Y |
|  | 0.14 ± 0.02 | N | 0.15 ± 0.02 | Y |
|  | 0.16 ± 0.03 | N | 0.17 ± 0.04 | N |
|  | 0.18 ± 0.03 | N | 0.20 ± 0.05 | N |
|  | 0.20 ± 0.04 | N | 0.23 ± 0.07 | N |
|  | 0.22 ± 0.05 | N | 0.27 ± 0.08 | N |
|  | 0.25 ± 0.06 | N | 0.32 ± 0.10 | Y |
|  | 0.29 ± 0.07 | N | 0.37 ± 0.10 | Y |
|  | 0.36 ± 0.09 | N | 0.45 ± 0.12 | Y |
|  | 0.52 ± 0.18 | N | 0.64 ± 0.22 | Y |
|  | 0.30 ± 0.06 | N | 0.35 ± 0.08 | Y |
|  | 0.18 ± 0.07 | N | 0.21 ± 0.10 | N |
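The Gaussian column above reports a per-parameter normality check on each population. The record does not state which test the authors used, so the sketch below illustrates the idea with a Jarque–Bera check, which compares sample skewness and excess kurtosis against their Gaussian values; this is an assumption for illustration, not the paper's method.

```python
import math, random

def looks_gaussian(x, crit=5.99):
    """Jarque-Bera normality check: JB ~ chi2(2) under normality, so
    JB < 5.99 means 'do not reject Gaussian' at the 5% level."""
    n = len(x)
    m = sum(x) / n
    m2 = sum((v - m) ** 2 for v in x) / n
    m3 = sum((v - m) ** 3 for v in x) / n
    m4 = sum((v - m) ** 4 for v in x) / n
    skew = m3 / m2 ** 1.5           # 0 for a Gaussian
    kurt = m4 / m2 ** 2             # 3 for a Gaussian
    jb = n / 6.0 * (skew ** 2 + (kurt - 3.0) ** 2 / 4.0)
    return jb < crit                # True = plausibly Gaussian

rng = random.Random(1)
gauss = [rng.gauss(0, 1) for _ in range(500)]
skewed = [rng.expovariate(1.0) for _ in range(500)]  # strongly non-Gaussian
print(looks_gaussian(gauss), looks_gaussian(skewed))
```

A skewed distribution (like the exponential above, with theoretical skewness 2) produces a very large JB statistic and fails the check.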
Figure 3. Color vector representing the distribution of distances between parameters.
Figure 4. Selection vector representing the best parameters for discriminating between pregnancy and labor.
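The JD selection illustrated above ranks each parameter by the Jeffrey divergence between its pregnancy and labor distributions. Jeffrey divergence is the symmetrized Kullback–Leibler divergence; a minimal sketch on discrete histograms follows, where the `eps` smoothing against empty bins is an illustrative choice, not taken from the paper.

```python
import math

def jeffrey_divergence(p, q, eps=1e-12):
    """Symmetrized KL divergence between two discrete distributions
    (histograms normalized to sum to 1): J(p,q) = KL(p||q) + KL(q||p),
    which simplifies to sum((p_i - q_i) * log(p_i / q_i))."""
    return sum((pi - qi) * math.log((pi + eps) / (qi + eps))
               for pi, qi in zip(p, q))

# Identical histograms give 0; shifted mass gives a positive distance.
p = [0.5, 0.3, 0.2]
q = [0.2, 0.3, 0.5]
print(jeffrey_divergence(p, p))       # → 0.0
print(jeffrey_divergence(p, q) > 0)   # → True
```

Because the measure is symmetric in p and q, it behaves as a distance-like score: parameters whose two class-conditional histograms are far apart score high and are retained.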
Results of BPSO and SFS on synthetic data. The features marked in bold correspond to the discriminating features.
| Classifier | BPSO (gbest with the best fitness) | SFS (feature combination with minimal error) |
|---|---|---|
| QDA | [ | [ |
| LDA | [1, | [ |
| KNN | [ | [2, |
Comparison between BPSO and SFS. The features common to the subsets obtained from BPSO and SFS with the three classifiers are marked in bold.
| Classifier | BPSO (gbest with the best fitness over 200 runs) | SFS (feature combination with minimal error) |
|---|---|---|
| QDA | LE, | TR, LE, SE, |
| LDA | SE, | SE, |
| KNN | LE, SE, | TR, LE, SE, |
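Binary PSO encodes each candidate subset as a bit mask over the features, and a sigmoid of each velocity component gives the probability that the corresponding bit is set. The sketch below is a minimal generic BPSO with illustrative toy parameters and a made-up fitness; the study's actual fitness is based on a classifier's error, and it reports the gbest over 200 runs.

```python
import math, random

def bpso(fitness, n_bits, n_particles=20, n_iters=50,
         w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal binary PSO (minimization): particles are bit masks; the
    sigmoid of the velocity is the probability that a bit is set to 1."""
    rng = random.Random(seed)
    pos = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(n_particles)]
    vel = [[0.0] * n_bits for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [fitness(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_fit[i])
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(n_bits):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                prob = 1.0 / (1.0 + math.exp(-vel[i][d]))  # sigmoid transfer
                pos[i][d] = 1 if rng.random() < prob else 0
            f = fitness(pos[i])
            if f < pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], f
                if f < gbest_fit:
                    gbest, gbest_fit = pos[i][:], f
    return gbest, gbest_fit

# Hypothetical fitness: bits 0 and 2 are the informative features; each
# selected feature also carries a small size penalty.
def fit(mask):
    return (mask[0] == 0) + (mask[2] == 0) + 0.05 * sum(mask)

best, best_fit = bpso(fit, n_bits=6)
print(best, best_fit)
```

With this fitness the swarm converges on masks that keep the two informative bits while pruning most of the rest, mirroring how a classifier-error fitness keeps discriminating EHG features.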
Percentage of correct classification for the selected feature subsets, using QDA.
| Selection method | Selected feature subset | Correct classification using QDA |
|---|---|---|
| JD |  | 79.95% |
| SFS with QDA | TR, LE, SE, VarEn, | 87.47% |
| SFS with LDA | SE, VarEn, | 83.71% |
| SFS with KNN | TR, LE, SE, VarEn, | 84.96% |
| BPSO with QDA | LE, VarEn, |  |
| BPSO with LDA | SE, VarEn, | 81.20% |
| BPSO with KNN | LE, SE, VarEn, | 86.22% |
Percentage of correct classification for the selected feature subsets, using LDA.
| Selection method | Selected feature subset | Correct classification using LDA |
|---|---|---|
| JD |  | 81.20% |
| SFS with QDA | TR, LE, SE, VarEn, |  |
| SFS with LDA | SE, VarEn, | 83.71% |
| SFS with KNN | TR, LE, SE, VarEn, |  |
| BPSO with QDA | LE, VarEn, | 82.46% |
| BPSO with LDA | SE, VarEn, | 83.71% |
| BPSO with KNN | LE, SE, VarEn, | 82.46% |
Percentage of correct classification for the selected feature subsets, using KNN.
| Selection method | Selected feature subset | Correct classification using KNN |
|---|---|---|
| JD |  | 78.70% |
| SFS with QDA | TR, LE, SE, VarEn, | 83.71% |
| SFS with LDA | SE, VarEn, | 81.20% |
| SFS with KNN | TR, LE, SE, VarEn, | 83.71% |
| BPSO with QDA | LE, VarEn, |  |
| BPSO with LDA | SE, VarEn, | 81.20% |
| BPSO with KNN | LE, SE, VarEn, | 84.96% |