Razieh Sheibani, Elham Nikookar, Seyed Enayatollah Alavi.
Abstract
BACKGROUND: Parkinson's disease (PD) is the most common neurodegenerative disorder after Alzheimer's disease. Unfortunately, there is no specific test, such as electroencephalography or a blood test, for diagnosing the disease. According to previous studies, about 90% of people with PD have some type of voice abnormality. Therefore, voice measurements can be used to detect the disease.
Keywords: Classification; ensemble learning; medical diagnostics; Parkinson's disease; voice measurements
Year: 2019 PMID: 31737550 PMCID: PMC6839436 DOI: 10.4103/jmss.JMSS_57_18
Source DB: PubMed Journal: J Med Signals Sens ISSN: 2228-7477
Figure 1: Schematic illustration of the proposed method. MLP: Multilayer perceptron, DT: Decision tree, NB: Naive Bayes
Description of performance evaluation metrics
| Metric | Formula | Description |
|---|---|---|
| Sensitivity | TP/(TP+FN) | TP refers to cases with PD label that are correctly classified as PD |
| Accuracy | (TP+TN)/(TP+TN+FP+FN) | Proportion of all cases, PD and healthy, that are correctly classified |
| Specificity | TN/(TN+FP) | TN refers to cases with healthy label that are correctly classified as healthy controls |
PD – Parkinson’s disease; TP – True positive; TN – True negative; FP – False positive; FN – False negative
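The three metrics in the table follow directly from confusion-matrix counts. A minimal sketch (the example counts are illustrative, not taken from the paper):

```python
# Standard definitions of the evaluation metrics, computed from
# confusion-matrix counts: TP, TN, FP, FN.
def sensitivity(tp, fn):
    # Fraction of PD cases correctly classified as PD.
    return tp / (tp + fn)

def specificity(tn, fp):
    # Fraction of healthy controls correctly classified as healthy.
    return tn / (tn + fp)

def accuracy(tp, tn, fp, fn):
    # Fraction of all cases classified correctly.
    return (tp + tn) / (tp + tn + fp + fn)

# Illustrative example: 22 of 24 PD cases and 6 of 8 controls correct.
print(sensitivity(22, 2))      # ≈ 0.917
print(specificity(6, 2))       # 0.75
print(accuracy(22, 6, 2, 2))   # 0.875
```

Note that with an imbalanced test set (more PD cases than controls, as is typical for this data), accuracy alone can look good even when specificity is poor, which is why the paper reports all three.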
Description of voice frequency characteristics
| Name | Description |
|---|---|
| MDVP: Fo (Hz) | Average vocal fundamental frequency |
| MDVP: Fhi (Hz) | Maximum vocal fundamental frequency |
| MDVP: Flo (Hz) | Minimum vocal fundamental frequency |
| MDVP: Jitter (%) | Fundamental frequency variation measures |
| MDVP: Shimmer | Amplitude variation measures |
| NHR | Ratio of noise to tonal component measures |
| RPDE | Nonlinear dynamical complexity measures |
| DFA | Signal fractal scaling exponent |
| Spread1 | Nonlinear measures of fundamental frequency variation |
| Status | Health status |
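The feature names above match the columns of the UCI "Parkinsons" voice data set (an assumption here; the paper does not restate the source in this excerpt). A minimal sketch of separating the voice features from the `status` label, using a toy two-row frame with illustrative values:

```python
import pandas as pd

# Toy frame with a subset of the columns; the full data set has many
# more feature columns plus a per-recording subject identifier ('name').
df = pd.DataFrame({
    "name": ["S01", "S02"],
    "MDVP:Fo(Hz)": [119.992, 197.076],
    "MDVP:Jitter(%)": [0.00784, 0.00168],
    "status": [1, 0],   # 1 = Parkinson's, 0 = healthy control
})

def split_features(frame):
    """Separate voice features (X) from the health-status label (y)."""
    y = frame["status"]
    # Drop the label and the non-predictive subject ID, if present.
    X = frame.drop(columns=["status", "name"], errors="ignore")
    return X, y

X, y = split_features(df)
```

Dropping the subject identifier matters: leaving it in would let a classifier memorize subjects rather than learn voice characteristics.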
Accuracy measures of applying internal classifiers
| | k-NN (…) | k-NN (…) | k-NN (…) | SVM | DT | NB |
|---|---|---|---|---|---|---|
| D1 | 90.6 | 62.5 | 81.2 | 81.2 | 87.5 | 87.5 |
| D2 | 90.6 | 78.1 | 78.1 | 84.3 | 87.5 | 81.2 |
| D3 | 87.5 | 84.3 | 81.2 | 90.6 | 93.7 | 81.2 |
| D4 | 93.7 | 75.0 | 81.2 | 84.3 | 90.6 | 81.2 |
| D5 | 78.1 | 75.0 | 78.1 | 84.3 | 93.7 | 78.1 |
| D6 | 87.5 | 75.0 | 78.1 | 75.0 | 78.1 | 81.2 |
SVM – Support vector machine; DT – Decision tree; NB – Naive Bayes; k-NN – k-nearest neighbors
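The internal classifiers compared in these tables can be sketched with scikit-learn. This is a hedged outline, not the authors' exact configuration: the truncated headers suggest three k-NN variants with different parameters that are not recoverable from this excerpt, so a single `k` stands in for them, and all other hyper-parameters are library defaults.

```python
# Sketch of the four internal classifier families from the tables:
# k-NN, SVM, decision tree (DT), and naive Bayes (NB).
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB

def internal_classifiers(k=3):
    """Build one instance of each internal classifier family."""
    return {
        "k-NN": KNeighborsClassifier(n_neighbors=k),
        "SVM": SVC(),
        "DT": DecisionTreeClassifier(random_state=0),
        "NB": GaussianNB(),
    }

# Typical use: fit each on a training split (e.g. one of D1-D6) and
# score on a held-out split.
# for name, clf in internal_classifiers().items():
#     clf.fit(X_train, y_train)
#     print(name, clf.score(X_test, y_test))
```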
Specificity measures of applying internal classifiers
| | k-NN (…) | k-NN (…) | k-NN (…) | SVM | DT | NB |
|---|---|---|---|---|---|---|
| D1 | 87.5 | 50.0 | 62.5 | 75.0 | 87.5 | 75.0 |
| D2 | 75.0 | 75.0 | 50.0 | 75.0 | 75.0 | 75.0 |
| D3 | 75.0 | 62.5 | 50.0 | 75.0 | 75.0 | 87.5 |
| D4 | 87.5 | 62.5 | 62.5 | 75.0 | 87.5 | 98.0 |
| D5 | 75.0 | 62.5 | 50.0 | 75.0 | 75.0 | 87.5 |
| D6 | 75.0 | 75.0 | 62.5 | 50.0 | 87.5 | 87.5 |
SVM – Support vector machine; DT – Decision tree; NB – Naive Bayes; k-NN – k-nearest neighbors
Accuracy measures of applying ultimate classifiers
| | k-NN (…) | k-NN (…) | k-NN (…) | SVM | DT | NB |
|---|---|---|---|---|---|---|
| MLP | 78.1 | 78.1 | 71.8 | 90.6 | 84.3 | 90.6 |
| AB | 78.1 | 78.1 | 65.6 | 87.5 | 84.3 | 90.6 |
| Voting | 75.0 | 75.0 | 75.0 | 75.0 | 75.0 | 75.0 |
| RF | 87.5 | 81.2 | 71.8 | 84.3 | 87.5 | 87.5 |
MLP – Multilayer perceptron; AB – AdaBoost; RF – Random forest; SVM – Support vector machine; DT – Decision tree; NB – Naive Bayes; k-NN – k-nearest neighbors
Specificity measures of applying ultimate classifiers
| | k-NN (…) | k-NN (…) | k-NN (…) | SVM | DT | NB |
|---|---|---|---|---|---|---|
| MLP | 75.0 | 75.0 | 37.5 | 75.0 | 75.0 | 75.0 |
| AB | 75.0 | 62.5 | 37.5 | 75.0 | 75.0 | 75.0 |
| Voting | 60.0 | 60.0 | 60.0 | 60.0 | 60.0 | 60.0 |
| RF | 75.0 | 75.0 | 37.5 | 75.0 | 75.0 | 75.0 |
MLP – Multilayer perceptron; AB – AdaBoost; RF – Random forest; SVM – Support vector machine; DT – Decision tree; NB – Naive Bayes; k-NN – k-nearest neighbors
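The two-stage design the tables imply (internal classifiers whose outputs are combined by an "ultimate" classifier such as MLP, AdaBoost, voting, or random forest) resembles stacking. A hedged sketch with scikit-learn's `StackingClassifier`; the choice of base learners, the random forest as final estimator, and the exact wiring of the D1-D6 models are assumptions here, not the authors' pipeline.

```python
# Two-stage ensemble sketch: internal classifiers feed an ultimate
# classifier (a random forest) in a stacking arrangement.
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

internal = [
    ("knn", KNeighborsClassifier(n_neighbors=3)),
    ("dt", DecisionTreeClassifier(random_state=0)),
    ("nb", GaussianNB()),
]
ensemble = StackingClassifier(
    estimators=internal,
    final_estimator=RandomForestClassifier(random_state=0),
)
# Typical use:
# ensemble.fit(X_train, y_train)
# ensemble.score(X_test, y_test)
```

`StackingClassifier` trains the final estimator on cross-validated predictions of the base learners, which avoids leaking the base learners' training fit into the second stage.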
Chart 1: Comparison between accuracy measures of the ultimate classification stage
Chart 3: Comparison between specificity measures of the ultimate classification stage
Comparison between the proposed method and other works
| Author | Method | Accuracy (%) |
|---|---|---|
| Gil and Johnson | ANN and SVM | 90 |
| Ene | IS, MCS and HS | 81 |
| Ullah Khan | k-NN, AB and RF | 90.2 |
| Khemphila and Boonjing | ANN | 83.3 |
| Ozcift and Gulten[ | Classifier ensemble construction with a rotation forest approach | 87.13 |
| Proposed method | Ensemble based | 90.6 |
ANN – Artificial neural network; SVM – Support vector machine; IS – Incremental search; MCS – Monte Carlo search; HS – Hybrid search; AB – AdaBoost; RF – Random forest; k-NN – k-nearest neighbors
Sensitivity measures of applying internal classifiers
| | k-NN (…) | k-NN (…) | k-NN (…) | SVM | DT | NB |
|---|---|---|---|---|---|---|
| D1 | 91.6 | 66.6 | 87.5 | 83.3 | 87.5 | 91.6 |
| D2 | 95.8 | 79.1 | 87.5 | 87.5 | 91.6 | 83.3 |
| D3 | 91.6 | 91.6 | 91.6 | 95.8 | 98.0 | 79.1 |
| D4 | 95.8 | 79.1 | 87.5 | 87.5 | 91.6 | 75.0 |
| D5 | 79.1 | 79.1 | 87.5 | 87.5 | 98.0 | 75.0 |
| D6 | 91.6 | 75.0 | 83.3 | 83.3 | 75.0 | 79.1 |
SVM – Support vector machine; DT – Decision tree; NB – Naive Bayes; k-NN – k-nearest neighbors
Sensitivity measures of applying ultimate classifiers
| | k-NN (…) | k-NN (…) | k-NN (…) | SVM | DT | NB |
|---|---|---|---|---|---|---|
| MLP | 79.1 | 79.1 | 83.3 | 95.8 | 87.5 | 95.8 |
| AB | 79.1 | 83.3 | 75.0 | 91.6 | 87.5 | 95.8 |
| Voting | 95.0 | 95.0 | 95.0 | 95.0 | 95.0 | 95.0 |
| RF | 91.6 | 83.3 | 83.3 | 83.6 | 91.6 | 91.6 |
MLP – Multilayer perceptron; AB – AdaBoost; RF – Random forest; SVM – Support vector machine; DT – Decision tree; NB – Naive Bayes; k-NN – k-nearest neighbors