Gabriel Martin Bellino, Luciano Schiaffino, Marisa Battisti, Juan Guerrero, Alfredo Rosado-Muñoz.
Abstract
Deep Brain Stimulation (DBS) of the Subthalamic Nucleus (STN) is the most widely used surgical treatment to improve motor skills in patients with Parkinson's Disease (PD) who do not respond adequately to pharmacological treatment or who suffer related side effects. During surgery for the implantation of a DBS system, signals are obtained through microelectrode recordings (MER) at different depths of the brain. These signals are analyzed by neurophysiologists to detect the entry into and exit from the STN region, as well as the optimal depth for electrode implantation. In the present work, a supervised classification model based on the K-nearest neighbour (KNN) algorithm is trained automatically on 18 temporal features of MER registers from 14 patients with PD, in order to provide a clinical support tool during DBS surgery. We investigate the effect of different standardizations of the generated database, the optimal choice of KNN configuration parameters, and the selection of features that maximize KNN performance. The results indicate that the KNN trained on data standardized per cerebral hemisphere and per patient presented the best performance, achieving an accuracy of 94.35% (p < 0.001). By using feature selection algorithms, it was possible to reach 93.5% accuracy with a subset of only six features, improving computation time for real-time processing.
Keywords: K-nearest neighbour (KNN) algorithm; Parkinson's disease; deep brain stimulation (DBS); feature selection; microelectrode registers (MER)
Year: 2019 PMID: 33267060 PMCID: PMC7514830 DOI: 10.3390/e21040346
Source DB: PubMed Journal: Entropy (Basel) ISSN: 1099-4300 Impact factor: 2.524
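The pipeline the abstract describes, z-score standardization computed separately per patient or per cerebral hemisphere, followed by a K-nearest-neighbour majority vote, can be sketched as below. This is an illustrative reconstruction, not the authors' code: `standardize_per_group` and `knn_predict` are hypothetical helper names, and the Euclidean metric and value of k are assumptions.

```python
import numpy as np

def standardize_per_group(X, groups):
    """Z-score each feature within each group (e.g. per patient or
    per cerebral hemisphere), as the abstract describes."""
    Xs = np.empty_like(X, dtype=float)
    for g in np.unique(groups):
        m = groups == g
        mu = X[m].mean(axis=0)
        sd = X[m].std(axis=0)
        Xs[m] = (X[m] - mu) / np.where(sd == 0, 1.0, sd)  # guard zero variance
    return Xs

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain K-nearest-neighbour majority vote with Euclidean distance."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)   # distances to all training rows
        nearest = y_train[np.argsort(d)[:k]]      # labels of the k closest
        preds.append(np.bincount(nearest).argmax())
    return np.array(preds)
```

Standardizing within each hemisphere/patient removes inter-subject amplitude differences before the distance computation, which is one plausible explanation for the gains the paper reports for KNN_PAT and KNN_HEM over plain KNN.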
Figure 1. On the left, a three-dimensional (3-D) view of the brain structure with the microelectrode recording (MER) trajectories to the target marked. On the right, neural activity recorded from different subcortical structures as the MER electrode descends into the brain.
Average values of the accuracy (ACC), specificity (ESP), sensitivity (SEN), area under the ROC curve (AUC), and diagnostic odds ratio (DOR) performance indices for the four versions of the proposed k-nearest neighbours (KNN) classifiers.
| KNN Version | ACC | ESP | SEN | AUC | DOR |
|---|---|---|---|---|---|
| KNN | 0.8194 ± 0.0074 | 0.7863 ± 0.0114 | 0.8499 ± 0.0084 | 0.9028 ± 0.0048 | 21.9095 ± 2.1969 |
| KNN_STA | 0.8563 ± 0.0058 | 0.8299 ± 0.0090 | 0.8807 ± 0.0067 | 0.9230 ± 0.0033 | 36.1316 ± 3.5980 |
| KNN_PAT | 0.9358 ± 0.0033 | 0.9344 ± 0.0041 | 0.9371 ± 0.0064 | 0.9761 ± 0.0021 | 213.8659 ± 24.6348 |
| KNN_HEM | 0.9435 ± 0.0022 | 0.9422 ± 0.0042 | 0.9446 ± 0.0049 | 0.9815 ± 0.0019 | 279.8760 ± 22.9661 |
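Reading DOR as the diagnostic odds ratio, (TP/FN)/(FP/TN), it can be recovered from sensitivity and specificity alone; a minimal sketch (the function name is mine, not the paper's):

```python
def diagnostic_odds_ratio(sen, esp):
    """DOR = (TP/FN) / (FP/TN) = sen * esp / ((1 - sen) * (1 - esp))."""
    return (sen * esp) / ((1.0 - sen) * (1.0 - esp))
```

Plugging in the KNN_HEM row (SEN = 0.9446, ESP = 0.9422) gives roughly 278, close to the tabulated 279.88 ± 22.97; an exact match is not expected, since the table presumably averages per-fold DOR values rather than computing DOR from the averaged SEN and ESP.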
Average training and validation times, in seconds, for the four versions of the proposed KNN classifiers.
| KNN Version | t_Train | t_Validation |
|---|---|---|
| KNN | 0.0514 ± 0.0579 | 0.4044 ± 0.0221 |
| KNN_STA | 0.0369 ± 0.0022 | 0.4093 ± 0.0306 |
| KNN_PAT | 0.0349 ± 0.0020 | 0.4037 ± 0.0296 |
| KNN_HEM | 0.0357 ± 0.0038 | 0.4028 ± 0.0283 |
Figure 2. ROC curves for the four versions of the KNN algorithm.
p-values of the Friedman and Nemenyi tests. In bold, statistically significant differences.

| KNN Versions Compared | ACC | ESP | SEN | AUC | DOR |
|---|---|---|---|---|---|
| Friedman Test | | | | | |
| KNN vs. KNN_STA | 0.1701 | 0.1701 | 0.1701 | 0.1701 | 0.1701 |
| KNN vs. KNN_PAT | | | | | |
| KNN vs. KNN_HEM | | | | | |
| KNN_STA vs. KNN_PAT | 0.1701 | 0.1701 | 0.1243 | 0.1701 | 0.1701 |
| KNN_STA vs. KNN_HEM | | | | | |
| KNN_PAT vs. KNN_HEM | 0.1701 | 0.1701 | 0.2945 | 0.1701 | 0.1701 |
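The Friedman test used above ranks the k classifiers within each repeated measurement and asks whether their mean ranks differ. A minimal sketch of the uncorrected chi-square statistic (ties ignored; this is a textbook formula, not the paper's implementation):

```python
import numpy as np

def friedman_statistic(scores):
    """Friedman chi-square for n repeated measurements (rows) of
    k classifiers (columns). Ties are not handled in this sketch."""
    n, k = scores.shape
    # rank within each row: rank 1 = smallest score
    ranks = scores.argsort(axis=1).argsort(axis=1) + 1
    Rj = ranks.sum(axis=0)                         # rank sum per classifier
    return 12.0 / (n * k * (k + 1)) * np.sum(Rj ** 2) - 3.0 * n * (k + 1)
```

When the resulting statistic is significant against a chi-square distribution with k - 1 degrees of freedom, a post-hoc test such as Nemenyi's is applied pairwise, as in the tables here.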
Features selected by each feature selection algorithm.
| RS | BS | FS | BBS |
|---|---|---|---|
| 4-CL | 1-VAB | 2-RMS | 4-CL |
| 5-TH | 2-RMS | 3-kur | 5-TH |
| 6-PK | 3-kur | 4-CL | 6-PK |
| 8-ZC | 4-CL | 5-TH | 8-ZC |
| 12-SC | 5-TH | 6-PK | 13-SMAD |
| 13-SMAD | 6-PK | 7-NE | 14-SCR |
| 14-SCR | 7-NE | 8-ZC | |
| | 8-ZC | 9-SBI | |
| | 9-SBI | 12-SC | |
| | 12-SC | 13-SMAD | |
| | 13-SMAD | 14-SCR | |
| | 14-SCR | | |
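Selections like FS's can be illustrated with greedy sequential forward selection, one plausible reading of "FS". Everything below is a hypothetical sketch, not the authors' implementation; `score_fn` is a user-supplied evaluator (e.g. cross-validated KNN accuracy on the candidate subset).

```python
import numpy as np

def forward_selection(X, y, score_fn, n_features):
    """Greedy sequential forward selection: repeatedly add the feature
    whose inclusion maximizes score_fn(X_subset, y)."""
    selected, remaining = [], list(range(X.shape[1]))
    while remaining and len(selected) < n_features:
        # evaluate each remaining feature added to the current subset
        best = max(remaining, key=lambda j: score_fn(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected
```

Backward selection (BS) is the mirror image, starting from all 18 features and repeatedly dropping the least useful one; both are greedy, so they can return different subsets, as the table shows.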
Performance measures for each model with feature selection. In bold, the highest values obtained for each indicator.
| Performance Measures | KNN + RS | KNN + BS | KNN + FS | KNN + BBS | KNN |
|---|---|---|---|---|---|
| Accuracy (%) | 93.43 ± 0.36 | | 95.42 ± 0.36 | 93.50 ± 0.34 | 94.35 ± 0.22 |
| Specificity (%) | 93.45 ± 0.55 | 95.21 ± 0.35 | | 93.27 ± 0.51 | 94.23 ± 0.42 |
| Sensitivity (%) | 93.40 ± 0.65 | | 95.60 ± 0.56 | 93.72 ± 0.55 | 94.46 ± 0.49 |
| AUC (%) | 97.50 ± 0.23 | | 98.67 ± 0.17 | 97.58 ± 0.19 | 98.15 ± 0.19 |
p-values of the Friedman and Nemenyi tests. The Nemenyi test compares the classifiers in pairs, including both the model with all features and the models resulting from feature selection, as described in Section 2.6. In bold, statistically significant differences.
| KNN Versions Compared | ACC | ESP | SEN | AUC |
|---|---|---|---|---|
| Friedman Test | | | | |
| KNN+RS vs. KNN+BS | | | | |
| KNN+RS vs. KNN+FS | | | | |
| KNN+RS vs. KNN+BBS | 0.996 | 0.974 | 0.952 | 0.875 |
| KNN+RS vs. KNN | 0.055 | 0.164 | 0.133 | |
| KNN+BS vs. KNN+FS | 0.989 | 1.000 | 0.975 | 0.999 |
| KNN+BS vs. KNN+BBS | | | | |
| KNN+BS vs. KNN | | 0.069 | | 0.118 |
| KNN+FS vs. KNN+BBS | | | | |
| KNN+FS vs. KNN | 0.153 | 0.094 | 0.074 | 0.065 |
| KNN+BBS vs. KNN | 0.134 | 0.035 | 0.485 | 0.251 |