| Literature DB >> 30853905 |
Hyeonseok Kim, Natsue Yoshimura, Yasuharu Koike.
Abstract
Many previous studies on brain-machine interfaces (BMIs) have focused on electroencephalography (EEG) signals elicited during motor-command execution to generate device commands. However, exploiting pre-execution brain activity related to movement intention could improve the practical applicability of BMIs. Therefore, in this study we investigated whether EEG signals occurring before movement execution could be used to classify movement intention. Six subjects performed reaching tasks that required them to move a cursor to one of four targets distributed horizontally and vertically from the center. Using independent components of EEG acquired during a premovement phase, two-class classifications were performed for left vs. right trials and top vs. bottom trials using a support vector machine. Instructions were presented visually (test condition) and aurally (control condition). In the test condition, accuracy for a single window was about 75%, and it increased to 85% in classification using two windows. In the control condition, accuracy for a single window was about 73%, and it increased to 80% in classification using two windows. Classification results showed that a combination of two windows from different time intervals during the premovement phase improved classification performance in both conditions compared to single-window classification. By categorizing the independent components according to spatial pattern, we found that information depending on the modality can improve classification performance. We confirmed that EEG signals occurring during movement preparation can be used to control a BMI.
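The two-window result in the abstract amounts to concatenating features from two premovement windows before feeding a support vector machine. A minimal sketch of that idea, using scikit-learn on synthetic data (the feature values, trial count, and feature dimensionality here are illustrative assumptions, not the paper's data; the window labels F and B follow the paper's naming):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical setup: 80 trials, 4 features per window extracted from
# independent components in two premovement windows (F and B).
n_trials = 80
labels = rng.integers(0, 2, n_trials)  # 0 = left, 1 = right
feat_F = rng.normal(labels[:, None] * 0.8, 1.0, (n_trials, 4))
feat_B = rng.normal(labels[:, None] * 0.8, 1.0, (n_trials, 4))

# Single-window classification (window F alone).
acc_single = cross_val_score(SVC(kernel="linear"), feat_F, labels, cv=5).mean()

# Two-window classification: concatenate features from windows F and B,
# mirroring the paper's FB combination.
feat_FB = np.hstack([feat_F, feat_B])
acc_combined = cross_val_score(SVC(kernel="linear"), feat_FB, labels, cv=5).mean()

print(f"F alone: {acc_single:.2f}, F+B combined: {acc_combined:.2f}")
```

The concatenation gives the classifier access to temporal information from two stages of movement preparation at once, which is the mechanism the paper credits for the accuracy gain.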
Keywords: brain-machine interface (BMI); classification; electroencephalography (EEG); independent component analysis; premovement
Year: 2019 PMID: 30853905 PMCID: PMC6395380 DOI: 10.3389/fnhum.2019.00063
Source DB: PubMed Journal: Front Hum Neurosci ISSN: 1662-5161 Impact factor: 3.169
Figure 1. Experimental design. A target appeared at one of four positions distributed 4 cm from the center in the horizontal and vertical directions. Each trial consisted of three phases. When a trial started, nothing appeared on the screen (Standby), and subjects waited for the next phase. Next, a cursor and target appeared on the screen, and subjects prepared for movement execution (Premovement). When the color of the markers changed to black, subjects moved the cursor from the center to the target using the touchpad (Execution). Three windows from the premovement phase were used for analysis (F: window starting at onset of the premovement phase, M: window starting 1 s after onset of the premovement phase, B: window ending at the start of the execution phase). Four sizes were used for each window (0.5 s, 1.0 s, 1.5 s, and 2.0 s).
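The F, M, and B windows in Figure 1 can be sliced out of a premovement epoch by simple indexing. A sketch, assuming a 1000 Hz sampling rate and a 3 s premovement phase (neither value is given in the caption; both are illustrative assumptions):

```python
import numpy as np

fs = 1000          # sampling rate in Hz (assumed)
premove_dur = 3.0  # premovement phase length in s (assumed)
epoch = np.random.randn(int(fs * premove_dur))  # one component's premovement signal

def window(sig, start_s, size_s, fs):
    """Slice a window of size_s seconds starting start_s seconds into the signal."""
    i0 = int(start_s * fs)
    return sig[i0:i0 + int(size_s * fs)]

size = 1.0  # one of the four window sizes: 0.5, 1.0, 1.5, or 2.0 s
w_F = window(epoch, 0.0, size, fs)                 # F: starts at phase onset
w_M = window(epoch, 1.0, size, fs)                 # M: starts 1 s after onset
w_B = window(epoch, premove_dur - size, size, fs)  # B: ends at the execution phase
```

Note that F and M overlap for the larger window sizes, and B is anchored to the end of the phase rather than to its onset.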
Figure 2. Independent components regarded as eye movement artifacts for subject 1.
Figure 3. Scalp maps of independent components categorized according to area of peak activity for all subjects.
Figure 4. Scalp maps of independent components categorized according to area of peak activity for all subjects (control).
Classification accuracies for left vs. right.
Accuracy [%] by window position:

| Subject | F | M | B | FB | MB |
|---|---|---|---|---|---|
| S1 | 73.72 (45.51) | 73.72 (51.15) | 73.72 (54.56) | 85.90 (52.38) | 85.26 (53.00) |
| S2 | 81.61 (49.46) | 75.86 (49.07) | 77.01 (48.87) | 89.66 (48.01) | 88.51 (54.59) |
| S3 | 70.91 (50.74) | 67.88 (50.58) | 75.15 (50.27) | 85.45 (48.42) | 81.21 (51.35) |
| S4 | 77.08 (48.97) | 77.08 (50.27) | 72.92 (49.87) | 84.03 (49.78) | 86.81 (50.74) |
| S5 | 84.31 (49.05) | 77.45 (49.61) | 87.25 (47.42) | 95.10 (46.28) | 97.06 (51.03) |
| S6 | 71.11 (48.80) | 70.00 (49.40) | 71.11 (50.41) | 78.33 (47.26) | 76.67 (51.47) |
| Mean | 76.46 ± 5.58 | 73.67 ± 3.94 | 76.19 ± 5.77 | 86.41 ± 5.63 | 85.92 ± 6.93 |
Values outside parentheses are means of the three highest classification accuracies obtained among all independent component pairs. Values in parentheses are means of the accuracies obtained when five shuffled inputs were fed to the model that produced the corresponding values outside parentheses. S indicates subject.
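The tabulated values are top-3 means over all independent-component pairs. A minimal sketch of that aggregation (the pair accuracies below are made up for illustration and do not come from the paper):

```python
import numpy as np

# Hypothetical accuracies for all evaluated independent-component pairs
# in one window; the tables report the mean of the three highest values.
pair_accuracies = np.array([61.5, 73.7, 70.2, 73.7, 68.9, 73.7])

top3_mean = np.sort(pair_accuracies)[-3:].mean()
print(f"{top3_mean:.2f}")
```

The shuffled-baseline values in parentheses hover near 50%, as expected for a chance-level two-class classifier, which supports the claim that the top-3 means reflect real class information.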
Classification accuracies for left vs. right (control).
Accuracy [%] by window position:

| Subject | F | M | B | FB | MB |
|---|---|---|---|---|---|
| S1 | 68.28 (50.74) | 72.04 (49.73) | 73.66 (51.48) | 75.81 (49.37) | 79.03 (50.33) |
| S2 | 71.43 (48.89) | 70.37 (51.89) | 75.66 (48.53) | 83.07 (48.68) | 81.48 (51.12) |
| S3 | 70.98 (50.47) | 72.55 (50.83) | 68.24 (49.17) | 76.86 (51.93) | 76.86 (46.67) |
| S4 | 71.79 (49.06) | 70.94 (49.34) | 67.09 (50.21) | 73.93 (51.73) | 70.94 (51.21) |
| S5 | 73.56 (48.84) | 72.99 (52.57) | 71.84 (50.13) | 82.76 (49.14) | 84.48 (48.99) |
| S6 | 83.33 (50.32) | 82.29 (49.23) | 85.42 (49.97) | 90.63 (47.46) | 92.71 (47.35) |
| Mean | 73.23 ± 5.24 | 73.53 ± 4.40 | 73.65 ± 6.60 | 80.51 ± 6.21 | 80.92 ± 7.37 |
Values outside parentheses are means of the three highest classification accuracies obtained among all independent component pairs. Values in parentheses are means of the accuracies obtained when five shuffled inputs were fed to the model that produced the corresponding values outside parentheses. S indicates subject.
Classification accuracies for top vs. bottom.
Accuracy [%] by window position:

| Subject | F | M | B | FB | MB |
|---|---|---|---|---|---|
| S1 | 70.99 (54.00) | 72.84 (46.34) | 74.07 (50.33) | 80.86 (54.11) | 88.27 (52.55) |
| S2 | 86.21 (52.03) | 83.91 (48.89) | 79.31 (46.24) | 93.10 (49.73) | 89.66 (49.12) |
| S3 | 75.33 (50.72) | 74.00 (52.42) | 71.33 (51.31) | 83.33 (53.19) | 86.00 (52.36) |
| S4 | 74.67 (52.03) | 74.67 (50.02) | 72.00 (50.24) | 86.67 (50.63) | 82.67 (52.14) |
| S5 | 77.78 (47.31) | 76.92 (49.44) | 80.34 (51.89) | 93.16 (49.82) | 90.60 (49.65) |
| S6 | 75.69 (54.41) | 73.61 (46.44) | 70.83 (52.62) | 81.25 (54.42) | 75.69 (49.55) |
| Mean | 76.78 ± 5.12 | 75.99 ± 4.12 | 74.65 ± 4.17 | 86.40 ± 5.61 | 85.48 ± 5.58 |
Values outside parentheses are means of the three highest classification accuracies obtained among all independent component pairs. Values in parentheses are means of the accuracies obtained when five shuffled inputs were fed to the model that produced the corresponding values outside parentheses. S indicates subject.
Classification accuracies for top vs. bottom (control).
Accuracy [%] by window position:

| Subject | F | M | B | FB | MB |
|---|---|---|---|---|---|
| S1 | 69.23 (48.98) | 69.87 (45.88) | 75.64 (49.33) | 78.21 (49.07) | 80.77 (49.94) |
| S2 | 72.58 (49.48) | 71.51 (50.07) | 69.89 (50.21) | 77.42 (48.13) | 77.42 (48.95) |
| S3 | 77.27 (44.44) | 75.76 (47.70) | 72.22 (48.43) | 86.36 (49.56) | 86.36 (47.88) |
| S4 | 67.98 (50.62) | 69.30 (48.55) | 69.74 (49.71) | 70.18 (51.43) | 74.12 (49.31) |
| S5 | 74.81 (50.34) | 76.30 (48.11) | 77.04 (50.01) | 83.70 (48.41) | 82.96 (48.83) |
| S6 | 82.76 (51.92) | 81.61 (45.62) | 82.76 (51.25) | 91.95 (48.33) | 93.10 (51.49) |
| Mean | 74.11 ± 5.46 | 74.06 ± 4.72 | 74.55 ± 5.00 | 81.30 ± 7.66 | 82.46 ± 6.73 |
Values outside parentheses are means of the three highest classification accuracies obtained among all independent component pairs. Values in parentheses are means of the accuracies obtained when five shuffled inputs were fed to the model that produced the corresponding values outside parentheses. S indicates subject.
Figure 5. Classification accuracies using independent components categorized by spatial pattern. Values depicted are means of the highest accuracies obtained in left vs. right and top vs. bottom classifications, averaged across subjects.