Andreas Schwarz¹, Julia Brandstetter¹, Joana Pereira¹, Gernot R. Müller-Putz².
Abstract
For brain-computer interfaces (BCIs), system calibration is a lengthy but necessary process for successful operation. Co-adaptive BCIs aim to shorten training and foster positive user motivation by presenting feedback already at early stages: after just 5 min of gathering calibration data, the systems are able to provide feedback and engage users in a mutual learning process. In this work, we investigate whether the retraining stage of co-adaptive BCIs can be adapted to a semi-supervised concept, in which only a small amount of labeled data is available and all additional data must be labeled by the BCI itself. The aim of the current work was to evaluate whether a semi-supervised co-adaptive BCI could successfully compete with a supervised co-adaptive BCI model. In a supporting two-class BCI study based on motor imagery tasks (190 trials per condition), we evaluated both approaches online in two separate groups of 10 participants, while we simulated the other approach offline in each group. Our results indicate that despite the lack of true labeled data, the semi-supervised BCI did not perform significantly worse (p > 0.05) than its supervised counterpart. We believe that these findings contribute to developing BCIs for long-term use, where continuous adaptation becomes imperative for maintaining meaningful BCI performance.
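The semi-supervised retraining concept described in the abstract can be sketched as a self-training loop: a classifier trained on a small labeled calibration set pseudo-labels each incoming trial and retrains on it only when its own prediction is confident. The sketch below is illustrative, not the paper's implementation: the synthetic features, the LDA classifier, and the 0.8 confidence threshold are all assumptions.

```python
# Minimal self-training sketch for a two-class BCI retraining stage.
# All features, thresholds, and the classifier choice are illustrative
# assumptions, not the method parameters used in the study.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Small labeled calibration set: 10 trials per class, 4 synthetic features.
X_lab = np.vstack([rng.normal(0.0, 1.0, (10, 4)),
                   rng.normal(1.5, 1.0, (10, 4))])
y_lab = np.array([0] * 10 + [1] * 10)
clf = LinearDiscriminantAnalysis().fit(X_lab, y_lab)

CONF_THRESH = 0.8  # assumed confidence criterion for accepting a pseudo-label

def retrain_on_trial(x_new):
    """Pseudo-label one unlabeled trial and retrain only if confident."""
    global X_lab, y_lab, clf
    proba = clf.predict_proba(x_new[None, :])[0]
    if proba.max() < CONF_THRESH:
        return False                       # trial rejected, no retraining
    X_lab = np.vstack([X_lab, x_new])      # keep the self-labeled trial
    y_lab = np.append(y_lab, proba.argmax())
    clf = LinearDiscriminantAnalysis().fit(X_lab, y_lab)
    return True                            # retraining performed

# Simulate a block of new unlabeled trials from both classes.
new_trials = np.vstack([rng.normal(0.0, 1.0, (20, 4)),
                        rng.normal(1.5, 1.0, (20, 4))])
n_retrains = sum(retrain_on_trial(x) for x in new_trials)
```

The confidence gate is what distinguishes this loop from supervised retraining: low-confidence trials are simply discarded rather than risk corrupting the training set with wrong pseudo-labels.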
Keywords: Brain–computer interface (BCI); Co-adaptive BCI; Motor imagery; Semi-supervised learning; Supervised learning
Year: 2019 PMID: 31522355 PMCID: PMC6828633 DOI: 10.1007/s11517-019-02047-1
Source DB: PubMed Journal: Med Biol Eng Comput ISSN: 0140-0118 Impact factor: 2.602
Fig. 1 Experimental setup, paradigm, and electrode layout. Top left: paradigm. At second 0, a cross appeared on the screen. At second 3, the cue was presented, followed by a 5-s imagery period. Top right: electrode setup. Thirteen electrodes (in red) were used, covering the motor cortex. Ground was positioned at AFz (green electrode), and an electrode clip at the left earlobe served as reference. Bottom: experimental timeline. The first 10 TPC were done without feedback to gather data for initial classifier calibration. Thereafter, participants received feedback according to their actions. For the first 40 TPC, model calibration was performed incorporating the supervised retraining unit (Initial). Thereafter, we changed the retraining concept for group B to the semi-supervised retraining unit, while we kept the supervised retraining concept for group A.
Fig. 2 Power spectral density (PSD) estimate per subject for channels C3, Cz, and C4. The abscissa shows the frequency (Hz), while the ordinate reflects the power (dB)
Fig. 3 Online accuracies of the single-trial classification of both groups. A total of 180 feedback trials per condition were evaluated. Second 0 represents the cue onset. Colored lines show the subject-specific performance results; the black line shows the grand average over all subjects
Intermediate performance evaluation of all 4 stages of the experiment (no significant difference between stages could be found within an approach and over all stages of both approaches)

| Stage | Group A Peak (%) | Group A Mean (%) | Group B Peak (%) | Group B Mean (%) |
|---|---|---|---|---|
| Initial (1–40 TPC) | 70.5 | 60.0 | 82.2 | 74.3 |
| Eval 1 (41–90 TPC) | 76.7 | 66.5 | 83.1 | 75.3 |
| Eval 2 (91–140 TPC) | 77.0 | 68.5 | 83.6 | 75.8 |
| Eval 3 (141–180 TPC) | 75.6 | 66.9 | 81.2 | 73.2 |
Fig. 4 Model comparison within group. The blue lines show the grand average over all subjects in the online part of the experiment. The red lines show the grand average of the offline simulation results of the corresponding model
Number of recurrent retrainings performed per subject

| Group A subject | Online (supervised) | Offline (semi-supervised) | Group B subject | Online (semi-supervised) | Offline (supervised) |
|---|---|---|---|---|---|
| S1 | 27 | 8 | S11 | 7 | 23 |
| S2 | 29 | 8 | S12 | 15 | 29 |
| S3 | 31 | 10 | S13 | 13 | 31 |
| S4 | 29 | 8 | S14 | 10 | 32 |
| S5 | 29 | 9 | S15 | 18 | 32 |
| S6 | 22 | 7 | S16 | 8 | 27 |
| S7 | 27 | 9 | S17 | 11 | 28 |
| S8 | 25 | 12 | S18 | 13 | 27 |
| S9 | 28 | 11 | S19 | 12 | 26 |
| S10 | 23 | 8 | S20 | 7 | 32 |
For S1–S10 (group A), we simulated the semi-supervised approach offline, while we simulated the supervised approach for S11–S20 (group B). Due to the stricter selection criterion of the semi-supervised approach, the number of retrainings performed with it was considerably lower
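The effect of that stricter selection criterion on the retraining count can be illustrated with a minimal sketch: a supervised retraining unit can use every incoming trial, because the true cue label is always known, whereas a semi-supervised unit accepts only trials its classifier pseudo-labels with high confidence. The synthetic data, LDA classifier, and 0.9 threshold below are assumptions for illustration, not the study's parameters.

```python
# Contrast of retraining counts under supervised vs. semi-supervised
# selection. Data, classifier, and threshold are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(1)

# Illustrative two-class feature data (e.g., band-power-like features).
X0 = rng.normal(0.0, 1.0, (30, 4))   # class-0 trials
X1 = rng.normal(1.2, 1.0, (30, 4))   # class-1 trials

# Small labeled seed set (5 trials per class), mirroring a short calibration.
X_seed = np.vstack([X0[:5], X1[:5]])
y_seed = np.array([0] * 5 + [1] * 5)
clf = LinearDiscriminantAnalysis().fit(X_seed, y_seed)

# Remaining trials arrive during the feedback phase.
X_rest = np.vstack([X0[5:], X1[5:]])

# Supervised: the true cue label is known, so every trial triggers retraining.
supervised_retrains = len(X_rest)

# Semi-supervised: accept a trial only if the classifier's own posterior
# exceeds an (assumed) confidence threshold.
CONF_THRESH = 0.9
posteriors = clf.predict_proba(X_rest).max(axis=1)
semi_retrains = int((posteriors >= CONF_THRESH).sum())
```

By construction, the semi-supervised count can never exceed the supervised one, which matches the pattern visible in the per-subject retraining counts above.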
Online and offline performance for both groups (mean accuracy was calculated over the feedback period from second 4 to second 8)

| Group A subject | Online Peak (%) | Online Mean (%) | Offline Peak (%) | Offline Mean (%) | Group B subject | Online Peak (%) | Online Mean (%) | Offline Peak (%) | Offline Mean (%) |
|---|---|---|---|---|---|---|---|---|---|
| S1 | 60.6 | 55.8 | 62.3 | 54.9 | S11 | 53.3 | 49.0 | 56.8 | 52.6 |
| S2 | 78.1 | 70.2 | 72.4 | 67.6 | S12 | 96.4 | 88.6 | 96.0 | 87.5 |
| S3 | 91.7 | 81.8 | 88.3 | 78.8 | S13 | 60.6 | 56.8 | 63.5 | 58.1 |
| S4 | 59.7 | 52.8 | 55.9 | 50.0 | S14 | 86.9 | 78.9 | 88.0 | 78.7 |
| S5 | 61.9 | 55.0 | 62.5 | 53.5 | S15 | 98.9 | 94.2 | 99.4 | 94.3 |
| S6 | 61.7 | 55.6 | 61.8 | 55.2 | S16 | 70.0 | 60.0 | 71.6 | 66.1 |
| S7 | 55.6 | 51.5 | 56.6 | 50.8 | S17 | 85.6 | 77.2 | 86.3 | 78.4 |
| S8 | 79.7 | 74.2 | 76.8 | 70.5 | S18 | 91.7 | 82.6 | 91.8 | 83.2 |
| S9 | 85.3 | 72.6 | 82.7 | 70.5 | S19 | 89.4 | 78.3 | 88.2 | 77.7 |
| S10 | 84.7 | 76.3 | 83.5 | 75.7 | S20 | 67.5 | 61.0 | 77.2 | 69.6 |
| Average | 71.9 | 64.6 | 70.3 | 62.8 | | 80.0 | 72.7 | 81.9 | 74.6 |