Literature DB >> 35482705

Instant classification for the spatially-coded BCI.

Alexander Maÿe1, Raika Rauterberg1,2, Andreas K Engel1.   

Abstract

The spatially-coded SSVEP BCI exploits changes in the topography of the steady-state visual evoked response to visual flicker stimulation in the extrafoveal field of view. In contrast to frequency-coded SSVEP BCIs, the operator does not gaze into any flickering lights; therefore, this paradigm can reduce visual fatigue. Other advantages include high classification accuracies and a simplified stimulation setup. Previous studies of the paradigm used stimulation intervals of a fixed duration. For frequency-coded SSVEP BCIs, it has been shown that dynamically adjusting the trial duration can increase the system's information transfer rate (ITR). We therefore investigated whether a similar increase could be achieved for spatially-coded BCIs by applying dynamic stopping methods. To this end we introduced a new stopping criterion which combines the likelihood of the classification result and its stability across larger data windows. Whereas the BCI achieved an average ITR of 28.4±6.4 bits/min with fixed intervals, dynamic intervals increased the performance to 81.1±44.4 bits/min. Users were able to maintain performance for up to 60 minutes of continuous operation. We suggest that the dynamic response time might have worked as a kind of temporal feedback which allowed operators to optimize their brain signals and compensate for fatigue.


Year:  2022        PMID: 35482705      PMCID: PMC9049359          DOI: 10.1371/journal.pone.0267548

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

In current research on brain-computer interfaces (BCIs), the optimal trial duration or data size for recognizing user intent is typically derived from an offline analysis of data recorded from several participants of a study. The optimization finds a trade-off between longer trial durations, in which more data can be recorded and classification accuracy thereby increases, and shorter trials, which increase the number of commands per unit of time. The information transfer rate (ITR) is a derived performance indicator for BCIs which combines accuracy and duration and is therefore frequently used as the objective function. But of course all other performance measures that have time in the denominator, like output characters per minute or utility [1], also increase with shorter trials. In general, the optimal parameters are fixed for each participant, or even for the studied population, after the training phase, but whether they are also optimal for the application or online phase is less frequently considered. Fatigue, concentration lapses and other factors affect the brain signal quality, which is why a more fine-grained or dynamic adaptation of the trial duration would be desirable for achieving short reaction latencies of a BCI.

The abundance of sophisticated methods for improving classification accuracy notwithstanding, reducing the amount of data is an important approach for increasing a BCI’s output rate. Compared to the effort that is being made towards increasing accuracies, surprisingly little research is dedicated to reducing the amount of data needed for classification and hence the trial duration. A systematic review of the few existing methods for a “dynamic stopping” of data recording to maximize ITR in P300-based BCIs is given in [2]. The evaluation showed that all methods improved performance by factors of 2 to 4. Almost all of them were robust in the sense that even in the worst case, the performance of the BCI did not drop below that for fixed trial durations.

More recently, several dynamic stopping (DS) approaches have been developed for BCIs which employ the steady-state visual evoked potential (SSVEP). One method is to stop data recording when the SSVEP response strength reaches a threshold [3]. Correlation with a sinusoidal template signal is used as an indicator of response strength, and the correlation strength at the trial duration where the ITR peaks (in a calibration data set) is taken as the threshold. Another idea is to output a decision when the class-conditional probabilities reach a given threshold [4]. As classification frequently relies on correlations between class-specific templates and the observed data, the difference between the strongest correlation and the second-best match can be taken as a certainty measure, and a decision can be made when it exceeds a threshold [5]. The threshold can be dynamically determined by converting features of the EEG signal to target probabilities through the softmax function and using them to weigh the cost of collecting more data against the certainty of a correct classification [6]. Based on a model of the distribution of features when the user is attending to the target and to non-targets, Bayes’ theorem can be used to calculate the posterior probability of a correct decision, and online data recording can be stopped when it crosses a threshold [7]. The thresholds can be specific for each target and determined by the separability of the target and non-target feature distributions [8].
What all these approaches have in common is that their computational overhead is negligible; it is therefore somewhat surprising that DS methods have not become a standard component of current BCI paradigms. In SSVEP-based as well as P300-based BCIs, stimulation is cyclic: a brightness or color contrast is switched back and forth to generate a flicker in the former, and a row/column combination is highlighted to elicit a P300 response in the latter. It seems obvious that the most useful time points for evaluating whether a classification can be made are when another cycle of the stimulation has finished. Hence the cycle duration determines the temporal granularity of the dynamic stopping. Lower cycle times or, correspondingly, higher stimulation frequencies afford finer granularity and hence better adaptivity to the signal quality and faster responses of the BCI. On the other hand, higher stimulation frequencies are known to elicit weaker SSVEP responses [9-11]. We therefore wanted to explore whether and to what extent shorter stimulation cycles can reduce the BCI’s latency and increase its ITR. To this end, we used a flicker stimulus of 60 Hz. Compared to the alpha and beta frequency bands which are classically used in SSVEP BCIs, neuronal background activity at this frequency is lower, resulting in a similar signal-to-noise ratio and classification accuracy [11, 12]. Since higher stimulation frequencies generally cause less visual fatigue [10, 13], their widespread application in SSVEP BCIs could help improve user comfort.

The second innovative aspect of our investigation is the introduction of DS in a BCI paradigm which exploits the different topographies of the response evoked by a single flicker that appears at different locations in the visual field [14]. Whereas in the majority of SSVEP BCIs the control channels are encoded in different frequencies and/or phases of a set of flicker stimuli, in this approach different spatial positions relative to a single flicker define the control targets. One important advantage of this approach is that, unlike in frequency-coded SSVEP BCIs in which the user has to gaze at the flicker stimulus, in our spatially-coded SSVEP BCI the flicker always appears in the extrafoveal field. We have argued that this property likely has advantages with respect to visual fatigue [15]. In addition, the high stimulation frequency which increases the temporal granularity of the dynamic stopping can at the same time be expected to improve user comfort.

Materials and methods

Participants and EEG recording

Fourteen subjects participated in the study. They were between 21 and 57 years old (mean: 30), and 4 of them were female. Ten subjects had previously participated in BCI studies, and two subjects were among the authors of this study. All of them had normal vision and were free of neurological and ophthalmological disorders. The study was approved by the ethics committee of the medical association of the city of Hamburg, Germany. Informed consent was signed by all participants before commencing the experiment. The experiment took place in a regular lab environment with ambient illumination from ceiling lights and without any electrical or acoustic shielding. EEG was recorded from 32 active electrodes placed according to the international 10/20 system using an ActiveTwo AD-box amplifier (BioSemi Instrumentation, Amsterdam, The Netherlands). The sampling rate was 2048 Hz. The Lab Streaming Layer (https://github.com/sccn/labstreaminglayer/) was used to synchronize the EEG data with triggers, write the data to file, and stream them into the online processing.

Stimulation and experimental procedure

The stimulation was based on a previous study in which we introduced the concept of the spatially-coded SSVEP BCI [14]. A large disc in the center of the screen (19° visual angle) provided the flicker by flipping between black and white, and the 5 targets were arranged above, below, to the left of, to the right of, and in the center of the disc (see Fig 1). It is important to note that the targets that the user fixated in order to select the associated command were not flickering. Therefore, the static target in the center of the disc effectively made the flickering area an annulus. Matlab (The Mathworks, Natick, MA, USA) and the Psychophysics Toolbox [16-18] were used to generate the stimulation and to control the experiment. We used an EIZO FlexScan F931 CRT monitor at a 120 Hz refresh rate for displaying the stimulus, and the viewing distance was 50 cm.
Fig 1

Schema of the visual stimulation and the procedure in the training session.

Participants were instructed to gaze at all five targets in ascending order. The target to be attended was cued for two seconds before it was fixated for four seconds. After fixating the five targets in sequence, there was a break of two seconds before the first target of the next sequence was highlighted. Prior to every new sequence, the numbers 1-5 were randomly assigned to the target positions. For the training, the numeric values were not relevant, because the target position to be fixated was cued. In the online session, however, there were no cues, and participants had to gaze at the targets in ascending numeric order. Hence the randomized assignment during the training familiarized participants with the procedure of the online session. Before starting the experiment, participants were given the opportunity to explore the stimulation and ask questions. They were requested to avoid head movements, eye blinks and swallowing during a sequence; however, they could do so during the breaks between sequences. The training session comprised 30 sequences. The resulting training data were pre-processed and used to train the online classifier. In the subsequent online session, subjects were instructed to gaze at all five numbers in ascending order. Again, the numbers were distributed randomly after each sequence. This time, the participants had 5 seconds at the beginning of each sequence to memorize the new positions of all numbers. Afterwards, they were to keep their gaze fixed on each target until an audible signal triggered them to look at the next one. The pitch of the beep provided feedback on whether the corresponding classification was successful or not. Following the beep, subjects had half a second to adjust their gaze to the next target. The trial length was not fixed in the online session. Instead, the dynamic stopping algorithm (see below) started the next trial whenever a sufficiently reliable decision for the current data window could be made. A schematic of the online task is shown in Fig 2.
Fig 2

Schema of the visual stimulation and the procedure in the online session.

After each sequence, the time needed to complete the five trials was displayed. This feedback was intended to encourage participants to focus and shift their gaze as swiftly as possible. Five sequences formed one block. Within a block, all sequences followed each other, with a five-second break at the beginning of each to memorize the new number positions. The online session comprised at least 10 blocks. After the tenth block, the participant could continue the experiment ad libitum and try to improve their own performance.

Data analysis

Data were filtered by a zero-phase finite impulse response filter with a 55-65 Hz pass band. The optimal trade-off between low filter orders, which enable smaller data windows, and high ITRs was determined on the training data of each participant by grid search. A standard canonical correlation analysis (CCA [19]) was then employed to calculate correlations with a reference signal, which were used as features for the classification. A detailed description of the classification method is given in [14]; the main idea is reproduced here. CCA determines spatial filters A and B such that the set of correlations r between two multi-variate signals X and Y is maximized:

$$ r = \max_{A,B} \operatorname{corr}(XA,\, YB) = \max_{A,B} \frac{A^{\mathsf T} X^{\mathsf T} Y B}{\sqrt{(A^{\mathsf T} X^{\mathsf T} X A)\,(B^{\mathsf T} Y^{\mathsf T} Y B)}} \qquad (1) $$

Here, $X \in \mathbb{R}^{T \times L}$ are T samples of an L-channel EEG signal, and Y is a sinusoidal reference signal at the frequency of the flicker stimulation,

$$ Y = \left[\, \sin(2\pi f_{stim}\, t),\; \cos(2\pi f_{stim}\, t) \,\right] \qquad (2) $$

with t = [0…T/f] in steps of the inverse of the sampling frequency f and $f_{stim}$ the flicker frequency. The canoncorr function in Matlab solves Eq 1 efficiently by QR and singular value decomposition. For each of the C = 5 targets, the corresponding trials from the training session were concatenated along the time dimension, and the class-specific filters A and B were determined. Then each trial was filtered with all c = 1…C filter sets, and the resulting canonical correlations were merged into the feature vector f for the trial. Together with the corresponding class labels, these feature vectors comprised the training data for the LDA classifier. This classification method is frequently used in BCI applications, and it performed reasonably well in our previous studies [14, 15].

LDA [20] classifies a feature vector f according to the maximum posterior probability of f belonging to any of the c classes:

$$ \hat{c} = \operatorname*{arg\,max}_{c = 1 \ldots C} P(c \mid \mathbf{f}) \qquad (3) $$

Using Bayes’ theorem, the posterior probability can be calculated by

$$ P(c \mid \mathbf{f}) = \frac{P(\mathbf{f} \mid c)\, p(c)}{\sum_{c'=1}^{C} P(\mathbf{f} \mid c')\, p(c')} \qquad (4) $$

where p(c) is the prior probability of class c. The class-conditional probability distribution P(f|c) is estimated using d-dimensional multi-variate Gaussians

$$ P(\mathbf{f} \mid c) = \frac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}} \exp\!\left( -\tfrac{1}{2} (\mathbf{f} - \mu_c)^{\mathsf T} \Sigma^{-1} (\mathbf{f} - \mu_c) \right) \qquad (5) $$

with the class centroids $\mu_c$ and the covariance matrix $\Sigma$ calculated from the training data. The implementation of LDA in Matlab’s classify function was used. To estimate the classification accuracy on the training data, a leave-one-sample-out cross-validation approach was employed: for each trial in the training data, the procedure for calculating feature vectors was repeated with the respective trial excluded, and the classifier outputs for the excluded trials were used to estimate the offline classification accuracy. ITR was then calculated by

$$ ITR = \frac{60}{T} \left[ \log_2 C + G \log_2 G + (1 - G)\, \log_2 \frac{1 - G}{C - 1} \right] \qquad (6) $$

where G is the classification accuracy, C the number of classes and T the time window in seconds [21-23]. The optimal fixed data length was determined by varying the data length and finding the maximum of the ITR for each participant.
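
To make the pipeline concrete, the following Matlab sketch outlines the steps described above. It is a minimal sketch, not the authors' released code: the variable names trials (a cell array of band-pass filtered training trials per target, samples x channels), fs (sampling rate) and fstim (flicker frequency) are assumptions, and plain resubstitution stands in for the leave-one-sample-out cross-validation used in the study.

    C = 5;                                       % number of targets
    T = size(trials{1}{1}, 1);                   % samples per training trial
    t = (0:T-1)' / fs;
    Y = [sin(2*pi*fstim*t), cos(2*pi*fstim*t)];  % sinusoidal reference (Eq 2)

    % Class-specific spatial filters via CCA (Eq 1)
    A = cell(1, C);  B = cell(1, C);
    for c = 1:C
        Xc   = cat(1, trials{c}{:});             % concatenate trials along time
        refc = repmat(Y, numel(trials{c}), 1);   % trial-locked reference of equal length
        [A{c}, B{c}] = canoncorr(Xc, refc);
    end

    % Feature vectors: canonical correlations of every trial with every filter set
    feat = [];  labels = [];
    for c = 1:C
        for k = 1:numel(trials{c})
            X = trials{c}{k};  f = [];
            for j = 1:C
                U = X * A{j};  V = Y * B{j};
                for i = 1:size(U, 2)
                    r = corrcoef(U(:, i), V(:, i));
                    f(end+1) = r(1, 2);          %#ok<AGROW>
                end
            end
            feat(end+1, :) = f;  labels(end+1, 1) = c;  %#ok<AGROW>
        end
    end

    % LDA classification (Eqs 3-5) and ITR (Eq 6)
    predicted = classify(feat, feat, labels);    % resubstitution; the study used leave-one-out
    G   = mean(predicted == labels);             % classification accuracy
    Tw  = T / fs;                                % window length in seconds
    ITR = 60/Tw * (log2(C) + G*log2(G) + (1-G)*log2((1-G)/(C-1)));  % NaN if G == 1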

Classification with dynamic time windows

The critical component of any dynamic stopping method is a strategy for determining the reliability of a classification result. We adopted the simple idea that consecutive classifications on an increasingly larger data window should return the same result. This heuristic has been successfully used in a P300-based BCI spelling application, where it increased the bit rate by 20% on average [24]. We found that performance could be further improved by combining it with a threshold on the posterior probability P(c|f) of the winning class from the LDA classifier (Eq 4). This is the equivalent of the Bayes criterion for DS in SSVEP BCIs that use the magnitude of the correlation coefficients for classification [7, 8, 25]. Hence the classification result is final when a fixed number N of classifications in succession have each yielded the same result with at least a minimum posterior probability of P. The optimal combination of the parameters N and P for which the BCI shows the highest ITR was determined by grid search. A schematic of the DS algorithm is shown in Fig 3.
Fig 3

Flow diagram of the dynamic stopping algorithm.
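
A minimal sketch of this stopping rule in Matlab, assuming two hypothetical helpers: pullChunk, which returns the next chunk of EEG samples for the current trial, and classifyWindow, which returns the winning class and its LDA posterior probability (Eq 4) for all data collected so far:

    N = 2;  P = 0.95;                  % stopping parameters used in this study (Fig 10)
    X   = [];                          % growing data window of the current trial
    run = [];                          % recent classification results
    while true
        X = [X; pullChunk()];          % hypothetical helper: append the next chunk
        [cHat, post] = classifyWindow(X);   % hypothetical helper: class and posterior
        if post >= P
            run(end+1) = cHat;         %#ok<AGROW>
        else
            run = [];                  % a low-confidence result breaks the succession
        end
        if numel(run) >= N && all(run(end-N+1:end) == run(end))
            break                      % N consistent, confident results in a row: stop
        end
    end
    selectedTarget = cHat;             % final decision for this trial

The run of consistent results is reset whenever the posterior falls below P; a change of the winning class is caught by the equality check over the last N entries.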

The update frequency for evaluating the stopping criterion is, in principle, only limited by the sampling rate of the EEG amplifier and the computing power of the machine running the DS algorithm. It turned out, however, that the software interface to the amplifier always returned EEG data in chunks that were multiples of 131 samples. At a sampling rate of 2048 Hz, this corresponds to a maximum update frequency of 15.6 Hz and limits the temporal resolution of the dynamically adjusted trial duration to about 64 ms. The 60 Hz frequency of the visual stimulation would permit a resolution of 16.7 ms, but the technical setup did not allow this advantage to play out in our study. Dedicated amplifiers sending data packets as frequently as every 4 ms have been developed [8]. In our DS algorithm, the feature vectors to be classified were calculated from data windows which grew with every iteration. We found that classification accuracy was compromised when the classifier was trained on the raw training data with their fixed trial length of 4 seconds. The issue was solved by trimming the training data to the same length as the data to be classified; the spatial filters A and B as well as the training feature vectors were re-calculated for each data length.
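
The re-training for every window length could, for example, be implemented by trimming each training trial to the current number of samples before recomputing the filters and the training features; a hedged sketch, reusing the assumed names from the sketch in the previous section:

    nSamp = size(X, 1);                        % length of the current online window
    tw = (0:nSamp-1)' / fs;
    Yw = [sin(2*pi*fstim*tw), cos(2*pi*fstim*tw)];
    for c = 1:C
        trimmed = cellfun(@(tr) tr(1:nSamp, :), trials{c}, 'UniformOutput', false);
        Xc   = cat(1, trimmed{:});
        refc = repmat(Yw, numel(trimmed), 1);
        [A{c}, B{c}] = canoncorr(Xc, refc);    % length-matched filters (Eq 1)
    end
    % ...then rebuild the training feature vectors from the trimmed trials and
    % re-train the LDA classifier exactly as before.

In practice, such length-specific classifiers can be pre-computed for every possible chunk boundary before the online session, so that no CCA has to be solved during operation.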

Results

Classification time and accuracy

We first analyzed the relation between the size of the data window and the accuracy of the classification. As expected, larger windows entailed higher classification accuracies (Fig 4A). Beyond around 2 s, however, the increase became marginal, and the average offline accuracy approached 93%. The corresponding ITRs peaked at data windows of 1 s or less for all participants but one (Fig 4B).
Fig 4

Effect of the data window size.

A: Classification accuracy. B: ITR.

We then analyzed the online performance of the DS classification for each participant. Fig 5 shows the distribution of ITRs in the online session. The average ITRs that participants reached across blocks ranged from 21.6 bits/min to 166.3 bits/min. Peak performance in a single block was 262.8 bits/min, and the lowest performance was 9.29 bits/min. Whereas classification accuracy was higher in the training session on average (0.93±0.08 vs. 0.85±0.08, paired Student’s t-test p = 1 × 10−5), the early stopping in the online session yielded a substantially increased ITR (28.39±6.4 bits/min vs. 81.04±44.53 bits/min, p = 2.6 × 10−4).
Fig 5

Online and offline performance (ITR) of the participants.

Bars display the mean ITR across the 10 or more blocks that each participant completed; whiskers show the standard deviation. Asterisks mark differences of the ITRs for the optimal fixed window size and dynamic stopping at the 0.05 significance level (paired Student’s t-test).

We also optimized the data window size to maximize ITR on the training data, and we used this window to estimate the ITR that we would have observed in the online experiment if a fixed data window had been used. This was possible because DS resulted in data windows that typically were larger than the optimal fixed size. The comparison between DS and the optimal fixed window in Fig 5 suggests that DS improved the ITR for most participants; in 6 of them this improvement reached statistical significance at the 0.05 level. Only for two participants did the optimal fixed window perform significantly better than DS. Numerical results for the DS method are listed in Table 1.
Table 1

Performance of the dynamic classification for each subject.

Subject   | Classif. time [s] | Accuracy online | Accuracy offline | ITR [bits/min]
1         | 1.3               | 0.77            | 0.89             | 50.95
2         | 1.3               | 0.91            | 1.00             | 81.91
3         | 0.7               | 0.91            | 1.00             | 166.30
4         | 1.4               | 0.91            | 0.97             | 74.66
5         | 1.0               | 0.94            | 0.96             | 116.56
6         | 1.3               | 0.94            | 0.95             | 88.71
7         | 1.9               | 0.80            | 0.89             | 38.76
8         | 2.8               | 0.75            | 0.79             | 22.78
9         | 1.9               | 0.88            | 0.97             | 51.50
10        | 2.1               | 0.66            | 0.75             | 21.61
11        | 0.7               | 0.92            | 1.00             | 150.26
12        | 0.8               | 0.86            | 1.00             | 108.58
13        | 0.9               | 0.86            | 0.93             | 103.91
14        | 1.2               | 0.79            | 0.95             | 59.45
Mean      | 1.4               | 0.90            | 0.93             | 81.14
Std. dev. | 0.6               | 0.08            | 0.08             | 44.43
The average size of the dynamic window in the online session was between 1 and 1.5 s for all five targets. Target 3, at the bottom of the flicker stimulus, had the longest average window size and the largest variance; target 1, in the center, required shorter windows and was recognized with the lowest variance in window size (Fig 6A). We presume that this is related to the size of the flicker in the visual field when the participant gazed at the respective target. This area was small when gazing at the bottom target and large when gazing at the center of the stimulus; a larger flickering area stimulates more of the retina and of the cortex and leads to stronger responses. In addition, the SSVEP response seems to be stronger for stimuli in the lower visual field than in the upper [11]. This, however, does not seem to affect the classification accuracy, which was similar for targets 3 and 1, with a few more errors for the latter (Fig 6B).
Fig 6

Classification accuracy (A) and trial duration (B) for each target.

Effect of the SSVEP response magnitude

The performance variation across participants that is evident from Fig 5 prompted us to investigate whether it bears a relation to the magnitude of the SSVEP response. We quantified the response strength by the ratio of the EEG power at the stimulation frequency to the average power in the interval from 58.75 Hz to 61.25 Hz excluding 60 Hz (signal-to-noise ratio, SNR). Indeed, we found that the median ITR was strongly correlated with the response strength (r = 0.79, p = 7 × 10−4, Fig 7).
Fig 7

Relation between SSVEP response magnitude (on the abscissa) and ITR (on the ordinate) for all participants (numbered dots).
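
A minimal Matlab sketch of this SNR measure, assuming eeg holds the samples of one channel as a column vector and fs = 2048:

    nfft = length(eeg);
    pxx  = abs(fft(eeg)).^2 / nfft;              % single-channel periodogram
    freq = (0:nfft-1)' * fs / nfft;
    [~, sigBin] = min(abs(freq - 60));           % bin closest to the 60 Hz flicker
    noiseIdx = find(freq >= 58.75 & freq <= 61.25);
    noiseIdx = setdiff(noiseIdx, sigBin);        % exclude the stimulation frequency itself
    snr = pxx(sigBin) / mean(pxx(noiseIdx));     % response strength relative to background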

Performance over time

Every participant completed at least 10 blocks in the online session. This allowed us to assess the stability of the performance over the duration of use. To this end, we calculated the percentage change of the ITR over the course of the online session relative to the ITR in the first online block. Fig 8 suggests that there is a weak trend towards higher ITRs over time.
Fig 8

ITR change over the online session w.r.t. the first online block for each participant.

Each subject completed a minimum of 10 blocks and continued thereafter at their own discretion.


Analysis of meta parameters

The variable data length required some consideration concerning the bandpass filtering of the raw data. The filter order of the bandpass filter constrains the minimum data length. In order to classify short data windows without the loss of accuracy caused by ineffective filtering, we searched for a trade-off between a low filter order and high ITR values. To this end, we recorded additional data sets from subjects 10–14, who later also participated in the study. Analyzing these data sets, we found that a filter order of 66 yielded good results on average (Fig 9).
Fig 9

Effect of filter order on ITR.
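
As an illustration, the search could be implemented along the following lines. This is a hedged sketch: the candidate grid and the helper evalITR, which would wrap the feature extraction, cross-validation and Eq 6 from the sketches above, are assumptions.

    orders = 22:22:220;                          % assumed candidate filter orders
    itr = zeros(size(orders));
    for i = 1:numel(orders)
        b = fir1(orders(i), [55 65] / (fs/2));   % 55-65 Hz band-pass FIR coefficients
        filtered = filtfilt(b, 1, rawEEG);       % zero-phase filtering (samples x channels)
        itr(i) = evalITR(filtered);              % hypothetical helper
    end
    [~, best] = max(itr);
    bestOrder = orders(best);                    % an order of 66 worked well on average here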

To make the dynamic stopping as effective as possible, we searched for an optimal combination of the two stopping parameters: the minimum posterior probability P of a classification, obtained from the discriminant analysis, and the number N of consecutive classifications yielding the same result. We analyzed the ITR as a two-dimensional function of these two parameters. Based on Fig 10, P = 0.95 and N = 2 were chosen for all subjects.
Fig 10

Influence of parameters N and P of the dynamic stopping on the ITR.
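
A hedged sketch of this two-dimensional search; simulateDS is a hypothetical helper that replays the calibration data through the stopping rule of Fig 3 for a given parameter pair and returns the achieved accuracy and the mean trial duration.

    Ns = 1:5;  Ps = 0.80:0.05:0.95;              % assumed candidate grids
    itr = zeros(numel(Ns), numel(Ps));
    for i = 1:numel(Ns)
        for j = 1:numel(Ps)
            [G, Tw] = simulateDS(Ns(i), Ps(j));  % hypothetical helper
            itr(i, j) = 60/Tw * (log2(C) + G*log2(G) + (1-G)*log2((1-G)/(C-1)));
        end
    end
    [~, k] = max(itr(:));
    [iBest, jBest] = ind2sub(size(itr), k);      % here, N = 2 and P = 0.95 were chosen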

Discussion

The proposed DS method yielded a performance improvement for the majority of the participants in the study. For 6 of the 14 participants, the ITR showed a systematic increase in comparison to a fixed trial duration. In another 6 participants, there was no or only a small improvement. ITR was lower for dynamically adjusted trial durations in the remaining two participants. A comparison with related studies on dynamic stopping methods for frequency-coded SSVEP BCIs shows that the spatially-coded SSVEP BCI gains the largest relative increase in ITR when the trial duration is dynamic (see Table 2). Attention should be paid, however, to the method for evaluating the performance for the fixed optimal trial length. Some studies use a training data set for finding the optimal trial length and then evaluate the performance on the same data, likely leading to an overestimation of the performance. In contrast, we determined the optimal fixed trial duration from the training data and evaluated it on the same data as our DS method, i.e., the online data. This was possible because the dynamic trial durations were typically a little longer than the fixed optimal length.
Table 2

Comparison of ITR with related studies.

Reference | ITR fixed length (bit/min) | ITR dynamic length (bit/min) | Improvement (%)
[3]       | 37.71                      | 41.08                        | 8.94
[5]       |                            | 57.3                         |
[6]       | 134.25                     | 164.72                       | 22.7
[7]       | 239.6                      | 257.6                        | 7.5
[8]       | 300.7                      | 330.4                        | 9.9
our study | 63.44                      | 81.14                        | 27.9
The observed classification accuracies of over 80% for trials of 1 s or longer confirm the finding of earlier studies that high flicker frequencies still afford reliable classification of the SSVEP response despite lower response amplitudes [10–13, 26]. Across the participants of the study, performance ranged from about 22 to 166 bits/min. In order to find possible causes of this substantial variation, the relation between ITR and SSVEP response strength was analyzed. We found a significant correlation between these parameters; thus, the effectiveness of the DS depends on the individual SNR of the SSVEP response. To remedy the performance variation, we suggest running a quick test of the SSVEP SNR before using the BCI and adjusting the flicker frequency as appropriate. In general, if more SSVEP BCI studies employed high-frequency stimulation, BCI researchers could glean a better overview of the response properties across the population.

As mentioned before, the average trial duration of 1-1.5 s (see Fig 6A) was typically longer than the optimal fixed window size of 1 s or below (cf. Fig 4). This may indicate that our stopping criterion is rather conservative and has room for improvement. Whereas the average trial duration of the DS method was calculated from online data, the optimal fixed window size was estimated from training data which were recorded with a fixed trial duration of 4 s. Hence the difference may also result from a generally lower classification accuracy in the online session incurred by the tighter timing of the dynamic regime. For one thing, memorizing the target locations and fixating them in sequence during the online session may have been more challenging than just gazing at the cued location and switching to the next at a low pace during the training. For another, the exertion caused by trying to improve their own performance in each new block might have degraded the EEG signal quality in the online session in comparison to that in the training session.

Despite the challenges of the fast gaze changes in the online session, we observed no deterioration of the performance over the usage time of the BCI. There are two factors which might have influenced the performance over time. On the one hand, a training effect, fueled by the auditory feedback and the participants’ incentive to reach their personal best time, might have increased the BCI performance. On the other hand, the participants also experienced fatigue over the course of the online experiment. It seems that both effects roughly balanced out. With respect to a potential training effect in the application phase, we would like to point out that the online session may implicitly have worked as a neurofeedback system. The auditory feedback informed the participant whether the command was correctly recognized or not. In addition, DS generated feedback through the recognition latency: when the trial duration is dynamically adjusted, better EEG signal quality allows the classifier to use shorter data segments and hence output the command faster. Participants may have experienced the joint accuracy and latency feedback as rewarding, and it might have trained them to generate EEG signals with better discriminability. Therefore, it would be interesting to investigate the behavior of the performance over longer usage times. Further improvements may be achieved by a more detailed consideration of the meta-parameters of the DS method.
In this study, the number of repetitions N and the probability threshold P were fixed for all participants. There are indications, however, that subject-specific choices could further improve the performance for some participants. Since we observed a direct relation between the strength of the SSVEP response and the ITR, adjusting the stimulation frequency to the individual response properties could be an additional means to fully exploit the potential of our DS method for improving the performance of the spatially-coded SSVEP BCI.
Please ensure that your manuscript meets PLOS ONE's style requirements, including those for file naming. The PLOS ONE style templates can be found at https://journals.plos.org/plosone/s/file?id=wjVg/PLOSOne_formatting_sample_main_body.pdf and https://journals.plos.org/plosone/s/file?id=ba62/PLOSOne_formatting_sample_title_authors_affiliations.pdf 2. Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. 3. We note that you have stated that you will provide repository information for your data at acceptance. Should your manuscript be accepted for publication, we will hold it until you provide the relevant accession numbers or DOIs necessary to access your data. If you wish to make changes to your Data Availability statement, please describe these changes in your cover letter and we will update your Data Availability statement to reflect the information you provide. 4. Please ensure that you refer to Figure 2 in your text as, if accepted, production will need this reference to link the reader to the figure. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes ********** 2. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: Yes ********** 3. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: No ********** 4. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes ********** 5. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. 
(Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: I have following questions: 1] The authors should provide the details of canonical correlation analysis (CCA). The authors should also mention and provide the details of software used for analysis and classification (Matlab /Python etc .. and code links) 2] Why authors selected only LDA classifier ? 3] The table mentioning the classification accuracies and other performance measure need to be added. 4] The authors should provide the dataset link. ********** 6. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 3 Sep 2021 Thank you very much for reviewing our manuscript. We think the issues raised by the reviewer are appropriate, and we addressed them in the revised manuscript as described below: 1] The authors should provide the details of canonical correlation analysis (CCA). The authors should also mention and provide the details of software used for analysis and classification (Matlab /Python etc .. and code links) We added a statement about the implementation of the CCA and the classifier we used in our analyses in the section “Material and methods/Data analysis”. 2] Why authors selected only LDA classifier ? In the same section, we now give the rationale for using LDA. 3] The table mentioning the classification accuracies and other performance measure need to be added. A table with numerical values of various performance measures has been added to the section “Results/Classification time and accuracy”. 4] The authors should provide the dataset link. We uploaded the dataset to the Zenodo repository and provide the link in the submission form. 20 Dec 2021
PONE-D-21-13493R1
Instant classification for the spatially-coded BCI
PLOS ONE Dear Dr. Maye, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
 
Please submit your revised manuscript by Feb 03 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Saeed Mian Qaisar, Ph.D. Academic Editor PLOS ONE Additional Editor Comments (if provided): Dear Authors, Reviewers have now commented on your paper. They are advising that you revise your manuscript. If you are prepared to undertake the work required, I would be please to reconsider my decision. The reviewer comments can be found at the end of this email or can be accessed online. [Note: HTML markup is below. Please do not edit.] Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #1: All comments have been addressed Reviewer #2: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #1: Yes Reviewer #2: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #1: N/A Reviewer #2: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. 
If there are restrictions on publicly sharing data—e.g. participant privacy or use of data from a third party—those must be specified. Reviewer #1: Yes Reviewer #2: No ********** 5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here. Reviewer #1: Yes Reviewer #2: Yes ********** 6. Review Comments to the Author Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters) Reviewer #1: (No Response) Reviewer #2: Authors proposed spatially-coded SSVEP BCI study. Following comments should consider to improve the quality of the article. 1. The authors failed to show the novelty in their work. Please clarify the innovation in the study. 2. Please re-write the abstract as it looked a very simple one. Abstract should discuss three important things. (i). Limitations of available literature (ii) Method proposed by the author with technical information (iii) advantages and application of proposed method 3. I recommend authors to use multiscale principal component analysis (MSPCA) which is a combination of PCA and wavelets and useful for noise removal from network packets? The details of MSPCA can be found in “Motor imagery BCI classification based on novel two-dimensional modelling in empirical wavelet transform” 4. For BCI system, signal decomposition methods always play significant role. I recommend authors to have a look on following article “Motor Imagery EEG Signals Classification Based on Mode Amplitude and Frequency Components Using Empirical Wavelet Transform” 5. Did authors try to use non-linear features for correct identification in BCI? I recommend authors to include discussion of mean energy, mean Teager-Kaiser energy, SHANNON WAVELET ENTROPY and Log energy entropy. 6. The combination of signal decomposition with dimension reduction techniques along with neural networks can be one effective tool for both subject dependent and independent BCI frameworks. Authors need to discuss this issue; detail may be found in “Exploiting dimensionality reduction and neural network techniques for the development of expert brain–computer interfaces”. 7. The authors recorded dataset from very few subjects. Is it possible to collect dataset from more subjects? If it is not possible, at least a discussion is needed for a framework tested on 58 subjects. See following article “Towards the development of versatile brain-computer interfaces” 8. Please provide a comprehensive comparison of your study with the available literature in terms of classification accuracy, number of channels, features, and execution time with the following articles, “A new framework for automatic detection of motor and mental imagery EEG signals for robust BCI systems”, “A Matrix Determinant Feature Extraction Approach for Decoding Motor and Mental Imagery EEG in Subject Specific Tasks”, “Motor imagery BCI classification based on novel two-dimensional modelling in empirical wavelet transform”, “Identification of Motor and Mental Imagery EEG in Two and Multiclass Subject-Dependent Tasks Using Successive Decomposition Index” 9. 
Please provide the details of future direction and possible solutions to continue this topic. 10. Finally, I suggest authors to sit with English native speaker to improve the writing of proposed work. ********** 7. PLOS authors have the option to publish the peer review history of their article (what does this mean?). If published, this will include your full peer review and any attached files. If you choose “no”, your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy. Reviewer #1: No Reviewer #2: No [NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.] While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step. 18 Feb 2022 1. The authors failed to show the novelty in their work. Please clarify the innovation in the study. We made the innovation more explicit in the rewritten Abstract. We think the two innovative aspects also become clear at the end of the Introduction. 2. Please re-write the abstract as it looked a very simple one. Abstract should discuss three important things. (i). Limitations of available literature (ii) Method proposed by the author with technical information (iii) advantages and application of proposed method. We rewrote the abstract according to the suggested structure. 3. I recommend authors to use multiscale principal component analysis (MSPCA) which is a combination of PCA and wavelets and useful for noise removal from network packets? The details of MSPCA can be found in “Motor imagery BCI classification based on novel two-dimensional modelling in empirical wavelet transform” We have received some more suggestions for alternative data processing and analysis methods during the revision process of our previous publications on the spatially-coded SSVEP BCI. We explored them all, but none of them outperformed the approach we describe in the manuscript. In addition, we also investigated new analysis methods for SSVEP BCIs like task-related component analysis (TRCA, Tanaka et al., 2013), individual-template CCA (Bin et al., 2011) or convolutional neural networks (Waytowich et al., 2018). For noise removal in SSVEP BCIs, the maximum signal fraction analysis method (MSFA, Wei et al., 2019) has been developed. We found that it can improve the classification accuracy in some circumstances, but that it does not provide an advantage for the system we present in the manuscript. Whereas the application of data processing methods which have shown to be useful in motor-imagery BCIs for SSVEP BCI is certainly an interesting idea, we suggest exploring their potential in a future study. 4. 
For BCI system, signal decomposition methods always play significant role. I recommend authors to have a look on following article “Motor Imagery EEG Signals Classification Based on Mode Amplitude and Frequency Components Using Empirical Wavelet Transform” See 5. 5. Did authors try to use non-linear features for correct identification in BCI? I recommend authors to include discussion of mean energy, mean Teager-Kaiser energy, SHANNON WAVELET ENTROPY and Log energy entropy. The correlation features we used in the study yielded excellent classification results in our previous studies on this paradigm, and similar features are frequently used in other SSVEP BCI systems. We are always interested in evaluating alternative methods which can increase the classification accuracy. The focus of our current manuscript however is on dynamic stopping; therefore, we suggest comparing different methods for feature extraction in a separate study. 6. The combination of signal decomposition with dimension reduction techniques along with neural networks can be one effective tool for both subject dependent and independent BCI frameworks. Authors need to discuss this issue; detail may be found in “Exploiting dimensionality reduction and neural network techniques for the development of expert brain–computer interfaces”. The recommended article proposes LDA as one of several methods for dimension reduction, and we employ this method in our approach for classification. We would like to point out that the features calculated by eq. (1) are 10-dimensional. Compared to the problem which is analyzed in the recommended article, this already is a rather low-dimensional feature space. Whereas the development of a user-independent version of the spatially-coded SSVEP BCI is an interesting endeavor, it was not the focus of the current study. 7. The authors recorded dataset from very few subjects. Is it possible to collect dataset from more subjects? If it is not possible, at least a discussion is needed for a framework tested on 58 subjects. See following article “Towards the development of versatile brain-computer interfaces” A main difference between the recommended article and our study is that we ran an online experiment with participants in our laboratory. Hence, we would like to compare the number of participants in our study with related studies on dynamic stopping which also included an online experiment: Reference in the manuscript Number of participants 3 11 5 12 6 14 7 12 8 12 24 10 25 10 Our study 14 In the light of these numbers, it seems that the size of our cohort is quite standard. 8. Please provide a comprehensive comparison of your study with the available literature in terms of classification accuracy, number of channels, features, and execution time with the following articles, “A new framework for automatic detection of motor and mental imagery EEG signals for robust BCI systems”, “A Matrix Determinant Feature Extraction Approach for Decoding Motor and Mental Imagery EEG in Subject Specific Tasks”, “Motor imagery BCI classification based on novel two-dimensional modelling in empirical wavelet transform”, “Identification of Motor and Mental Imagery EEG in Two and Multiclass Subject-Dependent Tasks Using Successive Decomposition Index” All the recommended articles seem to study MI BCIs, which involve different task instructions for the operator and employ other signal analysis methods than SSVEP BCIs. 
We think, therefore, that a comparison of our paradigm with (frequency-coded) SSVEP BCIs and in particular BCIs with a dynamic trial duration would be more appropriate. We added a respective table to the Discussion. 9. Please provide the details of future direction and possible solutions to continue this topic. We mention a number of suggestions for future developments and studies at the end of the Discussion. 10. Finally, I suggest authors to sit with English native speaker to improve the writing of proposed work. We overhauled the manuscript, trying to improve style and fix grammar errors. Should concerns about our writing persist, a few examples for which aspects need improvement would be appreciated. Submitted filename: response to reviewers 2.pdf Click here for additional data file. 17 Mar 2022
PONE-D-21-13493R2
Instant classification for the spatially-coded BCI
PLOS ONE Dear Dr. Maye, Thank you for submitting your manuscript to PLOS ONE. After careful consideration, we feel that it has merit but does not fully meet PLOS ONE’s publication criteria as it currently stands. Therefore, we invite you to submit a revised version of the manuscript that addresses the points raised during the review process.
 
Please submit your revised manuscript by May 01 2022 11:59PM. If you will need more time than this to complete your revisions, please reply to this message or contact the journal office at plosone@plos.org. When you're ready to submit your revision, log on to https://www.editorialmanager.com/pone/ and select the 'Submissions Needing Revision' folder to locate your manuscript file. Please include the following items when submitting your revised manuscript:
If you would like to make changes to your financial disclosure, please include your updated statement in your cover letter. Guidelines for resubmitting your figure files are available below the reviewer comments at the end of this letter. A rebuttal letter that responds to each point raised by the academic editor and reviewer(s). You should upload this letter as a separate file labeled 'Response to Reviewers'. A marked-up copy of your manuscript that highlights changes made to the original version. You should upload this as a separate file labeled 'Revised Manuscript with Track Changes'. An unmarked version of your revised paper without tracked changes. You should upload this as a separate file labeled 'Manuscript'. If applicable, we recommend that you deposit your laboratory protocols in protocols.io to enhance the reproducibility of your results. Protocols.io assigns your protocol its own identifier (DOI) so that it can be cited independently in the future. For instructions see: https://journals.plos.org/plosone/s/submission-guidelines#loc-laboratory-protocols. Additionally, PLOS ONE offers an option for publishing peer-reviewed Lab Protocol articles, which describe protocols hosted on protocols.io. Read more information on sharing protocols at https://plos.org/protocols?utm_medium=editorial-email&utm_source=authorletters&utm_campaign=protocols. We look forward to receiving your revised manuscript. Kind regards, Saeed Mian Qaisar, Ph.D. Academic Editor PLOS ONE Journal Requirements: Please review your reference list to ensure that it is complete and correct. If you have cited papers that have been retracted, please include the rationale for doing so in the manuscript text, or remove these references and replace them with relevant current references. Any changes to the reference list should be mentioned in the rebuttal letter that accompanies your revised manuscript. If you need to cite a retracted article, indicate the article’s retracted status in the References list and also include a citation and full reference for the retraction notice. Additional Editor Comments: Dear Author, Reviewers have now commented on your paper. They are advising that you revise your manuscript. If you are prepared to undertake the work required, I would be please to reconsider my decision. The reviewer comments can be found at the end of this email or can be accessed online. Reviewers' comments: Reviewer's Responses to Questions Comments to the Author 1. If the authors have adequately addressed your comments raised in a previous round of review and you feel that this manuscript is now acceptable for publication, you may indicate that here to bypass the “Comments to the Author” section, enter your conflict of interest statement in the “Confidential to Editor” section, and submit your "Accept" recommendation. Reviewer #3: (No Response) ********** 2. Is the manuscript technically sound, and do the data support the conclusions? The manuscript must describe a technically sound piece of scientific research with data that supports the conclusions. Experiments must have been conducted rigorously, with appropriate controls, replication, and sample sizes. The conclusions must be drawn appropriately based on the data presented. Reviewer #3: Yes ********** 3. Has the statistical analysis been performed appropriately and rigorously? Reviewer #3: Yes ********** 4. Have the authors made all data underlying the findings in their manuscript fully available? 
The PLOS Data policy requires authors to make all data underlying the findings described in their manuscript fully available without restriction, with rare exception (please refer to the Data Availability Statement in the manuscript PDF file). The data should be provided as part of the manuscript or its supporting information, or deposited to a public repository. For example, in addition to summary statistics, the data points behind means, medians and variance measures should be available. If there are restrictions on publicly sharing data (e.g., participant privacy or use of data from a third party), those must be specified.

Reviewer #3: No

5. Is the manuscript presented in an intelligible fashion and written in standard English? PLOS ONE does not copyedit accepted manuscripts, so the language in submitted articles must be clear, correct, and unambiguous. Any typographical or grammatical errors should be corrected at revision, so please note any specific errors here.

Reviewer #3: Yes

6. Review Comments to the Author
Please use the space provided to explain your answers to the questions above. You may also include additional comments for the author, including concerns about dual publication, research ethics, or publication ethics. (Please upload your review as an attachment if it exceeds 20,000 characters.)

Reviewer #3: This paper proposes a new online SSVEP-BCI system based on spatially coded SSVEP and dynamic time windows. The system uses CCA- and LDA-based analysis of SSVEP EEG signals to infer the subjects' intentions, and it introduces a dynamic time window based on Bayes' theorem together with a dedicated stopping strategy to vary the EEG data length dynamically. The research is of great significance for the practical application of BCI technology. However, the article leaves some gaps in the details. The specific questions are as follows. I am personally optimistic about the results of these experiments and look forward to the authors' follow-up additions to the paper.

1. Can you describe the workflow of the stimulus interface in detail?
2. We noticed that the experimental results of the best and the worst subjects differ considerably. What is the reason for this?
3. Please list the promotion ratio in Table 2 to improve the readability of the paper.
4. The serial number is not indicated in Figures 5 to 7 in the picture area.
5. Is there a significant difference between the results of the offline test and the online test after using the dynamic time window?
6. A related study needs to be discussed, such as "A Dynamically Optimized SSVEP Brain–Computer Interface (BCI) Speller," IEEE Transactions on Biomedical Engineering, 2015, 62(6): 1447-1456.

7. PLOS authors have the option to publish the peer review history of their article. If published, this will include your full peer review and any attached files. If you choose "no", your identity will remain anonymous but your review may still be made public. Do you want your identity to be public for this peer review? For information about this choice, including consent withdrawal, please see our Privacy Policy.

Reviewer #3: Yes: Erwei Yin

[NOTE: If reviewer comments were submitted as an attachment file, they will be attached to this email and accessible via the submission site. Please log into your account, locate the manuscript record, and check for the action link "View Attachments". If this link does not appear, there are no attachment files.]
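To make the dynamic-time-window mechanism summarized by the reviewer more concrete, below is a minimal sketch in Python of a generic stopping loop for an SSVEP classifier. It is not the authors' implementation: a softmax-based pseudo-posterior with a fixed threshold stands in for the Bayes-based criterion and stopping strategy mentioned above, and the names get_window, cca_correlation and dynamic_stopping_decision, as well as the parameters fs, step_s, max_s, threshold and beta, are illustrative assumptions only.

```python
import numpy as np
from sklearn.cross_decomposition import CCA


def cca_correlation(eeg_window, template):
    # Largest canonical correlation between an EEG window (samples x channels)
    # and a per-target reference template (samples x components).
    cca = CCA(n_components=1)
    x_scores, y_scores = cca.fit_transform(eeg_window, template)
    return np.corrcoef(x_scores[:, 0], y_scores[:, 0])[0, 1]


def dynamic_stopping_decision(get_window, templates, fs,
                              step_s=0.25, max_s=4.0, threshold=0.9, beta=10.0):
    # Grow the data window in steps of step_s seconds. After each step, compute
    # the CCA correlation with every target template, squash the correlations
    # into pseudo-posteriors with a softmax (beta is an illustrative scaling),
    # and return the winning target index and the window length in seconds as
    # soon as the best pseudo-posterior crosses the threshold or the maximum
    # window length is reached.
    t = step_s
    while True:
        window = get_window(int(t * fs))                      # samples x channels
        corrs = np.array([cca_correlation(window, tpl[:window.shape[0]])
                          for tpl in templates])
        post = np.exp(beta * corrs)
        post /= post.sum()
        if post.max() >= threshold or t >= max_s:
            return int(post.argmax()), t
        t += step_s
```

In this sketch, get_window(n) is assumed to return the most recent n EEG samples from the acquisition buffer, and templates holds one reference signal per target (e.g., sinusoids at the stimulation frequency and its harmonics). The loop therefore illustrates only the general idea of trading data length against decision certainty, not the specific criterion used in the manuscript.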
While revising your submission, please upload your figure files to the Preflight Analysis and Conversion Engine (PACE) digital diagnostic tool, https://pacev2.apexcovantage.com/. PACE helps ensure that figures meet PLOS requirements. To use PACE, you must first register as a user. Registration is free. Then, login and navigate to the UPLOAD tab, where you will find detailed instructions on how to use the tool. If you encounter any issues or have any questions when using PACE, please email PLOS at figures@plos.org. Please note that Supporting Information files do not need this step.
8 Apr 2022

Response to reviewers:

1. Can you describe the workflow of the stimulus interface in detail?
The workflow in the training session is described on lines 100-108, and in the online session on lines 114-129, in the section "Stimulation and experimental procedure". The interface only generated the visual flicker (line 91) and a cue (line 101, Figure 1) which directed the participant's gaze to the respective target location (numbered 1-5, see Fig. 1) relative to the flicker. We cannot think of any other details about the stimulation or the workflow that might be missing, but we will readily add them if the reviewer makes a more specific request.

2. We noticed that the experimental results of the best and the worst subjects differ considerably. What is the reason for this?
Several studies have investigated possible causes of the strong variability in BCI performance across people (e.g., Allison et al., BCI Demographics: How Many (and What Kinds of) People Can Use an SSVEP BCI?, IEEE TNSRE 18(2), 2010), but as far as we know, no factors have been identified. We made an attempt to contribute to this line of research and show in Fig. 7 that the ITR is related to the individual SSVEP response strength of the participants. Of course, the next question then is what the reason for this difference in response strength might be. We regret that we do not have an answer. We nevertheless tried to better understand the observed performance difference and rearranged the bar plot in Fig. 5 by sorting the participants according to their ITR in the online session. The new arrangement suggests that the two subjects with the lowest performance are part of a continuum rather than outliers.

3. Please list the promotion ratio in Table 2 to improve the readability of the paper.
We added the corresponding values to Table 2.

4. The serial number is not indicated in Figures 5 to 7 in the picture area.
This was a problem during the submission process which we have now solved, so no changes were necessary in the manuscript.

5. Is there a significant difference between the results of the offline test and the online test after using the dynamic time window?
We added a comparison of classification accuracies and ITRs to the Results section.

6. A related study needs to be discussed, such as "A Dynamically Optimized SSVEP Brain–Computer Interface (BCI) Speller," IEEE Transactions on Biomedical Engineering, 2015, 62(6): 1447-1456.
The manuscript discusses this article on lines 27-29, lists the results in Table 2 and cites it in the references [3]. Nevertheless, we added a few more details on lines 30-32.

Submitted filename: response to reviewers 3.pdf

12 Apr 2022

PONE-D-21-13493R3
Instant classification for the spatially-coded BCI

Dear Dr. Maÿe,

We're pleased to inform you that your manuscript has been judged scientifically suitable for publication and will be formally accepted for publication once it meets all outstanding technical requirements. Within one week, you'll receive an e-mail detailing the required amendments. When these have been addressed, you'll receive a formal acceptance letter and your manuscript will be scheduled for publication. An invoice for payment will follow shortly after the formal acceptance. To ensure an efficient process, please log into Editorial Manager at http://www.editorialmanager.com/pone/, click the 'Update My Information' link at the top of the page, and double check that your user information is up-to-date.
If you have any billing related questions, please contact our Author Billing department directly at authorbilling@plos.org.

If your institution or institutions have a press office, please notify them about your upcoming paper to help maximize its impact. If they'll be preparing press materials, please inform our press team as soon as possible, and no later than 48 hours after receiving the formal acceptance. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information, please contact onepress@plos.org.

Kind regards,
Saeed Mian Qaisar, Ph.D.
Academic Editor
PLOS ONE

Additional Editor Comments (optional):
Dear Authors, I am pleased to tell you that your work has now been accepted for publication in PLOS ONE. Thank you for submitting your work to PLOS ONE.

20 Apr 2022

PONE-D-21-13493R3
Instant classification for the spatially-coded BCI

Dear Dr. Maÿe:

I'm pleased to inform you that your manuscript has been deemed suitable for publication in PLOS ONE. Congratulations! Your manuscript is now with our production department. If your institution or institutions have a press office, please let them know about your upcoming paper now to help maximize its impact. If they'll be preparing press materials, please inform our press team within the next 48 hours. Your manuscript will remain under strict press embargo until 2 pm Eastern Time on the date of publication. For more information please contact onepress@plos.org. If we can help with anything else, please email us at plosone@plos.org. Thank you for submitting your work to PLOS ONE and supporting open access.

Kind regards,
PLOS ONE Editorial Office Staff
on behalf of Dr. Saeed Mian Qaisar
Academic Editor
PLOS ONE
References: 23 in total

1.  Brain-computer interfaces for communication and control.

Authors:  Jonathan R Wolpaw; Niels Birbaumer; Dennis J McFarland; Gert Pfurtscheller; Theresa M Vaughan
Journal:  Clin Neurophysiol       Date:  2002-06       Impact factor: 3.708

2.  Effect of higher frequency on the classification of steady-state visual evoked potentials.

Authors:  Dong-Ok Won; Han-Jeong Hwang; Sven Dähne; Klaus-Robert Müller; Seong-Whan Lee
Journal:  J Neural Eng       Date:  2015-12-22       Impact factor: 5.379

3.  Utilizing Retinotopic Mapping for a Multi-Target SSVEP BCI With a Single Flicker Frequency.

Authors:  Alexander Maye; Dan Zhang; Andreas K Engel
Journal:  IEEE Trans Neural Syst Rehabil Eng       Date:  2017-04-25       Impact factor: 3.802

4.  Optimizing event-related potential based brain-computer interfaces: a systematic evaluation of dynamic stopping methods.

Authors:  Martijn Schreuder; Johannes Höhne; Benjamin Blankertz; Stefan Haufe; Thorsten Dickhaus; Michael Tangermann
Journal:  J Neural Eng       Date:  2013-05-20       Impact factor: 5.379

5.  Use of high-frequency visual stimuli above the critical flicker frequency in a SSVEP-based BMI.

Authors:  Takeshi Sakurada; Toshihiro Kawase; Tomoaki Komatsu; Kenji Kansaku
Journal:  Clin Neurophysiol       Date:  2014-12-23       Impact factor: 3.708

6.  A novel training-free recognition method for SSVEP-based BCIs using dynamic window strategy.

Authors:  Yonghao Chen; Chen Yang; Xiaogang Chen; Yijun Wang; Xiaorong Gao
Journal:  J Neural Eng       Date:  2021-03-08       Impact factor: 5.379

7.  Performance assessment in brain-computer interface-based augmentative and alternative communication.

Authors:  David E Thompson; Stefanie Blain-Moraes; Jane E Huggins
Journal:  Biomed Eng Online       Date:  2013-05-16       Impact factor: 2.819

8.  Asynchronous BCI control using high-frequency SSVEP.

Authors:  Pablo F Diez; Vicente A Mut; Enrique M Avila Perona; Eric Laciar Leber
Journal:  J Neuroeng Rehabil       Date:  2011-07-14       Impact factor: 4.262

9.  Dynamic time window mechanism for time synchronous VEP-based BCIs-Performance evaluation with a dictionary-supported BCI speller employing SSVEP and c-VEP.

Authors:  Felix Gembler; Piotr Stawicki; Abdul Saboor; Ivan Volosyak
Journal:  PLoS One       Date:  2019-06-13       Impact factor: 3.240

