OBJECTIVE: When researchers evaluate brain-computer interface (BCI) systems, they want quantitative answers to questions such as: How good is the system's performance? How good does it need to be? And is it capable of reaching the desired level in the future? In response to the current lack of objective, quantitative, study-independent approaches, we introduce methods that help to address such questions. We identified three challenges: (I) the need for efficient measurement techniques that adapt rapidly and reliably to capture a wide range of performance levels; (II) the need to express results in a way that allows comparison between similar but non-identical tasks; (III) the need to measure the extent to which certain components of a BCI system (e.g. the signal processing pipeline) not only support BCI performance, but may also restrict the maximum level it can reach.

APPROACH: For challenge (I), we developed an automatic staircase method that adjusted task difficulty adaptively along a single abstract axis. For challenge (II), we used the rate of information gain between two Bernoulli distributions: one reflecting the observed success rate, the other reflecting chance performance estimated by a matched random-walk method. This measure includes Wolpaw's information transfer rate as a special case, but addresses the latter's limitations, including its restriction to item-selection tasks. To validate our approach and address challenge (III), we compared four healthy subjects' performance using an EEG-based BCI, a 'Direct Controller' (a high-performance hardware input device), and a 'Pseudo-BCI Controller' (the same input device, but with its control signals processed by the BCI signal processing pipeline).

MAIN RESULTS: Our results confirm the repeatability and validity of our measures, and indicate that our BCI signal processing pipeline reduced attainable performance by about 33% (21 bits min⁻¹).

SIGNIFICANCE: Our approach provides a flexible basis for evaluating BCI performance and its limitations across a wide range of tasks and task difficulties.
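The abstract states only that task difficulty was adjusted adaptively along a single abstract axis; the actual staircase rules (step sizes, up/down criteria, convergence target) are not given here. As a minimal sketch of the general idea, the following Python implements a generic fixed-step 1-up/1-down staircase; run_staircase, respond, and the step parameters are illustrative names and values, not the authors' procedure.

    import random

    def run_staircase(respond, n_trials=40, difficulty=1.0,
                      step_up=0.1, step_down=0.1):
        # Generic adaptive staircase: raise the difficulty after each
        # success and lower it after each failure, so that the task
        # tracks the subject's current performance level.
        # respond(difficulty) must return True for a successful trial,
        # False otherwise.
        history = []
        for _ in range(n_trials):
            success = respond(difficulty)
            history.append((difficulty, success))
            # A 1-up/1-down rule with equal steps converges near the
            # 50% success level.
            difficulty = max(0.0, difficulty + (step_up if success else -step_down))
        return history

    # Toy demonstration: a simulated subject whose probability of
    # success falls off linearly with difficulty (purely illustrative).
    if __name__ == '__main__':
        subject = lambda d: random.random() < max(0.0, 1.0 - 0.4 * d)
        for d, ok in run_staircase(subject, n_trials=10):
            print('difficulty=%.2f success=%s' % (d, ok))

A fixed-step rule is only one of many staircase variants; the point of any of them is to keep the task near the difficulty region where each trial is most informative, rather than fixing difficulty per study.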
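The measure for challenge (II) can be made concrete under one assumption: reading 'rate of information gain between two Bernoulli distributions' as the Kullback-Leibler divergence from the chance distribution to the observed one, divided by trial duration. The sketch below (function names illustrative; the paper's exact normalization may differ) also verifies the stated claim that Wolpaw's information transfer rate emerges as a special case, here when chance performance equals 1/N for an N-item selection task.

    from math import log2

    def bernoulli_kl(p, q):
        # Kullback-Leibler divergence D(Bernoulli(p) || Bernoulli(q)),
        # in bits; 0 * log2(0/b) is taken as 0, its limiting value.
        def term(a, b):
            return 0.0 if a == 0.0 else a * log2(a / b)
        return term(p, q) + term(1.0 - p, 1.0 - q)

    def information_gain_rate(p_observed, p_chance, trial_seconds):
        # Per-trial information gain relative to chance, converted to
        # bits per minute. In the authors' approach, p_chance would
        # come from the matched random-walk estimate; here it is
        # simply a given number.
        return bernoulli_kl(p_observed, p_chance) * 60.0 / trial_seconds

    # Sanity check: with chance level 1/N, the per-trial gain equals
    # Wolpaw's formula log2(N) + p*log2(p) + (1-p)*log2((1-p)/(N-1)).
    if __name__ == '__main__':
        N, p = 4, 0.9
        kl = bernoulli_kl(p, 1.0 / N)
        wolpaw = log2(N) + p * log2(p) + (1 - p) * log2((1 - p) / (N - 1))
        print('KL = %.4f bits, Wolpaw = %.4f bits' % (kl, wolpaw))

Both quantities evaluate to the same number (about 1.37 bits for N = 4, p = 0.9). Measuring gain against an empirically estimated chance level, rather than against a fixed 1/N, is what lets the measure extend beyond item-selection tasks.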