| Literature DB >> 35957360 |
Natasha Padfield, Kenneth Camilleri, Tracey Camilleri, Simon Fabri, Marvin Bugeja.
Abstract
Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) provide a novel approach for controlling external devices. BCI technologies can be important enabling technologies for people with severe mobility impairment. Endogenous paradigms, which depend on user-generated commands and do not need external stimuli, can provide intuitive control of external devices. This paper discusses BCIs to control various physical devices such as exoskeletons, wheelchairs, mobile robots, and robotic arms. These technologies must be able to navigate complex environments or execute fine motor movements. Brain control of these devices presents an intricate research problem that merges signal processing and classification techniques with control theory. In particular, obtaining strong classification performance for endogenous BCIs is challenging, and EEG decoder output signals can be unstable. These issues present myriad research questions that are discussed in this review paper. This review covers papers published until the end of 2021 that presented BCI-controlled dynamic devices. It discusses the devices controlled, EEG paradigms, shared control, stabilization of the EEG signal, traditional machine learning and deep learning techniques, and user experience. The paper concludes with a discussion of open questions and avenues for future work.
Keywords: brain–computer interface (BCI); brain–machine interface (BMI); control; electroencephalogram (EEG); endogenous; motor imagery (MI)
Year: 2022 PMID: 35957360 PMCID: PMC9370865 DOI: 10.3390/s22155802
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. A summary of the paper selection process formatted using the template available at [6].
Figure 2. A year-by-year breakdown of the reviewed literature.
Figure 3. A pie chart showing a breakdown of all the different BCI-controlled devices in the literature reviewed.
A summary of EEG paradigms, devices, and control functions in the literature for single-paradigm systems.
| Paper | Paradigm | Device | No. of Classes | Classes and Control Function | Accuracy |
|---|---|---|---|---|---|
| Choi, 2020 | Traditional MI | Lower-limb exoskeleton | 3 | Gait MI—walking; sitting MI—sitting down; idle state—no action | 86% |
| Gordleeva, 2020 | Traditional MI | Lower-limb exoskeleton | 2 | MI of dominant foot—walking; idle—standing still | 78% |
| Wang, 2018 | Traditional MI | Lower-limb exoskeleton | 3 | Left-hand MI—sitting; right-hand MI—standing up; feet MI—walking | >70% |
| Liu, 2017 | Traditional MI | Lower-limb exoskeleton | 2 | Left-hand MI—moving left leg; right-hand MI—moving right leg | >70% |
| Ang, 2017 | Traditional MI | Haptic robot | 2 | MI in the stroke-affected hand; idle state | ~74% |
| Cantillo-Negrete, 2018 | Traditional MI | Orthotic hand | 2 | MI in dominant hand (healthy subjects) or stroke-affected hand (patients)—moving; idle state—do nothing | >60% |
| Xu, 2020 | Traditional MI | Robotic arm | 4 | Left-hand MI—turn left; right-hand MI—turn right; both hands—move up; relaxed hands—move down | 78% (left vs. right and up vs. down experiments); 66% (left, right, up, and down experiments) |
| Zhang, 2019 | Traditional MI | Robotic arm | 3 | Left-hand MI—turn left; right-hand MI—turn right; tongue MI—move forward | 73% |
| Xu, 2019 | Traditional MI | Robotic arm | 2 | Left-hand MI—left planar movements; right-hand MI—right planar movements | >70% |
| Edelman, 2019 | Traditional MI | Robotic hand | 4 | Left-hand MI—left planar movements; right-hand MI—right planar movements; both-hands MI—upward planar movements; rest—downward planar movements | N/A |
| Spychala, 2020 | Traditional MI | Robotic hand | 3 | MI of hand flexion or extension for similar behavior in the robotic hand; idle state—maintain hand posture | ~60% |
| Moldoveanu, 2019 | Traditional MI | Robotic glove | 2 | Left-hand MI and right-hand MI—controlled movement of robotic glove | N/A |
| Zhuang, 2021 | Traditional MI | Mobile robot | 4 | Left MI—turn left; right MI—turn right; push MI—accelerate; pull MI—decelerate | N/A (>80% offline) |
| Batres-Mendoza, 2021 | Traditional MI | Mobile robot | 3 | Left-hand MI—turn left; right-hand MI—turn right; idle state—maintain behavior | 98% |
| Tonin, 2019 | Traditional MI | Mobile robot | 2 | Left-hand MI—turn left; right-hand MI—turn right. Idle rest state inferred from the probability output of the classifier. | ~80% |
| Hasbulah | Traditional MI | Mobile robot | 4 | Left-hand MI—turn left; right-hand MI—turn right; left-foot movement—move forward; right-foot movement—move backward | 64% |
| Ai, 2019 | Traditional MI | Mobile robot | 4 | Left-hand MI—turn left; right-hand MI—turn right; both-feet MI—move forward; tongue MI—move backward | 80% |
| Jafarifarmand, 2019 | Traditional MI | Mobile robot | 2 | Left-hand MI—turn left; right-hand MI—turn right | N/A |
| Andreu-Perez, 2018 | Traditional MI | Mobile robot | 2 | Left-hand MI—turn right; right-hand MI—turn left. If the probability output of the classifier was below 80%, the current state was maintained. | 86% |
| Cardoso, 2021 | Traditional MI | Pedaling machine | 2 | Pedaling MI—cycle; idle state—remain stationary | N/A |
| Romero-Laiseca, 2020 | Traditional MI | Pedaling machine | 2 | Pedaling MI—cycle; idle state—remain stationary | ~100% (healthy subjects); ~41.2–91.67% (stroke patients) |
| Gao, 2021 | Traditional MI | Prosthetic leg | 3 | Left-hand MI—walking on terrain; right-hand MI—ascending stairs; foot MI—descending stairs | N/A |
| Yu, 2018 | Sequential MI | Wheelchair | 6 | Left hand, right hand, and idle state identified by the classifier. Four commands obtained by the sequential paradigm were used to execute six functions through a finite-state machine: start, stop, accelerate, decelerate, turn left, turn right. | 94% |
| Jeong, 2020 | Single-limb MI | Robotic arm | 6 | MI of the same arm moving up, down, left, right, backward, and forward, imitated by the robotic arm. | 66% (reach-and-grab task); 47% (beverage-drinking task) |
| Junwei, 2018 | Spelling | Wheelchair | 4 | Spell the desired commands: FORWARD, BACKWARD, LEFT, RIGHT | 93% |
| Kobayashi, 2018 | Self-induced emotive state | Wheelchair | 4 | Delight—move forward; anger—turn left; sorrow—turn right; pleasure—move backward. | N/A |
| Ji, 2021 | Facial movement | Robotic arm | 3 | Double blink, long blink, and normal blink (idle state) detected to navigate VR menus and interfaces controlling a robotic arm | N/A |
| Li, 2018 | Facial movement | Prosthetic hand | 3 | Raised brow—hand opened; furrowed brow—hand closed; right smirk—rightward wrist rotation; left smirk—leftward wrist rotation | 81% |
| Banach, 2021 | Sequential facial movement | Wheelchair | 7 | Eyes-open and eyes-closed states identified by the classifier. Seven commands generated using three-component encodings of the states: turn left, turn right, turn left 45°, accelerate, decelerate, forward, backward. | N/A |
| Alhakeem, 2020 | Sequential facial movement | Wheelchair | 6 | Eye blinks and jaw clenches used to create six commands using three-component encodings: forward, backward, stop, left, right, keep moving. | 70% |
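The sequential paradigms in the table above (Yu, 2018; Banach, 2021; Alhakeem, 2020) expand a small set of reliably detected states into a larger command vocabulary by encoding fixed-length sequences of states. A minimal sketch of this idea follows; the state labels and command table are illustrative assumptions, not the encodings used in any of the reviewed papers.

```python
# Hypothetical three-component encoding in the spirit of the sequential
# facial-movement paradigms: each command is a fixed-length sequence of
# detected binary states ("O" = eyes open, "C" = eyes closed here).
COMMAND_TABLE = {
    ("O", "O", "C"): "forward",
    ("O", "C", "O"): "backward",
    ("O", "C", "C"): "turn_left",
    ("C", "O", "O"): "turn_right",
    ("C", "O", "C"): "accelerate",
    ("C", "C", "O"): "decelerate",
}

def decode_commands(state_stream):
    """Group the classifier's state stream into triples and map each
    triple to a device command; unrecognized triples are discarded."""
    commands = []
    for i in range(0, len(state_stream) - 2, 3):
        triple = tuple(state_stream[i:i + 3])
        if triple in COMMAND_TABLE:
            commands.append(COMMAND_TABLE[triple])
    return commands
```

With this encoding, a two-state classifier yields up to eight distinct commands per three detections, e.g. `decode_commands(["O", "O", "C", "C", "O", "O"])` returns `["forward", "turn_right"]`.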
A summary of EEG paradigms, devices, and control functions in the literature for multiparadigm systems.
| Paper | Paradigm | Device | No. of Classes | Classes and Control Function | Accuracy |
|---|---|---|---|---|---|
| Ortiz, 2020 | Traditional MI + attention | Lower-limb exoskeleton | 2 | Walk MI—walking; idle—stand still | Traditional MI: 63%; |
| Tang, 2020 | Traditional MI + facial movement | Wheelchair | 4 | Left-hand MI—turn left; right-hand MI—turn right; eye blink—go straight | 84% |
| Kucukyildiz, 2017 | Mental arithmetic + reading | Wheelchair | 3 | Idle—turn left; mental arithmetic—turn right | N/A |
Figure 4. A taxonomy of the shared-control approaches proposed in the reviewed literature.
Figure 5. A comparison of different false-alarm approaches. The blue bars show the BCI classifier output at the previous time step, and the orange bars show the decision made at the current time step. The example is for a two-class problem in which A denotes the classifier label for one mental state and B is the label for the other state. Each mental state was related to a different movement in the dynamic device. During “no action” phases, movement of the device was paused. In this example, it was assumed that at the start, the BCI classifier was outputting in class A for a long period (more than eight consecutive samples). Four different approaches are presented, namely those by Chae et al. [73], Hortal et al. [74], Ai et al. [54] and Zhuang et al. [52].
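A common thread among the false-alarm approaches compared in Figure 5 is to withhold a device command until the classifier has produced the same label for several consecutive samples, emitting "no action" otherwise. A minimal sketch of this consecutive-agreement rule, assuming a run threshold of eight samples as in the caption's example (the implementation itself is illustrative and not the method of any single cited paper):

```python
def smooth_decisions(labels, required_run=8):
    """Emit a device decision only after the classifier has produced the
    same label for `required_run` consecutive samples; until then emit
    "no action" so the device pauses. Illustrative sketch only."""
    decisions = []
    run_label, run_length = None, 0
    for label in labels:
        if label == run_label:
            run_length += 1
        else:
            run_label, run_length = label, 1
        decisions.append(label if run_length >= required_run else "no action")
    return decisions
```

The trade-off is latency: a larger `required_run` suppresses more spurious classifier flips but delays every genuine state change by the same number of samples.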
Figure 6. A pie chart illustrating the proportion of studies that used traditional machine learning and deep learning techniques.
Figure 7. A bar plot showing the features used in machine learning systems.
Figure 8. A bar plot showing the classifiers used in machine learning systems with indications of the features used with each classifier group.
Figure 9. A histogram showing ranges of the number of subjects included in the studies reviewed. For each range, the upper (right-hand) bound is inclusive and the lower (left-hand) bound is exclusive.
A summary of the papers that included patients and the number of subjects considered.
| Paper | Condition | Number of Subjects |
|---|---|---|
| Spychala, 2020 | Stroke | 7 |
| Romero-Laiseca, 2020 | Stroke | 2 |
| Moldoveanu, 2019 | Stroke | 32 |
| Cantillo-Negrete, 2018 | Stroke | 6 |
| Ang, 2018 | Stroke | 9 |
| Frisoli, 2012 | Stroke | 4 |
| Soekadar, 2016 | Stroke | 6 |
| Do, 2013 | Paraplegia or tetraplegia | 10 |
| Pfurtscheller, 2003 | Paraplegia or tetraplegia | 1 |
| Pfurtscheller, 2001 | Paraplegia or tetraplegia | 1 |
| Kim, 2019 | Spinal injury | 2 |
| Junwei, 2019 | Neurodegenerative disease | 4 |