| Literature DB >> 34306590 |
Yunfa Fu1,2,3,4, Rui Chen1,2, Anmin Gong5, Qian Qian1,2,4, Ning Ding3, Wei Zhang6, Lei Su1,2, Lei Zhao2,7.
Abstract
Brain-computer interaction based on motor imagery (MI) is an important form of brain-computer interface (BCI). Most MI classification methods are based on electroencephalography (EEG); few studies have investigated signal processing for MI based on functional near-infrared spectroscopy (fNIRS), and the classification accuracy of MI-fNIRS methods needs improvement. In this study, a deep belief network (DBN) built from restricted Boltzmann machines (RBMs) was used to classify fNIRS signals of flexion and extension imagery involving the left and right arms. fNIRS signals from 16 channels covering the motor cortex were recorded for each of 10 subjects executing or imagining flexion and extension of the left and right arms. Oxygenated hemoglobin (HbO) concentration was used as the feature to train two RBMs, which were then stacked with a softmax regression output layer to construct the DBN. We also explored the DBN classification accuracy on the test dataset of one subject when the model was trained on datasets from other subjects. The average DBN classification accuracy for flexion and extension movement and imagery involving the left and right arms was 84.35 ± 3.86% and 78.19 ± 3.73%, respectively. For a given subject, a DBN model trained on that subject's own data classified the subject's test dataset better than models trained on other subjects' data. The results show that the DBN algorithm can effectively identify flexion and extension imagery involving the right and left arms from fNIRS signals. This study is expected to serve as a reference for constructing online MI-BCI systems based on DBN and fNIRS.
Year: 2021 PMID: 34306590 PMCID: PMC8263279 DOI: 10.1155/2021/5533565
Source DB: PubMed Journal: J Healthc Eng ISSN: 2040-2295 Impact factor: 2.682
Figure 1. Experimental paradigm. (a) Timing of task prompts and task execution in a trial; (b) timing of a session.
Figure 2. (a) Schematic diagram of the layout of the fNIRS signal acquisition probes (16 channels, 6 transmitter probes, and 8 receiver probes). Black rectangles indicate fNIRS source (transmitter) probes; black circles indicate fNIRS receiver probes; black solid lines indicate fNIRS channels. (b) Experimental setup.
Figure 3. Schematic diagram of a single RBM model.
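To make the RBM in Figure 3 concrete, the following is a minimal NumPy sketch of one contrastive-divergence (CD-1) training step for a Bernoulli RBM. The layer sizes, learning rate, and toy binary data are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 16, 10   # 16 matches the channel count; hidden size is an assumption
W = rng.normal(0, 0.01, size=(n_visible, n_hidden))  # visible-hidden weights
b_v = np.zeros(n_visible)      # visible-unit biases
b_h = np.zeros(n_hidden)       # hidden-unit biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_step(v0, lr=0.05):
    """One CD-1 update on a batch v0 of shape (batch, n_visible)."""
    global W, b_v, b_h
    # Positive phase: P(h = 1 | v0), then sample binary hidden states
    ph0 = sigmoid(v0 @ W + b_h)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Negative phase: reconstruct visible probabilities, then hidden probabilities
    pv1 = sigmoid(h0 @ W.T + b_v)
    ph1 = sigmoid(pv1 @ W + b_h)
    # Gradient approximation: <v h>_data - <v h>_reconstruction
    batch = v0.shape[0]
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / batch
    b_v += lr * (v0 - pv1).mean(axis=0)
    b_h += lr * (ph0 - ph1).mean(axis=0)
    return np.mean((v0 - pv1) ** 2)   # reconstruction error, for monitoring only

data = (rng.random((64, n_visible)) < 0.3).astype(float)  # toy binary data
errs = [cd1_step(data) for _ in range(200)]
```

Reconstruction error falling over the 200 steps indicates the update rule is working; a real run would iterate over mini-batches of HbO features rather than one fixed toy batch.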
Figure 4. Schematic diagram of the DBN structure: V is the visible layer, H denotes the hidden layers, and the top layer is the output layer.
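The structure in Figure 4 (two stacked RBMs plus a softmax output) can be approximated with scikit-learn's `BernoulliRBM` pretrained greedily in a pipeline, with multinomial logistic regression standing in for the softmax layer. This is a sketch under stated assumptions: the toy data, layer sizes, and learning rates are not the paper's, and the pipeline omits the supervised fine-tuning a full DBN would normally apply to the RBM weights.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

rng = np.random.default_rng(0)

# Toy stand-in for HbO features scaled into [0, 1]: two classes with different means
X0 = np.clip(rng.normal(0.25, 0.05, size=(50, 16)), 0, 1)
X1 = np.clip(rng.normal(0.75, 0.05, size=(50, 16)), 0, 1)
X = np.vstack([X0, X1])
y = np.array([0] * 50 + [1] * 50)

# Two stacked RBMs pretrained layer by layer, then a softmax (logistic) output layer
dbn = Pipeline([
    ("rbm1", BernoulliRBM(n_components=10, learning_rate=0.05, n_iter=20, random_state=0)),
    ("rbm2", BernoulliRBM(n_components=10, learning_rate=0.05, n_iter=20, random_state=0)),
    ("softmax", LogisticRegression(max_iter=1000)),
])
dbn.fit(X, y)
train_acc = dbn.score(X, y)
```

`Pipeline.fit` trains each RBM on the transformed output of the layer below it, which is exactly the greedy layer-wise pretraining a DBN uses before the output layer is trained.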
Figure 5. Average fNIRS response and topographic maps. (a) Average fNIRS response for flexion and extension movement involving the right and left arms, and topographic map of HbO concentration in the motor cortex during the first 8 seconds of the task. (b) Average fNIRS response for flexion and extension imagery involving the right and left arms, and topographic map of HbO concentration in the motor cortex during the first 8 seconds of the task. L-HbO and R-HbO are the z-values of HbO concentration for flexion and extension movement or imagery involving the left and right arms, respectively; L-HbR and R-HbR are the corresponding z-values of HbR concentration. L-FE_ME: left-arm flexion and extension execution; R-FE_ME: right-arm flexion and extension execution; L-FE_MI: left-arm flexion and extension imagery; R-FE_MI: right-arm flexion and extension imagery.
DBN classification accuracy (%) for flexion and extension movement or imagery involving the left and right arms.
| Subject number | Flexion and extension movement for right and left arms | Flexion and extension imagery for right and left arms |
|---|---|---|
| S1 | 86.90 | 81.79 |
| S2 | 82.32 | 79.23 |
| S3 | 81.47 | 70.94 |
| S4 | 85.79 | 79.29 |
| S5 | 76.23 | 73.97 |
| S6 | 89.21 | 82.93 |
| S7 | 84.79 | 78.56 |
| S8 | 82.29 | 81.47 |
| S9 | 78.04 | 73.96 |
| S10 | 86.45 | 79.79 |
| Mean | 84.35 ± 3.86% | 78.19 ± 3.73% |
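As a quick check on the summary row, the imagery column's mean and standard deviation can be recomputed from the per-subject values above with the Python standard library (using the population standard deviation, which reproduces the reported figure):

```python
import statistics

# Per-subject imagery accuracies (S1-S10) from the table above
imagery = [81.79, 79.23, 70.94, 79.29, 73.97,
           82.93, 78.56, 81.47, 73.96, 79.79]

mean = statistics.fmean(imagery)
sd = statistics.pstdev(imagery)       # population SD
print(f"{mean:.2f} ± {sd:.2f}")       # prints "78.19 ± 3.73"
```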
Classification accuracy (%) of each DBN model, trained on one subject's training dataset and tested on the test datasets of the other subjects.
| Model | (h1, h2) | S1 | S2 | S3 | S4 | S5 | S6 | S7 | S8 | S9 | S10 |
|---|---|---|---|---|---|---|---|---|---|---|---|
| DBN_S1 | (10, 30) |  | 78.97 | 80.43 | 76.89 | 78.52 | 76.87 | 78.69 | 80.06 | 68.79 | 78.93 |
| DBN_S2 | (10, 10) | 77.92 |  | 76.27 | 75.06 | 77.92 | 75.97 | 77.67 | 74.26 | 75.31 | 69.93 |
| DBN_S3 | (10, 40) | 68.71 | 66.17 |  | 67.46 | 68.71 | 64.93 | 66.56 | 64.48 | 63.26 | 64.43 |
| DBN_S4 | (10, 10) | 79.25 | 77.49 | 76.85 |  | 75.43 | 79.25 | 77.49 | 71.85 | 77.29 | 69.25 |
| DBN_S5 | (10, 50) | 72.06 | 68.16 | 72.79 | 71.93 |  | 69.06 | 61.69 | 69.79 | 69.93 | 66.06 |
| DBN_S6 | (10, 30) | 79.56 | 77.43 | 76.87 | 78.25 | 76.32 |  | 79.23 | 67.26 | 66.76 | 79.31 |
| DBN_S7 | (10, 10) | 73.56 | 76.43 | 76.87 | 68.93 | 69.32 | 77.64 |  | 69.26 | 71.97 | 73.67 |
| DBN_S8 | (10, 30) | 77.06 | 76.83 | 75.97 | 76.43 | 76.12 | 75.34 | 75.56 |  | 76.67 | 77.67 |
| DBN_S9 | (10, 50) | 64.26 | 65.31 | 72.93 | 67.13 | 69.48 | 70.26 | 60.43 | 71.76 |  | 70.77 |
| DBN_S10 | (10, 30) | 75.96 | 77.23 | 69.45 | 77.90 | 65.87 | 74.12 | 75.94 | 76.87 | 76.98 |  |
DBN_S1, DBN_S2, …, DBN_S10 denote DBN models trained on the training sets of subjects S1, S2, …, S10, respectively; (h1, h2) are the numbers of units in the DBN's two hidden layers.
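The cross-subject protocol behind the table can be sketched as a loop: train one model per subject, then score it on every other subject's data. The helper names are hypothetical, the data are random stand-ins for HbO features, and a nearest-centroid classifier substitutes for the DBN to keep the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: per-subject (features, labels); real inputs would be HbO features
subjects = {f"S{i}": (rng.random((40, 16)), rng.integers(0, 2, 40))
            for i in range(1, 11)}

def train(X, y):
    # Stand-in for DBN training: store the mean feature vector of each class
    return {c: X[y == c].mean(axis=0) for c in (0, 1)}

def accuracy(model, X, y):
    # Predict the class whose centroid is nearest, then score against labels
    d = np.stack([np.linalg.norm(X - model[c], axis=1) for c in (0, 1)])
    return float((d.argmin(axis=0) == y).mean())

# Cross-subject grid: each row is one training subject, columns are the other nine
results = {}
for train_s, (Xtr, ytr) in subjects.items():
    model = train(Xtr, ytr)
    results[train_s] = {test_s: accuracy(model, Xte, yte)
                        for test_s, (Xte, yte) in subjects.items()
                        if test_s != train_s}
```

Each `results` row corresponds to one DBN_Si row of the table, with the diagonal (same-subject) entry skipped, mirroring the empty diagonal cells above.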