
A novel strategy for driving car brain-computer interfaces: Discrimination of EEG-based visual-motor imagery.

Zhouzhou Zhou1,2, Anmin Gong3, Qian Qian1,2, Lei Su1,2, Lei Zhao2,4, Yunfa Fu1,2.   

Abstract

A brain-computer interface (BCI) based on kinesthetic motor imagery has the potential to become a groundbreaking technology in clinical settings. However, few studies have focused on BCIs driven by a visual-motor imagery (VMI) paradigm, and VMI-BCI feature extraction methods remain largely unexplored. In this study, a novel VMI-BCI paradigm is proposed with four VMI tasks: imagining a car moving forward, reversing, turning left, and turning right. These mental strategies can naturally control a car or robot to move forward, backward, left, and right. Electroencephalogram (EEG) data were collected from 25 subjects. After baseline correction of the raw EEG, the alpha band was extracted by bandpass filtering, and artifacts were removed by independent component analysis. The average instantaneous energy of the VMI-induced EEG (VMI-EEG) was then calculated using the Hilbert-Huang transform (HHT). Autoregressive model coefficients were extracted to construct a 12-dimensional feature vector, which was fed to a support vector machine suitable for small-sample classification. Six two-class tasks were classified: visual imagination of driving the car forward versus reversing, driving forward versus turning left, driving forward versus turning right, reversing versus turning left, reversing versus turning right, and turning left versus turning right. The average classification accuracy across these two-class tasks was 62.68 ± 5.08%, and the highest classification accuracy was 73.66 ± 6.80%. The results show that the EEG features of the O1 and O2 electrodes in the occipital region, extracted by HHT, are separable for these VMI tasks.
© 2021 Zhou Zhouzhou et al., published by De Gruyter.


Keywords:  EEG; Hilbert–Huang transformation; SVM; brain–computer interface; visual-motor imagery

Year:  2021        PMID: 34900346      PMCID: PMC8633586          DOI: 10.1515/tnsci-2020-0199

Source DB:  PubMed          Journal:  Transl Neurosci        ISSN: 2081-6936            Impact factor:   1.757


Introduction

A brain–computer interface (BCI) decodes the patterns of neural activity generated by a user's perceptual or cognitive activity to control external devices [1]. BCIs can improve the quality of life of patients suffering from severe motor disorders and can also provide communication and control modalities for healthy people. BCIs based on imaginative mental activity are an important form of brain–computer interaction. The traditional imagery task in BCI is motor imagery (MI) [2,3], which requires subjects to feel or recall the movement of specific parts of their body, such as the hands or feet [4,5,6,7,8,9,10]. However, MI mental activities are difficult to learn and control and often require many hours of training. This can burden subjects and reduce BCI users' acceptance. MI-BCI performance is associated with the amount of training subjects receive, yet the classification performance of this type of BCI remains unreliable [4,11] owing to pronounced "BCI illiteracy" [12,13,14], which limits both the performance and the application of the BCI. Many earlier studies have achieved remarkable results on MI mental activity (the neurological principles of the underlying brain structure and function) [15,16,17] and on algorithms for analyzing and decoding MI-related brain signals [15]. However, such BCIs remain largely limited to laboratory demonstrations, and practical application is difficult [16]. MI may therefore be unsuitable for controlling a BCI, especially for patients with motor disorders [15], for whom mentally rehearsing a movement process is difficult. Compared with the above-mentioned MI mental activity, visual-motor imagery (VMI) is a kind of mental activity that is easy to learn and control: it requires subjects to view a specific picture or scene in their mind from a third-person perspective [4,8,9,10].
A novel aspect of VMI is that it requires fewer hours of training [15]; people become proficient quickly because they often carry out such mental activity in daily life. VMI is a natural mental activity, similar to a child picturing their mother's face or an adult picturing a portrait. It is easy to form the image of an object in the mind, to mentally manipulate the object through various movements, and to recall a movie or a social scene from daily life; in particular, it is easy to picture the limb movements of other people. VMI thus contrasts with MI, which requires subjects to mentally rehearse movements of specific parts of their own body without actually executing them. That is, subjects must mentally rehearse an action while simultaneously preventing it from happening [4,5,12,13,14,16] — a contradictory, confrontational process and an unnatural mental activity that is difficult to implement and control, since people rarely perform it in daily life. Compared with MI, VMI is therefore a better mental task for controlling a BCI [15]. Accordingly, this study designs a new VMI-BCI experimental paradigm to identify VMI tasks for BCI. Compared with traditional MI-BCI, few studies have focused on VMI-BCI [15]. Neuper et al. [18] explored kinesthetic MI, VMI, movement execution, and movement observation, extracting frequency band and electrode position as features and classifying them with the distinction-sensitive learning vector quantization algorithm. The average classification accuracy was approximately 80% for movement execution and movement observation, approximately 67% for kinesthetic MI, and approximately 56% for VMI. Azmy and Safri [19] used a BCI based on visual imagery electroencephalogram (EEG) signals to control a robot.
In extracting power spectrum features, they found the F8 electrode in the frontal region to be the best position for detecting VMI EEG, which responds to oscillatory rhythms in the alpha band; however, the classification accuracy and the classifier used were not reported. Sousa et al. [20] classified three visual imagery tasks: static points, dynamic points moving vertically in two directions, and dynamic points moving in four directions (up, down, left, and right). Power spectrum energy features were extracted and classified by a support vector machine (SVM), with an average classification accuracy of 87.64%. Kosmyna et al. [15] classified the visual imagination of predefined flowers and hammers; power spectrum features were extracted and classified by spectrally weighted common spatial patterns, yielding a classification accuracy of 52% for visually imagined flower versus hammer. These VMI-BCI studies limit the extracted features to the power spectrum, so it is worth introducing new feature extraction methods. Traditional EEG feature extraction methods include the autoregressive (AR) model, adaptive models, the wavelet transform, and the common spatial pattern. The Fourier transform fits the original signal with sinusoidal components of different frequencies, and it is difficult for the wavelet method to achieve high resolution in both the time domain and the frequency domain [21]. Compared with the wavelet transform, the Hilbert–Huang transform (HHT) can obtain high-resolution information simultaneously in the time domain and the frequency domain. HHT is a signal processing method proposed by Huang et al. that is suitable for nonlinear and nonstationary signal analysis [21], and EEG is a highly nonlinear and nonstationary signal. Sun et al. [22] showed that HHT-based EEG feature extraction for MI is superior to the traditional wavelet transform and to feature extraction without HHT processing.
Thus, this study attempts to introduce HHT into VMI-BCI feature extraction to verify the effectiveness of the method. The rest of the article is organized as follows. Section 2 introduces the materials and methods in designing a novel VMI-BCI experimental paradigm; Section 3 presents the results; Section 4 provides the discussions; and Section 5 concludes the article.

Materials and methods

Subjects

Using the kinesthetic and visual-motor imagery questionnaire (KVMIQ) [23,24,25,26,27,28,29,30,31,32,33,34,35], 30 healthy subjects (numbered sub1–sub30) with strong visual imagination (questionnaire score ≥70) were recruited for data collection. All subjects were male, right-handed, and aged 25 ± 1 years. Their visual acuity was normal or corrected to normal, and they had no mental illness. None of the subjects had previously taken part in this type of experiment. Informed consent: Informed consent was obtained from all individuals included in this study. Ethical approval: The research involving human subjects complied with all relevant national regulations and institutional policies, was conducted in accordance with the tenets of the Helsinki Declaration, and was approved by the Medical Ethics Committee of Kunming University of Science and Technology.

Experimental paradigm, process, and settings

Visual imagination paradigm design

To control a car or robot to move forward, move backward, turn left, and turn right naturally by mental activity, four VMI tasks were designed: imagining the car moving forward, reversing, turning left, and turning right, as shown in Figure 1. Each block began with a fixation cross on the screen for 10 s, prompting the subjects to start. The cross then disappeared, and the screen showed an animation of the car moving forward for 5 s; the subjects observed the animation (i.e., visual observation) and attended to the car's forward direction. When the prompt animation disappeared, the screen went blank for 5 s, during which the subjects retained a visual mental image of the prompted animation from a third-person perspective (i.e., visual imagination). The screen then presented an animation of the car reversing for 5 s, which the subjects observed while attending to the backward direction; after it disappeared, the screen went blank and the subjects performed visual imagination for 5 s. Visual observation followed by visual imagination was prompted analogously for turning left and turning right. Finally, the subjects rested for 5 s, completing one block. Each participant completed 50 blocks, yielding 50 trials of visual observation and VMI for each of the car moving forward, reversing, turning left, and turning right.
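The block structure described above (10 s fixation, then a 5 s observe / 5 s imagine pair for each of the four directions, then 5 s rest) can be sketched as a simple schedule. The event labels below are illustrative, not from the paper (the original experiment was run in MATLAB):

```python
# Sketch of one experimental block's timing, as described in the text.
# Durations are in seconds; event labels are illustrative assumptions.
FIXATION_S, CUE_S, IMAGERY_S, REST_S = 10, 5, 5, 5
DIRECTIONS = ["forward", "reverse", "left", "right"]

def build_block_schedule():
    """Return (events, total): a list of (event, start_s, duration_s)
    tuples for one block, plus the total block duration in seconds."""
    events, t = [], 0
    events.append(("fixation", t, FIXATION_S)); t += FIXATION_S
    for d in DIRECTIONS:
        events.append((f"observe_{d}", t, CUE_S)); t += CUE_S
        events.append((f"imagine_{d}", t, IMAGERY_S)); t += IMAGERY_S
    events.append(("rest", t, REST_S)); t += REST_S
    return events, t

schedule, total = build_block_schedule()
# One block lasts 10 + 4*(5+5) + 5 = 55 s
```

With 50 blocks per participant, this yields the 50 trials per direction stated above.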
Figure 1

Visual imagination experimental paradigm (animated in the experiment).


Experimental process

Subjects were asked to get enough sleep before the experiment to maintain a good mental state. Before the experiment, the main tester connected the EEG acquisition equipment and launched the relevant software. Subjects were seated in comfortable chairs, about 70 cm from the visual cue screen, with their hands flat on the table. The tester fitted each subject with an EEG cap and injected conductive gel until the electrode impedance fell within a suitable range. The subjects then completed the visual observation and visual imagery tasks according to the prompts in Figure 1. Subjects were asked to avoid movement and blinking during the VMI tasks.

Experimental settings

The experimental setup is shown in Figure 2. Earlier studies [15,18,20] revealed that VMI is correlated with the right frontal cortex and the parietal cortex. The electrode layout designed for this study is shown in Figure 2(a): Fp2, F7, F8, F3, F4, Fc3, Fc4, C3, C4, P3, P4, O1, O2, Fz, Fcz, Cz, Pz, and Oz are the 18 recording channels, and A1 and A2 are the reference electrodes. Figure 2(b) shows the experimental setup. A TCL 24-inch LCD presents the VMI tasks, a Lenovo ThinkPad computer running Windows 10 with MATLAB installed is used for data acquisition and processing, and the EEG amplifier is the NT9200 series from Beijing Zhongke Xintuo Instrument Co., Ltd. (Beijing, China), with a sampling rate of 1,000 Hz and a 45 Hz low-pass filter.
Figure 2

Electrode layout and experimental setup: (a) electrode layout (the reference electrodes are A1 and A2) and (b) experimental setup (a: subject; b: EEG amplifier; c: data acquisition and processing computer; d: screen prompting the visual imagination tasks).


Visual imagination EEG signal processing

Data preparation

The collected VMI EEG data were inspected visually, and the data of five subjects with serious EEG contamination were removed. The EEG data of each remaining subject, collected under the different tasks, were then extracted. The duration of each task was 5 s. To avoid interference from other signals, only the segment from 1 s (corresponding to 0 s in Figure 3(a)) to 4 s after task onset was extracted and analyzed. Figure 3(a) shows the original VMI EEG signals of the O1 and O2 channels during forward and reverse driving of the car.
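At the 1,000 Hz sampling rate described above, keeping the 1–4 s window of each 5 s task amounts to slicing 3,000 samples out of each epoch. A minimal sketch (synthetic array shapes are illustrative; the original pipeline was in MATLAB):

```python
import numpy as np

FS = 1000  # sampling rate (Hz), as in the experimental setup

def extract_analysis_window(epoch, fs=FS, start_s=1.0, end_s=4.0):
    """Keep only the 1-4 s portion of a (channels, samples) task epoch."""
    i0, i1 = int(start_s * fs), int(end_s * fs)
    return epoch[:, i0:i1]

# Example: one 5 s epoch for the O1 and O2 channels (simulated data)
epoch = np.random.randn(2, 5 * FS)
window = extract_analysis_window(epoch)
# window.shape == (2, 3000): 3 s of data per channel
```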
Figure 3

The original EEG signal of VMI, the total average waveform contrast of rest and VMI, the waveform after preprocessing, and the classification result. (a) The original EEG signals of the O1 and O2 channels during visual imagery of the car driving forward versus reversing (a: original EEG signal for driving forward; b: original EEG signal for reversing). (b) A comparison of the total average waveforms of the resting state versus visual imagination. (c) The waveforms of the O1 and O2 channels after baseline drift correction, 8–13 Hz bandpass filtering, and ICA processing during visually imagined forward versus reverse driving (a: processed waveform for driving forward; b: processed waveform for reversing). (d) The average classification accuracy of sub1 for visually imagined forward versus reverse driving.


Preprocessing

First, baseline drift in the extracted VMI EEG signal is corrected to eliminate deviation from the baseline [13]. According to Kosmyna et al. [15], the visual imagery EEG signal is related to the alpha band, so an elliptic bandpass filter of 8–13 Hz is applied. Then, independent component analysis (ICA) [36,37,38,39] is used to remove electro-ocular and electromyographic artifacts. Figure 3(c) shows the waveform after this preprocessing.
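The alpha-band step can be reproduced with a standard elliptic bandpass design. A sketch in Python/SciPy (the filter order and ripple values are assumptions, since the paper does not report them, and the original work used MATLAB):

```python
import numpy as np
from scipy.signal import ellip, filtfilt

FS = 1000  # sampling rate (Hz)

# 4th-order elliptic bandpass, 8-13 Hz; the 0.5 dB passband ripple and
# 40 dB stopband attenuation are illustrative choices, not from the paper.
b, a = ellip(4, 0.5, 40.0, [8.0, 13.0], btype="bandpass", fs=FS)

def alpha_band(x):
    """Zero-phase 8-13 Hz filtering of a 1-D EEG trace."""
    return filtfilt(b, a, x)

# Quick check: a 10 Hz component passes, a 30 Hz component is suppressed
t = np.arange(0, 3, 1 / FS)
x = np.sin(2 * np.pi * 10 * t) + np.sin(2 * np.pi * 30 * t)
y = alpha_band(x)
```

Zero-phase filtering (filtfilt) avoids shifting the alpha rhythm in time, which matters when the 1–4 s analysis window is extracted afterward.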

HHT-based feature extraction and SVM classification

The HHT method comprises empirical mode decomposition (EMD) and Hilbert spectrum analysis (HSA). EMD obtains intrinsic mode functions (IMFs) using the mean of the upper and lower envelopes of the time series. The EMD process is as follows. First, all maxima and minima of the input signal are located; the maxima and minima are each fitted with a cubic spline to obtain the upper and lower envelope curves, the mean of the two envelopes is computed, and the difference h between the analyzed signal and this mean is obtained. Second, h is examined against the IMF conditions; if it satisfies them, h is taken as the first IMF. Otherwise, the first two steps are repeated until, at the kth iteration, the IMF conditions are satisfied, yielding the first IMF, and the difference r between the original signal and this IMF is computed. Finally, r is decomposed in the same way until the remaining r is a monotonic signal or has only one pole [22]. From this analysis, the original signal can be expressed as in equation (1) [22]:

s(t) = \sum_{i=1}^{N} C_i(t) + R_N(t),  (1)

where s(t) is the original signal, C_i(t) is the IMF component obtained by the ith sifting, N is the number of IMFs, and R_N(t) is the final residual component. After all extractable IMFs are obtained by EMD, HSA is performed. Equation (2) [22] is the Hilbert transform of each IMF component:

H[C_i(t)] = \frac{1}{\pi}\,\mathrm{P.V.}\!\int_{-\infty}^{\infty} \frac{C_i(\tau)}{t-\tau}\,d\tau,  (2)

The analytic signal of each IMF is as shown in equation (3) [22]:

z_i(t) = C_i(t) + jH[C_i(t)] = a_i(t)\,e^{j\phi_i(t)},  (3)

The instantaneous amplitude and the instantaneous phase are obtained from equations (4) and (5), respectively [22]:

a_i(t) = \sqrt{C_i^2(t) + H^2[C_i(t)]},  (4)

\phi_i(t) = \arctan\frac{H[C_i(t)]}{C_i(t)},  (5)

The average instantaneous energy is obtained from the instantaneous amplitudes according to equations (6) and (7) [22]:

E_i(t) = a_i^2(t),  (6)

\bar{E} = \frac{1}{T}\int_0^T a_i^2(t)\,dt,  (7)

where the left-hand sides of equations (6) and (7) are the instantaneous energy and the average instantaneous energy, respectively. After EMD of the VMI EEG signals, the IMFs of each order are shown in Figure 4. Related studies [22] have shown that the first three IMF orders contribute to classification.
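The EMD sifting and the Hilbert-based instantaneous energy described above can be sketched in numpy/SciPy. This is a simplified illustration, not the authors' implementation (which was in MATLAB): the fixed sifting count and the endpoint anchoring of the spline envelopes are pragmatic simplifications.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import hilbert

def _envelope_mean(x):
    """Mean of the cubic-spline upper/lower envelopes, or None if the
    signal has too few interior extrema (i.e., is essentially monotonic)."""
    n = len(x)
    idx = np.arange(1, n - 1)
    maxima = idx[(x[1:-1] > x[:-2]) & (x[1:-1] >= x[2:])]
    minima = idx[(x[1:-1] < x[:-2]) & (x[1:-1] <= x[2:])]
    if len(maxima) < 2 or len(minima) < 2:
        return None
    t = np.arange(n)
    # Anchor both splines at the signal endpoints (a simple boundary choice).
    upper = CubicSpline(np.r_[0, maxima, n - 1], np.r_[x[0], x[maxima], x[-1]])(t)
    lower = CubicSpline(np.r_[0, minima, n - 1], np.r_[x[0], x[minima], x[-1]])(t)
    return (upper + lower) / 2.0

def emd(x, max_imfs=8, sift_iters=10):
    """Simplified EMD: a fixed number of sifting iterations per IMF."""
    r = np.asarray(x, float).copy()
    imfs = []
    for _ in range(max_imfs):
        if _envelope_mean(r) is None:   # residual is monotonic: stop
            break
        h = r.copy()
        for _ in range(sift_iters):
            m = _envelope_mean(h)
            if m is None:
                break
            h = h - m
        imfs.append(h)
        r = r - h
    return imfs, r

def mean_instantaneous_energy(imf):
    """Time-averaged instantaneous energy via the Hilbert transform
    (equations (4), (6), and (7) in the text)."""
    a = np.abs(hilbert(imf))   # instantaneous amplitude a_i(t)
    return np.mean(a ** 2)

# Example: decompose a two-tone signal; by construction, the IMFs plus
# the residual reconstruct the input exactly.
t = np.linspace(0, 1, 1000, endpoint=False)
x = np.sin(2 * np.pi * 3 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
imfs, residual = emd(x)
```

Because each sifting step only subtracts envelope means from the running residual, the decomposition telescopes: summing the IMFs and the residual always recovers the original signal.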
The first three IMF orders are combined, and the average instantaneous amplitude is obtained using the Hilbert transform. Sixth-order AR model coefficients AR1–AR6 are extracted using the Burg algorithm, and the 12-dimensional feature vector comprises the AR coefficients of the O1 and O2 channels: {O1AR1, …, O1AR6, O2AR1, …, O2AR6}. An SVM, which is suitable for small-sample classification, is then used for feature classification [40]. Classification was carried out using the MATLAB SVM library function with a Gaussian kernel. For each subject, 90% of the data were used to train the SVM model and 10% to test it, and 10-fold cross-validation was implemented to evaluate performance. Figure 3(d) shows the average classification accuracy of sub1 for visually imagined forward versus reverse driving.
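The Burg AR step can be sketched as follows. This is an illustrative numpy implementation of the Burg recursion (the paper used MATLAB's built-in routines), shown building the 12-dimensional O1/O2 feature vector; the channel data here are synthetic:

```python
import numpy as np

def burg_ar(x, order):
    """Burg's method: coefficients [a1..a_p] of the AR model
    x[n] + a1*x[n-1] + ... + a_p*x[n-p] = e[n]."""
    x = np.asarray(x, float)
    a = np.array([1.0])
    ef, eb = x[1:].copy(), x[:-1].copy()   # forward/backward prediction errors
    for _ in range(order):
        k = -2.0 * np.dot(ef, eb) / (np.dot(ef, ef) + np.dot(eb, eb))
        a = np.append(a, 0.0)
        a = a + k * a[::-1]                # Levinson-style update of A(z)
        ef, eb = ef[1:] + k * eb[1:], eb[:-1] + k * ef[:-1]
    return a[1:]                           # drop the leading 1

def vmi_feature_vector(o1, o2, order=6):
    """12-D vector {O1AR1..O1AR6, O2AR1..O2AR6} from two channels."""
    return np.concatenate([burg_ar(o1, order), burg_ar(o2, order)])

# Sanity check: Burg recovers the coefficients of a known AR(2) process
rng = np.random.default_rng(1)
n = 20000
e = rng.standard_normal(n)
x = np.zeros(n)
for i in range(2, n):
    x[i] = 0.75 * x[i - 1] - 0.5 * x[i - 2] + e[i]
a_hat = burg_ar(x, 2)                      # approx [-0.75, 0.5]
features = vmi_feature_vector(x[:3000], x[3000:6000])
```

The resulting 12-dimensional vectors would then be passed to an SVM with a Gaussian kernel under 10-fold cross-validation, as described above.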
Figure 4

IMF of each order after EMD decomposition of the VMI EEG signal (the vertical axis represents the amplitude in microvolts).


Results

Figure 3 shows the original EEG signal of VMI, the total average waveform contrast of rest and VMI, the waveform after preprocessing, and the classification result. Figure 3(a) shows the original EEG signals of the O1 and O2 channels during visual imagery of forward versus reverse driving (a: original EEG signal for driving forward; b: original EEG signal for reversing). Figure 3(b) shows a comparison of the total average waveforms of the resting state versus visual imagination. Figure 3(c) shows the waveforms of the O1 and O2 channels after baseline drift correction, 8–13 Hz bandpass filtering, and ICA processing during visually imagined forward versus reverse driving (a: processed waveform for driving forward; b: processed waveform for reversing). Figure 3(d) shows the average classification accuracy of sub1 for visually imagined forward versus reverse driving. In Figure 3(b), the EEG of VMI versus rest shows two obvious voltage reversals of approximately 200 ms at positions a and b. Figure 4 shows the IMFs of each order after EMD of the VMI EEG signal. As shown in Figure 4, when the waveform is decomposed to the eighth layer, only one pole remains in the waveform and the decomposition stops. The first three IMF orders contain the required time-frequency information, whereas the fourth-order IMF contains almost none, consistent with the theoretical analysis in Section 2.3. Table 1 shows the results of a two-way ANOVA of the EEG at each lead under the four visual imagery tasks. The mean EEG values of the 25 subjects were calculated for the four VMI tasks; the two factors were visual imagery task type and lead. Table 1 reveals that the main effects of electrodes Fp2, F7, P3, O1, Fcz, and Pz are significant (P < 0.05), whereas the main effects of the other leads are not.
Table 1

The results of the EEG analysis of each lead in four VMI tasks using two-way ANOVA (FDR correction)

Leads   df   Mean square   F       P
Fp2     3    2811.906      6.879   0.000***
F7      3    1153.996      5.078   0.009**
F8      3    700.248       2.784   0.097
F3      3    496.798       2.476   0.118
F4      3    260.193       1.142   0.413
Fc3     3    256.310       1.472   0.330
Fc4     3    390.191       1.932   0.200
C3      3    158.137       1.191   0.413
C4      3    22.740        0.184   0.907
P3      3    1376.669      5.823   0.006**
P4      3    658.243       2.725   0.097
O1      3    903.591       3.485   0.045*
O2      3    86.344        0.386   0.808
Fz      3    216.104       1.108   0.413
Fcz     3    735.923       3.626   0.043*
Cz      3    480.581       2.078   0.182
Pz      3    1518.675      6.385   0.000***
Oz      3    80.270        0.658   0.650

Notes: *P < 0.05, **P < 0.01, and ***P < 0.001.
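The P values in Table 1 are FDR-corrected, per the caption. A Benjamini–Hochberg step-up adjustment can be sketched as follows (an illustrative implementation; the authors' exact correction procedure is not specified):

```python
import numpy as np

def fdr_bh(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure)."""
    p = np.asarray(pvals, float)
    n = p.size
    order = np.argsort(p)
    scaled = p[order] * n / np.arange(1, n + 1)   # p_(i) * n / i
    # Enforce monotonicity from the largest rank downward
    adj = np.minimum.accumulate(scaled[::-1])[::-1]
    out = np.empty(n)
    out[order] = np.clip(adj, 0.0, 1.0)
    return out
```

A lead is then declared significant when its adjusted P value falls below the chosen level (0.05 here).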

To verify VMI of movement direction as a mental strategy for BCI users, we carried out feature extraction and pattern classification of the EEG collected during visually imagined forward and reverse driving. Table 2 presents the average classification accuracy under different combinations of electrodes. The average classification accuracies for P3 versus P4, O1 versus O2, F7 versus F8, C3 versus C4, F3 versus F4, and Fc3 versus Fc4 were 59.9666 ± 6.7385, 64.7673 ± 4.2748, 53.5205 ± 2.5087, 51.9838 ± 1.8770, 51.5868 ± 1.9974, and 57.6417 ± 5.7230%, respectively. We selected only a few electrodes for classification because our aim was to explore a VMI-BCI brain-controlled robotic system with few channels; thus, we carried out only two-class classification.
Table 2

Average classification accuracy under different electrode combinations

Subjects   P3 vs P4 (%)   O1 vs O2 (%)   F7 vs F8 (%)   C3 vs C4 (%)   F3 vs F4 (%)   Fc3 vs Fc4 (%)
sub1       62.1104        65.1284        55.0210        50.1001        57.0581        51.1177
sub2       53.1567        72.0869        57.0605        51.1284        55.0723        55.1079
sub3       54.1616        62.0781        55.1392        51.1729        50.0830        70.0967
sub4       70.0840        63.1265        51.1157        53.1538        50.0249        53.0591
sub5       69.0127        65.1157        53.0098        55.1880        51.0435        58.0181
sub6       56.0708        64.0430        51.1265        51.2153        50.0835        70.0620
sub7       58.1177        66.0664        52.1313        50.1846        50.1572        64.0684
sub8       63.0933        65.0854        53.1152        50.1709        51.1211        62.0034
sub9       53.1284        65.0898        52.0215        51.1553        50.1104        55.0767
sub10      52.1074        63.1245        63.0771        51.1055        52.0137        54.1226
sub11      53.1006        59.1787        55.1221        52.0249        56.0103        55.1484
sub12      65.0493        62.1885        52.1377        51.0049        52.0693        63.1265
sub13      54.0244        73.1714        53.0918        53.0391        51.0444        52.0244
sub14      54.1465        55.1118        54.1040        51.0049        50.0439        58.1138
sub15      68.1597        69.0474        51.1392        54.1064        50.0381        54.1489
sub16      69.1919        72.0327        51.0703        52.1079        50.0918        62.0884
sub17      58.0954        68.0562        53.0278        51.1113        51.1182        66.0752
sub18      52.0278        65.1812        52.0249        53.1587        53.1245        51.0562
sub19      69.0059        67.2466        52.1025        51.1665        54.1011        53.0142
sub20      58.0527        62.2261        53.1382        51.0933        51.1265        55.0322
sub21      57.0342        61.1816        52.1436        51.0044        50.0776        62.1289
sub22      62.0791        69.1982        54.0620        57.0522        50.0254        54.1582
sub23      52.0586        59.1909        55.0054        56.0874        51.0010        51.0806
sub24      72.0166        62.1611        54.0093        51.0527        52.0171        56.0625
sub25      64.0815        63.0669        53.0176        50.0073        51.0151        55.0532
Average classification rate   59.9666    64.7673    53.5205    51.9838    51.5868    57.6417
Variance                      6.7385     4.2748     2.5087     1.8770     1.9974     5.7230
Table 3 shows the descriptive statistics of the EEG at each lead in the four VMI tasks. Table 4 shows the results of the main effect analysis, using a two-way ANOVA, of the different VMI tasks in Table 3; it compares the different leads between groups for visually imagining the car moving forward, reversing, turning left, and turning right. Table 4 shows that the between-group differences for the Fp2, F7, P3, O1, Fcz, and Pz leads were significant (P < 0.05).
Table 3

The EEG descriptive statistical results in each lead during the four visual imagery tasks, with the standard deviation in parentheses and the mean value outside parentheses

Leads   Visual imagination driving forward   Visual imagination reversing   Visual imagination turning left   Visual imagination turning right
Fp2     0.150 (26.75)                        0.789 (14.09)                  0.529 (19.02)                     0.804 (18.94)
Pz      0.529 (16.08)                        0.729 (14.84)                  0.338 (14.95)                     0.850 (15.76)
P3      0.258 (17.80)                        0.657 (12.61)                  0.405 (15.37)                     0.713 (15.27)
F7      0.463 (16.63)                        0.738 (13.53)                  0.456 (14.83)                     0.844 (15.13)
Fcz     0.453 (15.18)                        0.704 (13.38)                  0.455 (13.94)                     0.744 (14.40)
O1      0.460 (17.81)                        0.739 (14.50)                  0.563 (15.82)                     0.847 (16.09)
Table 4

Results of the main effect analysis of four visual imagery tasks under different leads (FDR correction)

Leads   Source           Sum of squares   Mean square   F       P
Fp2     Between groups   1813.333         5.965         4.818   0.000
        Within groups    148186.667       1.238
        Total            150000.000
Pz      Between groups   422.081          2.833         2.270   0.000
        Within groups    149577.919       1.248
        Total            150000.000
P3      Between groups   800.732          3.906         3.136   0.006
        Within groups    149199.268       1.245
        Total            150000.000
F7      Between groups   662.188          3.638         2.919   0.009
        Within groups    149337.812       1.246
        Total            150000.000
Fcz     Between groups   453.109          3.001         2.405   0.043
        Within groups    149546.891       1.248
        Total            150000.000
O1      Between groups   671.761          3.499         2.807   0.045
        Within groups    149328.239       1.246
        Total            150000.000
Table 5 presents the average classification accuracy for the 25 subjects visually imagining the car's forward versus reverse, forward versus left, forward versus right, reverse versus left, reverse versus right, and left versus right movements, using the O1 and O2 electrode combination. The average classification accuracies are 64.76 ± 4.27, 56.49 ± 4.63, 55.78 ± 3.09, 61.11 ± 4.56, 64.30 ± 7.13, and 73.66 ± 6.80%, respectively.
Table 5

Average classification accuracy of different visual imagery task pairs for 25 subjects under electrode combinations O1 and O2

Subjects   Forward vs reversing (%)   Forward vs left turn (%)   Forward vs right turn (%)   Reversing vs left turn (%)   Reversing vs right turn (%)   Left turn vs right turn (%)
sub1       65.1284                    59.1079                    60.0303                     62.0361                      63.0684                       73.0737
sub2       72.0869                    57.1860                    55.0093                     58.0269                      58.1099                       78.1523
sub3       62.0781                    58.2417                    52.0151                     65.1533                      74.0347                       82.0259
sub4       63.1265                    68.2012                    54.0254                     56.0220                      56.0742                       63.0039
sub5       65.1157                    57.1509                    53.0610                     66.0933                      72.0630                       59.1274
sub6       64.0430                    68.0151                    55.0869                     63.0981                      65.1328                       79.0356
sub7       66.0664                    57.0659                    56.0313                     59.0737                      73.0708                       68.0308
sub8       65.0854                    55.0449                    58.0269                     63.0815                      66.0117                       63.0020
sub9       65.0898                    55.0352                    58.1108                     55.0010                      64.0283                       76.1270
sub10      63.1245                    56.0083                    61.0962                     57.0933                      71.0986                       75.1089
sub11      59.1787                    50.0376                    55.1216                     65.0200                      51.0210                       75.0151
sub12      62.1885                    58.1094                    60.0410                     56.0229                      50.0469                       67.0474
sub13      73.1714                    58.1250                    53.0293                     60.0015                      59.0454                       68.0166
sub14      55.1118                    57.0879                    54.0010                     68.1064                      73.0327                       82.0073
sub15      69.0474                    58.1299                    56.1240                     65.0444                      65.0132                       79.0547
sub16      72.0327                    59.2061                    53.0229                     54.0161                      66.0156                       73.0576
sub17      68.0562                    51.0029                    55.0381                     59.1025                      72.0962                       67.0610
sub18      65.1812                    50.0518                    56.1567                     73.1294                      59.0869                       68.1665
sub19      67.2466                    50.1187                    52.1240                     65.0840                      55.0747                       75.1328
sub20      62.2261                    59.0928                    59.0669                     62.0684                      67.1313                       85.0654
sub21      61.1816                    57.0010                    52.0444                     57.0947                      59.0229                       77.0098
sub22      69.1982                    55.1304                    61.0200                     59.0205                      73.0957                       76.0820
sub23      59.1909                    55.1714                    56.0601                     62.1230                      61.1050                       83.0894
sub24      62.1611                    52.0532                    59.1353                     59.1401                      71.1045                       71.1309
sub25      63.0669                    51.0503                    50.1255                     58.1221                      63.0928                       78.0713
Average classification rate   64.7673   56.4970   55.7841   61.1110   64.3070   73.6678
Variance                      4.2748    4.6322    3.0904    4.5623    7.1310    6.8068
Figure 5 presents the correlations among the 18 EEG channels during VMI, to examine the correlation between the channel signals of the relevant brain regions during visual imagery. The thickness and color of the connection between two nodes indicate the degree of their correlation. Figure 5 shows that O1 and F7 have the strongest connectivity while visually imagining the car reversing; during the left turn, the connection between O2 and F8 is the strongest; during the right turn, the connection between O2 and Fp2 is the strongest; and when the car moves forward, Cz and F7 have the strongest connectivity.
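Channel-by-channel connectivity of this kind can be computed as a Pearson correlation matrix over the channel time series. A minimal sketch with synthetic data (the paper does not specify its connectivity estimator beyond correlation, so this is an assumption):

```python
import numpy as np

def channel_correlation(eeg):
    """Pearson correlation matrix for a (channels, samples) EEG array."""
    return np.corrcoef(eeg)

# Synthetic example: channel 1 is a noisy copy of channel 0,
# channel 2 is independent noise.
rng = np.random.default_rng(42)
base = rng.standard_normal(3000)
eeg = np.vstack([base,
                 base + 0.1 * rng.standard_normal(3000),
                 rng.standard_normal(3000)])
corr = channel_correlation(eeg)
# corr[0, 1] is close to 1; corr[0, 2] is close to 0
```

For the 18-channel montage used here, the off-diagonal entries of the 18 × 18 matrix would be the edge weights drawn in Figure 5.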
Figure 5

The correlation of 18 channels of EEG during VMI: (a) the correlation of the lead when the car moves forward in the visual imagination; (b) the correlation of the lead when the car reverses; (c) the correlation of the lead when the car turns left; and (d) the correlation of the lead when the car turns right.


Discussion

Many earlier studies have made remarkable progress on MI-BCI [15,16,17]; however, research on VMI-BCI has been slow, and both the BCI paradigm design and feature extraction need further exploration. This article focuses on the VMI-BCI paradigm and feature extraction. At present, few studies have explored predefined VMI (guided by prompted visual observation). Research on this kind of VMI is challenging: it requires an appropriate experimental paradigm to evaluate the subjects' visual imagination ability and to identify different visual imagination tasks. In this study, a new VMI-BCI paradigm was designed in which subjects were asked to complete four VMI tasks (imagining a car moving forward, reversing, turning left, and turning right), after which HHT was used to extract features for classification. The experimental paradigm differs from that of Kosmyna et al. [15], which requires subjects to visually imagine static flowers versus hammers, whereas this study requires subjects to visually imagine a dynamic car. Sousa et al. [20] showed that dynamic visual imagination induces stronger brain activation than static visual imagination. The classification results for the different visual imagery tasks designed in this study (the average classification accuracy for imagining the car turning left versus right is 73.66 ± 6.80%) indicate that the experimental paradigm has good separability. Table 4 indicates that forward, backward, left, and right turns of the visually imagined car can be distinguished by selecting appropriate leads for feature extraction, showing that the visual imagination paradigm is feasible. Furthermore, unlike other experimental paradigms [15,18,19,20], this paradigm is based on visual imagination of the same object (a car) moving in different directions, a design that probes the difference visual imagination makes in controlling the direction variable alone.
However, this design poses a challenge to feature extraction and classification, and better methods are needed to obtain good classification results. On the other hand, the different imagined driving directions of the car correspond naturally to the forward, backward, left-turn, and right-turn commands of a controlled robot or car, which may facilitate practical applications of visual-motor-imagery-based brain-controlled robots or cars. The EEG pattern induced by a VMI task is also closely related to how well subjects perform the imaginative mental activity [4]. To this end, 25 healthy volunteers with strong VMI ability (questionnaire score ≥ 70) were recruited before the experiment according to the VMIQ [24,25,26,27,28,29,30,32] or the KVIQ [23,31,33,34,35]. After the experiment, each participant filled in a questionnaire on the visual-imagery car-movement-direction task. Most subjects considered the VMI tasks easier to accomplish than MI tasks, and although none of them had received specialized VMI training, they were able to complete the VMI tasks well. For feature extraction in VMI-BCI, existing research has mainly used EEG power spectrum estimation [15,18,19,20,41,42]; in this article, the HHT method, which provides high-resolution information in both the time and frequency domains, was used instead. An SVM was then used to classify forward versus reverse, forward versus left turn, forward versus right turn, reverse versus left turn, reverse versus right turn, and left turn versus right turn. The highest average classification accuracy was 73.66 ± 6.80%, and the overall average classification accuracy was 62.68 ± 5.08%. The average classification accuracy of this study is 6.68 percentage points higher than the 56% reported for VMI by Neuper et al. [18], and 10.68 percentage points higher than the 52% reported by Kosmyna et al. [15] for discriminating an imagined flower from an imagined hammer. This indicates that the VMI-EEG features extracted by HHT have a certain separability for VMI tasks. When extracting VMI EEG features using HHT, the selection and combination of electrodes affect the classification accuracy of the VMI tasks. Table 1 indicates that the EEG mean values at electrodes Fp2, F7, P3, O1, Fcz, and Pz differ across the designed VMI tasks. Figure 5 shows that the strongest connections during visual imagination of the car reversing, turning left, turning right, and moving forward are between the electrode pairs O1–F7, O2–F8, O2–Fp2, and Cz–F7, respectively. The classification results in Table 2 show that the O1 versus O2 electrode combination achieved the highest accuracy, and the two-way ANOVA in Tables 3 and 4 shows that the interaction between the O1 and O2 electrodes was significant during VMI (P = 0.045 < 0.05). A first reason for these results is that VMI is related to memory, and memory involves the prefrontal cortex and the lateral intraparietal area (LIP) of the parietal lobe [4,7]; the electrodes Fp2, F7, Fcz, P3, and Pz lie over these cortical areas. There are two visual information processing pathways: a dorsal pathway from the striate cortex to the parietal lobe, which mainly analyzes visual motion, and a ventral pathway projecting to the temporal lobe, which mainly recognizes objects [4]. The occipital region therefore participates in the neural processing of VMI, and the EEG features of the O1 and O2 electrodes can be analyzed to identify the VMI task. Previous work has analyzed the EEG features of electrodes Fp2, F7, Fcz, P3, and Pz during VMI and computed the correlation between channel EEG and VMI [4,5,6,7,8,9,10,15,19,20].
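The Hilbert step of the HHT feature extraction discussed above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the EMD decomposition of the full HHT is omitted, and the sampling rate, filter order, and alpha band limits are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_instantaneous_energy(eeg, fs=250.0, band=(8.0, 13.0)):
    """Mean instantaneous energy of the alpha band of one EEG channel.

    Sketch of the Hilbert step of an HHT-style pipeline: band-pass
    filter, take the analytic signal via the Hilbert transform, then
    average the squared instantaneous amplitude. (The EMD stage of the
    full HHT is omitted for brevity.)
    """
    nyq = fs / 2.0
    b, a = butter(4, [band[0] / nyq, band[1] / nyq], btype="band")
    filtered = filtfilt(b, a, eeg)        # zero-phase alpha-band filter
    envelope = np.abs(hilbert(filtered))  # instantaneous amplitude
    return float(np.mean(envelope ** 2))  # average instantaneous energy

# Example: a 10 Hz (alpha) oscillation carries far more alpha-band
# energy than broadband noise of the same scale.
fs = 250.0
t = np.arange(0, 4.0, 1.0 / fs)
alpha_trial = np.sin(2 * np.pi * 10.0 * t)
noise_trial = np.random.default_rng(0).normal(0.0, 1.0, t.size)
print(alpha_instantaneous_energy(alpha_trial, fs) >
      alpha_instantaneous_energy(noise_trial, fs))  # True
```

In the study's setting, one such energy value per selected electrode (e.g. O1 and O2) would contribute to the feature vector fed to the SVM.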
However, the EEG features of the O1 and O2 electrodes during VMI had not previously been analyzed and extracted. In this study, the EEG features of the O1 versus O2 electrodes during VMI were analyzed and extracted using HHT, and the classification results in Table 5 show that the designed VMI tasks can be distinguished. Compared with the more extensively studied traditional MI-BCI, recognizing VMI tasks from EEG is more challenging; research is still insufficient, and the classification accuracy needs further improvement. To this end, feature selection and extraction and electrode-combination optimization should be studied. In future work, we will attempt to use the correlations between channel EEGs during VMI as features to optimize the electrode combination and improve the classification accuracy of the VMI tasks, and we will construct an online real-time VMI-BCI brain-controlled robot system based on a small number of EEG channels.
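The future-work idea mentioned above — using inter-channel correlations as features for an SVM — can be sketched as follows. This is a hypothetical illustration on synthetic two-channel data, not the authors' pipeline; the coupling model, trial counts, and use of scikit-learn are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def correlation_features(trial):
    """Upper triangle of the channel-by-channel Pearson correlation
    matrix of one trial (channels x samples), flattened to a vector."""
    r = np.corrcoef(trial)                 # channels x channels
    iu = np.triu_indices_from(r, k=1)
    return r[iu]

# Synthetic two-class experiment: in class 1 the two channels are
# strongly coupled; in class 0 they are independent.
rng = np.random.default_rng(1)

def make_trial(coupled):
    ch1 = rng.normal(size=500)
    ch2 = ch1 + 0.1 * rng.normal(size=500) if coupled else rng.normal(size=500)
    return np.vstack([ch1, ch2])

labels = [0] * 20 + [1] * 20
X = np.array([correlation_features(make_trial(c)) for c in labels])
y = np.array(labels)

# SVM, as in the paper, suits this small-sample setting.
clf = SVC(kernel="rbf").fit(X, y)
print(clf.score(X, y))  # training accuracy; near 1.0 on this synthetic data
```

With 18 channels, as in the study, each trial would yield 18 × 17 / 2 = 153 correlation features, from which a reduced electrode combination could be selected.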

Conclusion

In contrast to the traditional MI-BCI paradigm, this study designed a new VMI-BCI paradigm: visual imagination of a car moving forward, reversing, turning left, and turning right. These four mental strategies can control a car or robot to move forward, backward, left, and right. EEG features were extracted using HHT, which offers high temporal and frequency resolution, and the designed VMI tasks were identified using an SVM, which is suitable for small-sample classification. The results show that, in the designed experimental paradigm, the EEG features extracted at the O1 and O2 electrode positions in the occipital region can distinguish the visual imagination tasks (with an average classification accuracy of up to 73.66 ± 6.80%). During visual imagination of the car reversing, turning left, turning right, and driving forward, the strongest connections were between the electrode pairs O1–F7, O2–F8, O2–Fp2, and Cz–F7, respectively. EEG-based visual imagination of driving a car is thus a promising BCI strategy, and VMI-BCI can serve as a window for observing brain function. This work can provide ideas for online real-time VMI-BCI brain-controlled robot systems based on few-channel EEG.
References (20 in total)

1.  Brain-computer interfaces for communication and control.

Authors:  Jonathan R Wolpaw; Niels Birbaumer; Dennis J McFarland; Gert Pfurtscheller; Theresa M Vaughan
Journal:  Clin Neurophysiol       Date:  2002-06       Impact factor: 3.708

2.  Imagery of motor actions: differential effects of kinesthetic and visual-motor mode of imagery in single-trial EEG.

Authors:  Christa Neuper; Reinhold Scherer; Miriam Reiner; Gert Pfurtscheller
Journal:  Brain Res Cogn Brain Res       Date:  2005-10-19

3.  Movement imagery ability: development and assessment of a revised version of the vividness of movement imagery questionnaire.

Authors:  Ross Roberts; Nichola Callow; Lew Hardy; David Markland; Joy Bringer
Journal:  J Sport Exerc Psychol       Date:  2008-04       Impact factor: 3.016

4.  Psychometric quality of a revised version Vividness of Visual Imagery Questionnaire.

Authors:  Alfredo Campos; María José Pérez-Fabello
Journal:  Percept Mot Skills       Date:  2009-06

5.  An inquiry into the construct validity of the Vividness of Visual Imagery Questionnaire.

Authors:  P J Chara; D A Hamm
Journal:  Percept Mot Skills       Date:  1989-08

6.  Pure visual imagery as a potential approach to achieve three classes of control for implementation of BCI in non-motor disorders.

Authors:  Teresa Sousa; Carlos Amaral; João Andrade; Gabriel Pires; Urbano J Nunes; Miguel Castelo-Branco
Journal:  J Neural Eng       Date:  2017-08       Impact factor: 5.379

7.  Critiquing the Concept of BCI Illiteracy.

Authors:  Margaret C Thompson
Journal:  Sci Eng Ethics       Date:  2018-08-16       Impact factor: 3.525

8.  The Vividness of Visual Imagery Questionnaire: commentary on the Marks-Chara debate.

Authors:  S J McKelvie
Journal:  Percept Mot Skills       Date:  1990-04

9.  Controllability of motor imagery and transformation of visual imagery.

Authors:  E Naito
Journal:  Percept Mot Skills       Date:  1994-04

10.  The Kinesthetic and Visual Imagery Questionnaire is a reliable tool for individuals with Parkinson disease.

Authors:  Bubblepreet Randhawa; Susan Harris; Lara A Boyd
Journal:  J Neurol Phys Ther       Date:  2010-09       Impact factor: 3.649

