Daniele De Massari, Daniel Pacheco, Rahim Malekshahi, Alberto Betella, Paul F M J Verschure, Niels Birbaumer, Andrea Caria.
Abstract
The combination of Brain-Computer Interface (BCI) technology, allowing online monitoring and decoding of brain activity, with virtual and mixed reality (MR) systems may help to shape and guide implicit and explicit learning using ecological scenarios. Real-time information about ongoing brain states, acquired through a BCI, might be exploited to control data presentation in virtual environments. Discriminating brain states during a mixed reality experience is thus critical for adapting specific data features to contingent brain activity. In this study we recorded electroencephalographic (EEG) data while participants experienced MR scenarios implemented through the eXperience Induction Machine (XIM). The XIM is a novel framework modeling the integration of a sensing system that evaluates and measures physiological and psychological states with a number of actuators and effectors that coherently react to the user's actions. We then assessed continuous EEG-based discrimination of spatial navigation, reading and calculation performed in MR, using linear discriminant analysis (LDA) and support vector machine (SVM) classifiers. Dynamic single-trial classification showed high accuracy of LDA and SVM classifiers in detecting multiple brain states as well as in differentiating between high and low mental workload, using a 5 s time window shifting every 200 ms. Our results indicate overall better performance of LDA with respect to SVM and suggest applicability of our approach in a BCI-controlled MR scenario. Ultimately, successful prediction of brain states might be used to drive adaptation of data representation in order to boost information processing in MR.
Keywords: EEG; XIM; mental states decoding; mixed reality
Year: 2014 PMID: 25505878 PMCID: PMC4245910 DOI: 10.3389/fnbeh.2014.00415
Source DB: PubMed Journal: Front Behav Neurosci ISSN: 1662-5153 Impact factor: 3.558
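The dynamic single-trial scheme described in the abstract (features extracted from a 5 s window shifted every 200 ms, then fed to LDA/SVM) can be sketched as follows. The sampling rate, channel count, and plain-FFT band-power estimate are illustrative assumptions, not the paper's exact pipeline:

```python
# Sketch of a sliding-window band-power feature extractor, assuming a
# 250 Hz sampling rate and per-channel power in two example bands.
import numpy as np
from numpy.fft import rfft, rfftfreq

FS = 250                # assumed sampling rate (Hz)
WIN = 5 * FS            # 5 s window, as in the paper
STEP = int(0.2 * FS)    # 200 ms shift, as in the paper

def band_power(window, lo, hi, fs=FS):
    """Mean spectral power of each channel in [lo, hi] Hz."""
    freqs = rfftfreq(window.shape[-1], 1 / fs)
    spec = np.abs(rfft(window, axis=-1)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return spec[..., mask].mean(axis=-1)

def sliding_features(eeg, bands=((3, 7), (8, 12))):
    """eeg: (n_channels, n_samples) -> (n_windows, n_channels * n_bands)."""
    feats = []
    for start in range(0, eeg.shape[1] - WIN + 1, STEP):
        win = eeg[:, start:start + WIN]
        feats.append(np.concatenate([band_power(win, lo, hi)
                                     for lo, hi in bands]))
    return np.asarray(feats)
```

Each row of the resulting feature matrix corresponds to one window position and can be labeled by the classifier trained on the calibration blocks.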
Figure 1. The eXperience Induction Machine (XIM) architecture is a mixed reality integrated framework that combines a sensing system, to evaluate and measure complex physiological and psychological states, with a number of actuators and effectors that coherently react to the user's actions. It mainly consists of an immersive room covering a floor area of 5.5 × 5.5 m, with a height of 4 m. Eight video projectors display the scenarios onto four projection screens (2.25 × 5 m) surrounding the MR room. Reprinted with permission from Betella et al. (2014).
Figure 2. Top: The immersive XIM modeling the virtual maze used in the experiment (left). Center: View from the top of the labyrinth and the nine different targets (red spheres) that were placed in alternating corners of the path. The labyrinth size was 10 × 10 VR units (meters). Participants were required to navigate the squared spiral labyrinth to the central point (yellow sphere). Proximity to a red sphere triggered the beginning of a different condition. In the first session the condition consisted of a 30 s calculation task: when the participant reached the red sphere, the screen went black and a random 3-digit number was displayed in the graphical interface. The participant was asked to iteratively subtract 17 from the displayed number. After 30 s, the black screen faded out and the participant was asked to continue spatial navigation. In the second session, the condition consisted of a 30 s reading task: the introduction of a scientific article was displayed and the participant was required to read it and press the space bar when finished. Bottom: First-person perspective of the labyrinth and a red sphere.
Figure 3. Flow diagram depicting feature selection and estimation of classification performance for scheme 1. A similar flow diagram was used for scheme 2, but with a different number of blocks and classes (HW and LW conditions).
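The performance-estimation flow in Figure 3 amounts to block-wise cross-validation: each candidate parameter set is scored by holding out one block at a time, training on the rest, and averaging the held-out accuracies. A minimal sketch, with a toy nearest-class-mean classifier standing in for the paper's LDA (the data layout and block structure are assumptions):

```python
# Leave-one-block-out cross-validation over labeled feature windows.
import numpy as np

def nearest_mean_predict(Xtr, ytr, Xte):
    """Toy stand-in for LDA: assign each test sample to the nearest
    class mean in feature space."""
    classes = np.unique(ytr)
    means = np.stack([Xtr[ytr == c].mean(axis=0) for c in classes])
    d = ((Xte[:, None, :] - means[None]) ** 2).sum(axis=-1)
    return classes[d.argmin(axis=1)]

def leave_one_block_out(X, y, blocks):
    """Average accuracy over blocks, each held out once for testing."""
    accs = []
    for b in np.unique(blocks):
        te = blocks == b
        pred = nearest_mean_predict(X[~te], y[~te], X[te])
        accs.append((pred == y[te]).mean())
    return float(np.mean(accs))
```

Holding out whole blocks rather than random samples avoids temporal leakage between overlapping sliding windows of the same block.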
Selected parameters in scheme 1 (top table) and scheme 2 (bottom table) for each subject. Each row lists the parameter set selected for each of the two classifiers (LDA and SVM): channels, common average reference (CAR, Y/N), time-window length (s), and frequency band (Hz).

| Subject | Channels | CAR | Window (s) | Band (Hz) | Channels | CAR | Window (s) | Band (Hz) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Sub 1 | {F#, FC#, C#, P#, CP#, O#} | Y | 5 | 10–15 | {F#, FC#, C#, P#, CP#, O#} | N | 5 | 3–10 |
| Sub 2 | {F#, FC#, C#, P#, CP#, O#} | Y | 5 | 7–15 | {F#, FC#, C#, P#, CP#, O#} | Y | 5 | 3–10 |
| Sub 3 | {F#, FC#, C#, P#, CP#, O#} | Y | 5 | 10–15 | {F#, FC#, C#, P#, CP#, O#, T7, T8} | Y | 5 | 3–10 |
| Sub 4 | {F#, FC#, C#, P#, CP#, O#} | Y | 5 | 10–15 | {F#, FC#, C#, P#, CP#, O#, T7, T8} | Y | 5 | 3–15 |
| Sub 5 | {F#, FC#, C#, P#, CP#, O#} | N | 5 | 10–15 | {F#, FC#, C#, P#, CP#, O#, T7, T8} | Y | 5 | 3–10 |

| Subject | Channels | CAR | Window (s) | Band (Hz) | Channels | CAR | Window (s) | Band (Hz) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| Sub 1 | {F#, FC#, C#, P#, CP#, O#, T7, T8} | N | 5 | 7–15 | {F#, FC#, C#, P#, CP#, O#, T7, T8} | Y | 5 | 3–10 |
| Sub 2 | {F#, FC#, C#, P#, CP#, O#, T7, T8} | Y | 5 | 3–10 | {F#, FC#, C#, P#, CP#, O#, T7, T8} | Y | 5 | 3–15 |
| Sub 3 | {F#, FC#, C#, P#, CP#, O#, T7, T8} | Y | 5 | 3–10 | {F#, FC#, C#, P#, CP#, O#} | N | 5 | 3–15 |
| Sub 4 | {F#, FC#, C#, P#, CP#, O#} | Y | 5 | 3–15 | {F#, FC#, C#, P#, CP#, O#} | Y | 5 | 10–15 |
| Sub 5 | {F#, FC#, C#, P#, CP#, O#} | Y | 5 | 3–10 | {F#, FC#, C#, P#, CP#, O#} | N | 5 | 3–10 |
Results of the classification of spatial navigation, reading, and calculation.
| Subject | Accuracy (%) | MCC | Accuracy (%) | MCC |
| --- | --- | --- | --- | --- |
| Sub 1 | 75.63 | 0.60 | 81.59 | 0.69 |
| Sub 2 | 54.76 | 0.36 | 89.72 | 0.84 |
| Sub 3 | 73.36 | 0.49 | 79.07 | 0.61 |
| Sub 4 | 60.35 | 0.36 | 76.56 | 0.62 |
| Sub 5 | 64.29 | 0.45 | 89.56 | 0.82 |
| Average | 65.68 | 0.45 | 83.30 | 0.72 |
Accuracy and MCC are reported for each participant and each classifier (LDA and SVM); the last row gives the across-subject average.
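The tables report the Matthews correlation coefficient (MCC) alongside accuracy. For a binary confusion matrix, MCC = (TP·TN − FP·FN) / √((TP+FP)(TP+FN)(TN+FP)(TN+FN)); the three-class problem of scheme 1 would use the multiclass generalization, but the binary form shown here conveys the idea:

```python
# Matthews correlation coefficient for a binary confusion matrix.
import math

def mcc(tp, tn, fp, fn):
    """Return MCC in [-1, 1]; 0 when the denominator vanishes
    (e.g., the classifier predicts only one class)."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```

For example, a classifier correct on 9 of 10 samples in each class gives mcc(9, 9, 1, 1) = 0.8, whereas chance-level prediction gives values near 0; unlike accuracy, MCC stays near 0 under class imbalance.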
Results of the classification between low (LW) and high (HW) mental workload.
| Subject | Accuracy (%) | MCC | Accuracy (%) | MCC |
| --- | --- | --- | --- | --- |
| Sub 1 | 87.59 | 0.61 | 96.92 | 0.92 |
| Sub 2 | 85.43 | 0.63 | 93.72 | 0.86 |
| Sub 3 | 90.89 | 0.75 | 87.12 | 0.69 |
| Sub 4 | 80.56 | 0.59 | 73.72 | 0.54 |
| Sub 5 | 88.46 | 0.59 | 91.30 | 0.68 |
| Average | 86.59 | 0.63 | 88.56 | 0.74 |
Accuracy and MCC are reported for each participant and each classifier (LDA and SVM); the last row gives the across-subject average.
Figure 4. (A) Power spectra (grand average across all subjects) of two discriminative channels of all conditions in frontal and parietal areas (Fz and Pz). Solid, dashed, and dotted lines represent the grand average power spectrum for the SPN, MER, and MEC tasks, respectively. Fz shows a clear power difference among all conditions in the 3–7 Hz band, whereas Pz shows a clear power difference in the 8–12 Hz band. (B) Power spectra (grand average across all subjects) of channels showing positive peaks during high (HW) as compared to low (LW) mental workload (left frontal area, F7, and central parietal region, Cp2).
Figure 5. Topographic distribution of the spectral difference in the 3–7 Hz band (left) and in the 8–12 Hz band (right) between HW and LW (grand average across all subjects). Larger dots indicate channels where a significant difference was measured (p < 0.001). The distribution of the 8–12 Hz band power difference shows significant positive peaks in the left frontal area (F7) and bilateral parietal regions (Cp1 and Cp2). The distribution of the 3–7 Hz band power shows a significant positive peak in the frontal area (F7), and several negative peaks in central and parietal areas.
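Topographies like Figure 5 are built from per-channel band power in the 3–7 Hz and 8–12 Hz ranges, contrasted between conditions. A minimal sketch of the per-channel HW-minus-LW band-power difference (the trial layout, sampling rate, and plain-FFT estimator are assumptions, not the paper's exact analysis):

```python
# Per-channel difference in mean band power between two conditions.
import numpy as np
from numpy.fft import rfft, rfftfreq

def band_power_diff(eeg_hw, eeg_lw, lo, hi, fs=250):
    """Mean power per channel in [lo, hi] Hz, HW minus LW.
    eeg_*: (n_trials, n_channels, n_samples)."""
    def band_power(x):
        freqs = rfftfreq(x.shape[-1], 1 / fs)
        spec = np.abs(rfft(x, axis=-1)) ** 2
        mask = (freqs >= lo) & (freqs <= hi)
        # Average over frequency bins, then over trials.
        return spec[..., mask].mean(axis=-1).mean(axis=0)
    return band_power(eeg_hw) - band_power(eeg_lw)
```

The resulting vector (one value per channel) is what gets mapped onto the scalp layout; channel-wise statistical tests then mark the significant sites.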