| Literature DB >> 29230046 |
Vasileios Baltatzis1, Kyriaki-Margarita Bintsi1, Georgios K Apostolidis1, Leontios J Hadjileontiadis2,3.
Abstract
Bullying is an everlasting phenomenon, and the first, yet difficult, step towards its solution is its detection. Conventional approaches to identifying bullying incidents include questionnaires, conversations and psychological tests. Here, unlike the conventional approaches, two experiments are proposed that involve visual stimuli with bullying- and non-bullying-related cases, set within a 2D (simple video preview) and a Virtual Reality (VR) (immersive video preview) context. In both experimental settings, brain activity is recorded using high-density (HD) (256-channel) electroencephalography (EEG) and analyzed to identify the stimulus type (bullying/non-bullying) and context (2D/VR). The proposed classification analysis uses a convolutional neural network (CNN), applying deep learning to the oscillatory modes (OCMs) embedded within the raw HD EEG data. The OCMs are extracted from the HD EEG data with swarm decomposition (SWD), which efficiently accounts for the non-stationarity and noise contamination of the raw recordings. Experimental results from 17 subjects indicate that the new SWD/CNN approach achieves high discrimination accuracy (AUC = 0.987 between bullying/non-bullying stimulus types; AUC = 0.975 between bullying/non-bullying stimulus types combined with 2D/VR context), paving the way for a better understanding of how the brain's responses could act as indicators of bullying experience within immersive environments.
Year: 2017 PMID: 29230046 PMCID: PMC5725430 DOI: 10.1038/s41598-017-17562-0
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.379
Figure 1. A block diagram of the proposed analysis, which begins with the different kinds of stimulation (2D/VR) and the recording of the subject's brain activity with HD EEG. The HD EEG data are then processed with SWD, and the 256 HD EEG channels are spatially clustered into groups, creating an image-like format. The latter is fed to a CNN, which performs a classification task to identify bullying (Bul) and non-bullying (NoBul) incidents and to distinguish between the 2D and VR stimulation methods.
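SWD itself requires a dedicated implementation, so the sketch below only illustrates the data-shaping step of the pipeline: stacking the per-channel OCMs into the image-like tensor that the CNN consumes. The function name `stack_ocms` and the dummy random data are assumptions for illustration, not the authors' code.

```python
import numpy as np

def stack_ocms(ocms_per_channel):
    """Stack per-channel oscillatory modes into an image-like tensor.

    ocms_per_channel: list of 256 arrays, each of shape (n_ocms, tau),
    e.g. the 3 OCMs that SWD extracts from one EEG channel.
    Returns an array of shape (channels, tau, n_ocms), analogous to a
    height x width x colour-depth image fed to the CNN.
    """
    return np.stack(ocms_per_channel, axis=0).transpose(0, 2, 1)

# Dummy data standing in for the SWD output of one trial:
# 256 channels, 3 OCMs each, tau = 128 time samples.
rng = np.random.default_rng(0)
dummy = [rng.standard_normal((3, 128)) for _ in range(256)]
image_like = stack_ocms(dummy)
print(image_like.shape)  # (256, 128, 3)
```

The third axis plays the role of the colour channels of an ordinary image, which is why the convolutional filters described later leave it unaltered.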
Confusion matrix for the two-class problem.
| Actual \ Predicted | Bul | NoBul | Total |
|---|---|---|---|
| Bul | 47.2 | 5.9 | 53.1 |
| NoBul | 0.4 | 46.5 | 46.9 |
| Total | 47.6 | 52.4 | 100 |
The classes are Bul and NoBul. The values are percentages (%) of the test set; the whole test set comprised 254 instances, 121 belonging to the Bul class and 133 to the NoBul class. The corresponding classification metrics for the test set are: accuracy = 0.937, precision = 0.9403, recall = 0.9395, AUC = 0.9869.
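The reported accuracy can be sanity-checked directly from the confusion-matrix percentages: it is the sum of the diagonal entries. The per-class precision and recall below follow the usual column-wise/row-wise convention (rows as actual class); how the paper averages the two classes into its single precision and recall figures is not stated, so only the accuracy is checked against the reported value.

```python
# Two-class confusion matrix, as percentages of the test set
# (rows = actual class, columns = predicted class).
cm = {("Bul", "Bul"): 47.2, ("Bul", "NoBul"): 5.9,
      ("NoBul", "Bul"): 0.4, ("NoBul", "NoBul"): 46.5}

# Accuracy = diagonal mass / total mass.
accuracy = (cm[("Bul", "Bul")] + cm[("NoBul", "NoBul")]) / 100
print(round(accuracy, 3))  # 0.937, matching the reported value

# Per-class precision (column-wise) and recall (row-wise) for Bul:
prec_bul = cm[("Bul", "Bul")] / (cm[("Bul", "Bul")] + cm[("NoBul", "Bul")])
rec_bul = cm[("Bul", "Bul")] / (cm[("Bul", "Bul")] + cm[("Bul", "NoBul")])
print(round(prec_bul, 3), round(rec_bul, 3))  # 0.992 0.889
```

Note that the class-averaged figures depend on the chosen averaging convention (macro vs. weighted), which is why only the accuracy is asserted here.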
Confusion matrix for the four-class problem.
| Actual \ Predicted | Bul2D | NoBul2D | BulVR | NoBulVR | Total |
|---|---|---|---|---|---|
| Bul2D | 16.9 | 4.7 | 0 | 0 | 21.6 |
| NoBul2D | 3.5 | 17.3 | 0 | 0 | 20.8 |
| BulVR | 0 | 0 | 30.7 | 2 | 32.7 |
| NoBulVR | 0 | 0 | 1.2 | 23.6 | 24.8 |
| Total | 20.4 | 22 | 31.9 | 25.6 | 100 |
The classes are Bul2D, NoBul2D, BulVR and NoBulVR. The values are percentages (%) of the test set; the whole test set comprised 254 instances, 52 belonging to the Bul2D class, 56 to the NoBul2D class, 81 to the BulVR class and 65 to the NoBulVR class. The corresponding classification metrics for the test set are: accuracy = 0.8858, precision = 0.8775, recall = 0.87475, AUC = 0.975.
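As with the two-class table, the four-class accuracy is the diagonal mass of the confusion matrix. Summing the table's entries gives 0.885 rather than the reported 0.8858; the small gap is consistent with the one-decimal rounding of the percentages in the table.

```python
# Diagonal of the four-class confusion matrix, as percentages of the
# test set, in class order Bul2D, NoBul2D, BulVR, NoBulVR.
diagonal = [16.9, 17.3, 30.7, 23.6]
accuracy = sum(diagonal) / 100
print(round(accuracy, 3))  # 0.885, vs. the reported 0.8858

# The 2D and VR blocks never confuse each other: all off-diagonal mass
# stays within the 2D pair (4.7 + 3.5) or the VR pair (2 + 1.2).
cross_context_errors = 0.0
print(cross_context_errors)
```

The block-diagonal structure (no 2D/VR confusions) is what drives the high context-discrimination AUC reported in the abstract.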
Figure 2. The first waveform is the initial signal; the next three are the OCMs into which the initial signal was decomposed, and the fifth waveform is the residual.
Figure 3. The CNN structure that was used, with τ = 128. The input data have 256 channels (rows) and 128 time samples (columns). Thirty 8 × 8 convolutional filters are applied, yielding dimensions of 249 × 121. A max-pooling layer with a 3 × 3 window along the time and space axes follows, resulting in new data dimensions of 124 × 60. A fully-connected layer of 2 or 4 nodes (depending on the examined problem) comes next, and finally logistic regression is performed for the classification. Note that all filters and functions are applied to all 3 OCMs derived from the SWD of each signal; thus, the third dimension of the data remains unaltered throughout the process.
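The dimensions in the caption can be reproduced with the standard valid-convolution size formula. The caption does not state the max-pooling stride; a stride of 2 is an assumption here, chosen because it is the only stride that turns 249 × 121 into exactly the stated 124 × 60 with a 3 × 3 window.

```python
def conv_out(size, kernel, stride=1):
    """Output length along one axis of a valid (no-padding) convolution
    or pooling: floor((size - kernel) / stride) + 1."""
    return (size - kernel) // stride + 1

channels, tau = 256, 128            # input: 256 EEG channels x 128 samples
conv_h = conv_out(channels, 8)      # 8 x 8 filters, stride 1
conv_w = conv_out(tau, 8)
print(conv_h, conv_w)               # 249 121, as in the caption

# 3 x 3 max-pool; the stride of 2 is our assumption (the caption only
# gives the 3 x 3 window), and it reproduces the stated 124 x 60:
pool_h = conv_out(conv_h, 3, stride=2)
pool_w = conv_out(conv_w, 3, stride=2)
print(pool_h, pool_w)               # 124 60
```

The 30 filters and the 3 OCM "colour" channels do not enter these calculations, since they only set the depth of the feature maps, which the caption notes is carried through unchanged.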