Mariana R F Mota, Pedro H L Silva, Eduardo J S Luz, Gladston J P Moreira, Thiago Schons, Lauro A G Moraes, David Menotti.
Abstract
Due to the application of vital signs in expert systems, new approaches have emerged and vital signals have been gaining ground in biometrics. One of these signals is the electroencephalogram (EEG). The motor task a subject is performing, or even imagining, influences the pattern of brain waves and disturbs the acquired signal. In this work, biometrics with the EEG signal are explored from a cross-task perspective. Based on deep convolutional neural networks (CNN) and Squeeze-and-Excitation blocks, a novel method is developed to produce a deep EEG signal descriptor and to assess the impact of the motor task in the EEG signal on biometric verification. The Physionet EEG Motor Movement/Imagery Dataset, which has 64 EEG channels from 109 subjects performing different tasks, is used here for method evaluation. Since the volume of data provided by the dataset is not large enough to effectively train a deep CNN model, a data augmentation technique is also proposed to achieve better performance. An evaluation protocol is proposed to assess robustness regarding the number of EEG channels and to enforce train and test sets without individual overlap. A new state-of-the-art result is achieved for the cross-task scenario (EER of 0.1%), and the Squeeze-and-Excitation based networks outperform the plain CNN architecture in three out of four cross-individual scenarios.
Keywords: Biometric; CNN; Data augmentation; Electroencephalogram; Multi-task
Year: 2021 PMID: 34084940 PMCID: PMC8157223 DOI: 10.7717/peerj-cs.549
Source DB: PubMed Journal: PeerJ Comput Sci ISSN: 2376-5992
Figure 1. Example of an EEG headband with few electrodes and no need for conductive gel.
Related works for EEG-based biometric.
| Work | Database | Classes | Acquisition | Approach | Channels | Result |
|---|---|---|---|---|---|---|
| — | Physionet | 109 | Motor/Imaginary tasks | Magnitude Squared Coherence | 64 | Acc = 100% |
| — | Physionet | 109 | Rest | Eigenvector | 64 | EER = 4.4% |
| — | Physionet | 10 | Rest | CNN | 64 | Acc = 88% |
| — | Own | 40 | Visual stimuli | CNN | 17 | CRR = 98.8% |
| — | Own | 40 | Imaginary arms/legs movement | CNN | 17 | CRR = 93% |
| — | BCIT | 100 | Driving car | CNN | 64 | CRR = 97% |
| — | Own | 45 | Sit | Hidden Markov models (HMMs) | 9 | EER < 2% |
| — | Physionet | 109 | Motor/Imaginary tasks | GCNN + PLV | 64 | CRR = 99.98% FAR = 1.65% |
| — | Physionet | 108 | Motor/Imaginary tasks | Discrete Wavelet Transform + LDA | 9 | Acc = 100% EER = 2.63% |
| — | Physionet | 109 | Motor/Imaginary tasks | FPA + | 35 | Acc = 96.05% |
| — | Physionet | 109 | Motor/Imaginary tasks | 1D-Conv. LSTM | 16 | EER = 0.41% |
| — | SSVEP database | 14 | Visual stimulus | CNN | 8 | Verification Acc = 98.34% |
Figure 2. Proposed pipeline.
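The pipeline produces a deep EEG descriptor per signal window and verifies identity by comparing descriptors. A minimal sketch of descriptor-based verification, assuming cosine similarity and an arbitrary 0.9 threshold (the paper does not specify this operating point, and the 4-D vectors below are hypothetical placeholders for real deep descriptors):

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine similarity between two descriptor vectors.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def verify(probe, enrolled, threshold=0.9):
    # Accept the claimed identity when the probe descriptor is close
    # enough to the enrolled one; 0.9 is an illustrative threshold,
    # not the paper's operating point.
    return cosine_similarity(probe, enrolled) >= threshold

# Hypothetical 4-D descriptors (a real deep descriptor would be longer).
enrolled = np.array([0.1, 0.9, 0.3, 0.2])
genuine = enrolled + 0.01                    # near-identical probe
impostor = np.array([0.8, 0.1, 0.1, 0.9])    # different subject's probe
print(verify(genuine, enrolled), verify(impostor, enrolled))
```

In practice the verification threshold is chosen on a development set to trade off false acceptances against false rejections.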
Figure 3. Example of the proposed data augmentation based on overlapping windows with a two-second step. Each slide of the window produces a new training instance.
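The overlapping-window augmentation can be sketched as follows; the 160 Hz sampling rate, 12 s recording length, and 4 s window are illustrative assumptions, while the two-second step matches the caption above:

```python
import numpy as np

def sliding_windows(signal, window_len, step):
    """Split a 1-D EEG channel into overlapping windows.

    Each slide of the window yields a new training instance,
    as in the augmentation illustrated in Figure 3.
    """
    starts = range(0, len(signal) - window_len + 1, step)
    return np.stack([signal[s:s + window_len] for s in starts])

# Hypothetical numbers: 160 Hz sampling, 12 s of signal,
# 4-s windows slid with a 2-s step.
fs = 160
sig = np.random.randn(12 * fs)
wins = sliding_windows(sig, window_len=4 * fs, step=2 * fs)
print(wins.shape)  # five overlapping training instances of 640 samples
```

A smaller step yields more (and more correlated) training instances, which is the trade-off explored in the stride experiments later in the paper.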
Figure 4. Proposed data split.
Figure 5. Architectures evaluated in this work.
(A) Proposed architecture in which conv1, conv2 and conv3 have filter sizes of 11 × 1, 9 × 1 and 9 × 1, respectively, and stride equal to 1. Pool1 has stride equal to 4; pool2 and pool3 have stride equal to 2; the filter size is 2 for all three pooling layers. Padding is zero for all convolutional and pooling layers. (B) Proposed architecture in which conv1, conv2, conv3, conv4 and conv5 have filter sizes of 51 × 1, 17 × 1, 7 × 1, 7 × 1 and 7 × 1, respectively, and stride equal to 1 for all five. Pool1 through pool5 are max pooling with filter size and stride equal to 2 for all pooling layers. Padding is zero for all convolutional and pooling layers.
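Given the filter sizes and strides of panel (A), the temporal length after each layer follows the standard convolution output formula; a small sketch of that arithmetic, assuming a hypothetical 640-sample input window (the paper's exact input length is not stated here):

```python
def conv_out(n, k, s=1, p=0):
    # Standard output-length formula: floor((n + 2p - k) / s) + 1.
    return (n + 2 * p - k) // s + 1

# Architecture A layer-by-layer lengths for a 640-sample input.
n = 640
n = conv_out(n, 11)       # conv1: 11x1, stride 1, no padding
n = conv_out(n, 2, s=4)   # pool1: size 2, stride 4
n = conv_out(n, 9)        # conv2: 9x1, stride 1
n = conv_out(n, 2, s=2)   # pool2: size 2, stride 2
n = conv_out(n, 9)        # conv3: 9x1, stride 1
n = conv_out(n, 2, s=2)   # pool3: size 2, stride 2
print(n)                  # temporal length entering the dense layers
```

The same helper applied to panel (B)'s sizes (51, 17, 7, 7, 7 with 2/2 pooling) traces that deeper stack in the same way.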
EER and decidability reported for the two proposed architectures. EER presented in percentage.
| Frequency band (Hz) | EER (%) | Decidability | Architecture |
|---|---|---|---|
| 10–30 | 5.06 | 3.22 | A |
| 30–50 | 0.19 | 7.02 | A |
| 01–50 | 9.73 | 2.50 | A |
| 10–30 | 6.85 | 2.84 | B |
| 30–50 | 0.65 | 3.61 | B |
| 01–50 | 9.64 | 2.20 | B |
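The EER and decidability reported above are standard biometric metrics: decidability d' measures the separation between genuine and impostor score distributions, and the EER is the point where false acceptance and false rejection rates meet. A sketch with synthetic scores (the Gaussian score distributions are an assumption for illustration, not the paper's data):

```python
import numpy as np

def decidability(gen, imp):
    # d' = |mu_g - mu_i| / sqrt((var_g + var_i) / 2)
    return abs(gen.mean() - imp.mean()) / np.sqrt((gen.var() + imp.var()) / 2)

def eer(gen, imp):
    # Sweep every observed score as a threshold; the EER is where the
    # false acceptance rate (impostors at or above the threshold) meets
    # the false rejection rate (genuines below it). Higher = more similar.
    thr = np.sort(np.concatenate([gen, imp]))
    far = np.array([(imp >= t).mean() for t in thr])
    frr = np.array([(gen < t).mean() for t in thr])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2

# Synthetic genuine/impostor similarity scores, for illustration only.
rng = np.random.default_rng(0)
gen = rng.normal(1.0, 0.2, 1000)
imp = rng.normal(0.0, 0.2, 1000)
e, d = eer(gen, imp), decidability(gen, imp)
print(f"EER = {e:.2%}, decidability = {d:.2f}")
```

Well-separated distributions, as in this synthetic example, give a large d' and a near-zero EER, which is the pattern the 30–50 Hz rows above exhibit.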
EER reported on EO–EC. EER presented in percentage. #Different evaluation protocol.
| Reports | Approach | EER (%) |
|---|---|---|
| — | Eigenvector Centrality | 4.40 |
| — | CNN + LSTM | 0.41 |
| Proposed Method | CNN | 0.19 |
Figure 6. DET curves comparing all evaluated spectra with Architecture A.
EER obtained for different strides. EER presented in percentage.
| Stride | 20 | 40 | 60 | 80 | 100 | 120 | 140 | 160 | 180 | 200 |
|---|---|---|---|---|---|---|---|---|---|---|
| EER | 0.09 | 0.09 | 0.37 | 0.76 | 0.65 | 0.65 | 0.74 | 0.83 | 0.37 | 1.11 |
Figure 7. Selected electrodes on the motor cortex, frontal lobe and occipital lobe. Based on Yang, Deravi & Hoque (2018).
EER reported for mismatched training/testing. EER presented in percentage.
| Train \ Test | T1R2 | T2R2 | T3R2 | T4R2 | EO | EC |
|---|---|---|---|---|---|---|
| T1R1+T1R3 | 0.12 | 0.29 | 0.42 | 0.42 | 0.10 | 0.37 |
| T2R1+T2R3 | 0.19 | 0.29 | 0.56 | 0.69 | 0.08 | 0.56 |
| T3R1+T3R3 | 0.21 | 0.19 | 0.29 | 0.19 | 0.18 | 0.36 |
| T4R1+T4R3 | 0.12 | 0.13 | 0.27 | 0.27 | 0.20 | 0.36 |
EER obtained by accumulating tasks/runs. EER presented in percentage.
| Protocol | Train | Test | EER (%) |
|---|---|---|---|
| P3.1 | T1R1 | T1R2 | 0.10 |
| P3.2 | T1R3 | T1R2 | 0.44 |
| P3.3 | T1R1, T1R3 | T1R2 | 0.22 |
| P3.4 | T2R1, T2R2, T2R3 | T1R2 | 0.25 |
| P3.5 | T1R1, T1R3, T2R1 | T1R2 | 1.25 |
| P3.6 | T1R1, T2R1, T2R2, T2R3 | T1R2 | 1.14 |
| P3.7 | T1R1, T1R3, T2R1, T2R2 | T1R2 | 1.29 |
| P3.8 | T1R1, T1R3, T2R1, T2R2, T2R3 | T1R2 | 1.27 |
| P3.9 | T1R1, T1R3, T2R1, T2R2, T2R3, T3R1 | T1R2 | 1.58 |
| P3.10 | T1R1, T1R3, T2R1, T2R2, T2R3, T3R1, T4R1 | T1R2 | 1.47 |
| P3.11 | T1R1, T1R3, T2R1, T2R2, T2R3, T3R1, T4R1, T4R2 | T1R2 | 1.91 |
EER reported on T1R2. EER presented in percentage.
| Reports | Train | Test | EER (%) |
|---|---|---|---|
| — | T1 & T2 | T1R2 | 2.63 |
| Proposed Approach | T1 & T2 | T1R2 | 0.27 |
EER reported on T1R2. EER presented in percentage.
| Test | Train | EER (%) | EER (%) |
|---|---|---|---|
| T1R2 | T1R1 | 0.21 | 0.17 |
| T1R2 | T2R1 | 0.25 | 0.15 |
| T1R2 | T3R1 | 0.10 | 0.19 |
| T1R2 | T4R1 | 0.09 | 0.19 |
| T1R2 | T2R2 | 0.15 | 0.17 |
| T1R2 | T3R2 | 0.12 | 0.25 |
| T1R2 | T4R2 | 0.39 | 0.26 |
| T1R2 | T1R3 | 0.06 | 0.17 |
| T1R2 | T2R3 | 0.12 | 0.04 |
| T1R2 | T3R3 | 0.08 | 0.02 |
| T1R2 | T4R3 | 0.27 | 0.06 |
Results obtained with SE Model 2 with r = 2 (SE2r2) and SE Model 3 with r = 32 (SE3r32), both trained with signals from half of the individuals.
| Train | Test | CNN Arch. A EER (%) | SE2r2 EER (%) | SE3r32 EER (%) |
|---|---|---|---|---|
| — | — | 0.55 | 0.18 | 0.92 |
| — | — | 0.93 | 0.51 | 0.39 |
| — | — | 0.40 | 0.41 | 0.55 |
| — | — | 0.97 | 0.36 | 1.11 |