Manorot Borirakarawin, Yunyong Punsawad.
Abstract
Herein, we developed an auditory stimulus pattern for an event-related potential (ERP)-based brain-computer interface (BCI) system to improve control and communication for people with quadriplegia and visual impairment. Auditory stimulus paradigms for multicommand electroencephalogram (EEG)-based BCIs and audio stimulus patterns were examined. With the proposed auditory stimulation, using selected Thai vowel sounds (similar to English vowels) and Thai numeral sounds as simple recognition targets, we explored the ERP responses and classification efficiency from the suggested EEG channels. We also investigated the use of single and multiple loudspeakers for auditory stimulation. Four commands were created using the proposed paradigm. The experimental paradigm was designed to observe ERP responses and verify the proposed auditory stimulus pattern; a conventional classification method produced four commands from the proposed auditory stimulus pattern. The results established that the proposed auditory stimulation with 20 to 30 trials of stream stimuli could produce a prominent ERP response at the Pz channel. The vowel stimuli achieved higher accuracy than the proposed numeral stimuli for both auditory stimulus intervals (100 and 250 ms). Additionally, multi-loudspeaker patterns with vowel and numeral sound stimulation provided an average accuracy greater than 85%. Thus, the proposed auditory stimulation patterns can be implemented in a real-time BCI system to aid the daily activities of quadriplegic patients with visual and tactile impairments. In future work, practical use of the auditory ERP-based BCI system will be demonstrated and verified in a real-world scenario.
Keywords: auditory stimulation; brain–computer interface; electroencephalography; event-related potential (ERP)
Year: 2022 PMID: 35957419 PMCID: PMC9371073 DOI: 10.3390/s22155864
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
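The abstract reports that averaging 20 to 30 stimulus trials yields a prominent ERP at the Pz channel. The principle is that time-locked averaging attenuates background EEG by roughly 1/sqrt(N trials) while preserving the stimulus-locked deflection. A minimal synthetic sketch (sampling rate, amplitudes, and all names here are illustrative assumptions, not values from the paper):

```python
import numpy as np

FS = 250            # sampling rate in Hz (assumed)
N_TRIALS = 30       # upper end of the 20-30 trials reported in the abstract
EPOCH_S = 0.8       # epoch length after stimulus onset, in seconds (assumed)

rng = np.random.default_rng(0)
t = np.arange(int(FS * EPOCH_S)) / FS

# P300-like template: a Gaussian deflection peaking ~300 ms after onset.
p300 = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

# Each single trial is the template buried in EEG-like noise (microvolts).
trials = p300 + rng.normal(0.0, 5.0, size=(N_TRIALS, t.size))

# Averaging across trials shrinks the noise by ~1/sqrt(N_TRIALS),
# letting the time-locked component emerge from single-trial noise.
erp = trials.mean(axis=0)
peak_latency_ms = 1000.0 * t[np.argmax(erp)]
print(f"peak latency ~ {peak_latency_ms:.0f} ms")
```

With 30 trials the averaged noise standard deviation drops to about 18% of its single-trial level, which is why the deflection that is invisible in any one trial dominates the average.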
Research studies on auditory stimuli for event-related potential (ERP)-based brain–computer interfaces (BCIs).
| Author | Proposed Method | Auditory Stimuli | Speakers | Electrodes | Result(s) |
|---|---|---|---|---|---|
| Pokorny et al., 2013 | Auditory P300-based single-switch brain–computer interface | Low tones (396 Hz) with deviants (297 Hz) in 300 ms; high tones (1900 Hz) with deviants (2640 Hz) in 600 ms | Earphones | F3, Fz, F4, C3, Cz, C4, P3, Pz, P4 | Mean classification accuracies between 69% and 83.5% with a stepwise linear discriminant analysis classifier in 12 minimally conscious state patients |
| Cai et al., 2015 | Spatial auditory brain–computer interface | White- and pink-noise stimuli of 30 ms length with 5 ms | Eight loudspeakers | P3, P4, P5, P6, Cz, | The proposed model boosted results by up to +10.43 bit/min (+44% classification accuracy) |
| Matsumoto et al., 2013 | P300 responses to various vowel stimuli for an auditory BCI | Vowel stimuli | In-ear earphones | Cz, CPz, POz, Pz, P1, P2, C3, C4, O1, O2, T7, T8, P3, P4, F3, F4 | Classification accuracies between 60% and 100% |
| Halder et al., 2016 | Auditory P300 BCI via the Hiragana syllabary | 46 Hiragana characters with a two-step procedure: first the consonant (ten choices), then the vowel (five choices) | Earphones | AF7, FPz, AF8, F3, Fz, F4, C3, Cz, C4, CP3, CPz, CP4, P4, Pz, P4, POz | Mean classification accuracy above 70% |
| Onishi et al., 2017 | Affective stimuli for an auditory P300 BCI | Positive and negative affective sounds | Earphones | C3, Cz, C4, P3, Pz, P4, O1, O2 | ALS patients achieved 90% online classification accuracy |
| Simon et al., 2015 | An auditory multi-class BCI for a speller | Natural stimuli from five animal voices (duck, bird, frog, seagull, and dove); each tone had a length of 150 ms and an ISI of 250 ms | Headphones | F3, Fz, F4, C5, C3, C1, Cz, C2, C4, C6, CP5, CP3, CP1, CPz, CP2, CP4, CP6, P3, P1, Pz, P2, P4, PO7, PO3, POz, PO4, PO8, Oz | Healthy participants achieved an average accuracy of 90%; the average accuracy of ALS patients was 47% |
| Kaongoen & Jo, 2017 | ASSR/P300 hybrid | Two sounds with 1 kHz and 2 kHz pitches and 37 Hz and 43 Hz AM frequencies | Earphones | Fz, Cz, T3, T4, Pz, P3, P4, Oz | The maximum average classification accuracy for each setting is P300: 98.38% and ASSR: |
Figure 1. (a) Components of the proposed auditory BCI system. (b) Electrode placement for 19 channels based on the 10–20 system.
Figure 2. Two patterns of loudspeaker position settings for auditory stimulation. (a) Single-loudspeaker position setting. (b) Four-loudspeaker position setting in front of the participant.
Vowel and numeral sounds for auditory-evoked potential stimulation.
| Vowels (Thai) | Vowels (English) | Numerals (Thai) | Numerals (English) |
|---|---|---|---|
| เอ | a | หนึ่ง | neung |
| อี | e | สอง | søng |
| ไอ | i | สาม | sām |
| โอ | o | สี่ | sī |
Figure 3. Experimental paradigm of the auditory stimulus, used to investigate the proposed auditory stimuli and the loudspeaker position settings (auditory stimulus intervals (ASI): 100 and 250 ms).
Targets of the auditory ERP components using vowel sounds as stimuli with durations of 100 and 250 ms.
| Session | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Target | a | i | e | o | i | i | o | e | e | i | o | a |
Targets of the auditory ERP components using numeral sounds as stimuli with durations of 100 and 250 ms.
| Session | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Target | neung | sām | søng | sī | sām | neung | sī | søng | søng | sām | sī | neung |
Figure 4. Experimental setup of auditory stimulus-evoked EEG potentials. (a) Auditory stimulus with a single loudspeaker. (b) Auditory stimulus with multiple loudspeakers.
Figure 5. Example of grand-averaged ERPs obtained from all subjects for the 250 ms auditory stimulus interval. (a) Grand-averaged ERPs for numeral-sound stimuli produced via a single loudspeaker. (b) Grand-averaged ERPs for numeral-sound stimuli produced via multiple loudspeakers.
Figure 6. Average classification accuracy at different electrode positions for the proposed auditory ERP stimulation using SWLDA. Confidence interval (α = 0.05).
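The classifier named in Figure 6, SWLDA (stepwise linear discriminant analysis), is the conventional method for separating target from non-target ERP epochs. As a simplified stand-in that omits the stepwise feature-selection stage, a plain two-class Fisher LDA with a pooled covariance can be sketched on synthetic epoch features (all data and names below are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "target" (1) vs "non-target" (0) epoch features, e.g. Pz
# amplitudes sampled at a few post-stimulus time points (illustrative).
n, d = 200, 8
X = np.vstack([rng.normal(0.0, 1.0, (n, d)) + 1.0,   # targets carry a deflection
               rng.normal(0.0, 1.0, (n, d))])        # non-targets are noise only
y = np.array([1] * n + [0] * n)

# Fisher LDA: w = Sigma^-1 (mu1 - mu0), with a pooled covariance estimate
# and the decision threshold at the midpoint of the projected class means.
mu1, mu0 = X[y == 1].mean(axis=0), X[y == 0].mean(axis=0)
Xc = np.vstack([X[y == 1] - mu1, X[y == 0] - mu0])
sigma = Xc.T @ Xc / (len(X) - 2)
w = np.linalg.solve(sigma, mu1 - mu0)
b = -0.5 * w @ (mu1 + mu0)

pred = (X @ w + b > 0).astype(int)
acc = (pred == y).mean()
print(f"training accuracy: {acc:.2f}")
```

Full SWLDA would additionally add and remove features by p-value thresholds before fitting the discriminant; the linear projection and midpoint threshold shown here are the part both methods share.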
Figure 7. Average classification accuracy for auditory stimulus intervals of 100 ms and 250 ms. Confidence interval (α = 0.01).
Results of the average classification accuracy for different stimulus intervals (tone durations) of vowels and numerals for each participant.
| Participants | Vowels, ASI: 100 ms | Vowels, ASI: 250 ms | Numerals, ASI: 100 ms | Numerals, ASI: 250 ms |
|---|---|---|---|---|
| 1 | 88.8 | 87.5 | 80.0 | 88.5 |
| 2 | 90.3 | 86.3 | 84.5 | 88.0 |
| 3 | 90.0 | 84.8 | 79.5 | 84.5 |
| 4 | 90.8 | 86.0 | 72.5 | 82.5 |
| 5 | 85.0 | 83.8 | 73.5 | 81.7 |
| 6 | 92.5 | 86.5 | 77.5 | 85.0 |
| 7 | 90.0 | 90.5 | 77.5 | 87.5 |
| 8 | 87.5 | 87.5 | 72.5 | 77.5 |
| 9 | 91.5 | 82.5 | 80.0 | 82.8 |
| 10 | 85.4 | 88.9 | 69.4 | 89.0 |
| 11 | 83.5 | 83.5 | 82.5 | 78.5 |
| 12 | 86.5 | 82.5 | 80.3 | 80.0 |
| Mean ± SD. | 88.5 ± 2.86 | 85.8 ± 2.53 | 77.5 ± 4.57 | 83.8 ± 3.95 |
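The summary row can be reproduced directly from the per-participant values; for instance, the vowel column at ASI 100 ms (using the sample standard deviation, which matches the reported 2.86):

```python
import numpy as np

# Per-participant vowel accuracies at ASI 100 ms, copied from the table above.
vowels_100 = [88.8, 90.3, 90.0, 90.8, 85.0, 92.5,
              90.0, 87.5, 91.5, 85.4, 83.5, 86.5]

mean = np.mean(vowels_100)
sd = np.std(vowels_100, ddof=1)   # ddof=1 gives the sample SD
print(f"{mean:.1f} ± {sd:.2f}")   # prints "88.5 ± 2.86"
```

The same computation applied to the other three columns reproduces the remaining entries of the Mean ± SD row.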
Figure 8. Average classification accuracy for vowel and numeral sounds at auditory stimulus intervals of 100 ms and 250 ms.
Results of the average classification accuracy under different loudspeaker patterns of vowel and numeral stimuli for each participant.
| Participants | Single, Vowels | Single, Numerals | Multi, Vowels | Multi, Numerals |
|---|---|---|---|---|
| 1 | 87.5 | 75.0 | 88.8 | 93.5 |
| 2 | 82.8 | 84.5 | 93.8 | 88.0 |
| 3 | 93.5 | 80.0 | 81.3 | 84.0 |
| 4 | 82.3 | 72.5 | 94.5 | 82.5 |
| 5 | 85.0 | 71.7 | 83.8 | 83.5 |
| 6 | 91.5 | 80.0 | 87.5 | 82.5 |
| 7 | 97.5 | 70.0 | 90.0 | 95.0 |
| 8 | 87.5 | 67.5 | 87.5 | 82.5 |
| 9 | 86.5 | 77.8 | 87.5 | 85.0 |
| 10 | 85.5 | 71.5 | 88.8 | 86.9 |
| 11 | 71.0 | 77.5 | 86.0 | 83.5 |
| 12 | 81.5 | 72.8 | 77.5 | 87.5 |
| Mean ± SD. | 86.0 ± 6.71 | 75.1 ± 4.95 | 87.2 ± 4.79 | 86.2 ± 4.24 |
Figure 9. Average classification accuracy for vowel and numeral sounds of the auditory ERP stimulus with single and multi-loudspeaker patterns.