Steffen Dasenbrock¹,², Sarah Blum²,³, Paul Maanen²,³, Stefan Debener²,³, Volker Hohmann¹,², Hendrik Kayser¹,²
Abstract
Recent advancements in neuroscientific research and miniaturized ear-electroencephalography (EEG) technologies have led to the idea of employing brain signals as additional input to hearing aid algorithms. The information acquired through EEG could potentially be used to control the audio signal processing of the hearing aid or to monitor communication-related physiological factors. In previous work, we implemented a research platform to develop methods that utilize EEG in combination with a hearing device. The setup combines currently available mobile EEG hardware and the so-called Portable Hearing Laboratory (PHL), which can fully replicate a complete hearing aid. Audio and EEG data are synchronized using the Lab Streaming Layer (LSL) framework. In this study, we evaluated the setup in three scenarios focusing particularly on the alignment of audio and EEG data. In Scenario I, we measured the latency between software event markers and actual audio playback of the PHL. In Scenario II, we measured the latency between an analog input signal and the sampled data stream of the EEG system. In Scenario III, we measured the latency in the whole setup as it would be used in a real EEG experiment. The results of Scenario I showed a jitter (standard deviation of trial latencies) of below 0.1 ms. The jitter in Scenarios II and III was around 3 ms in both cases. The results suggest that the increased jitter compared to Scenario I can be attributed to the EEG system. Overall, the findings show that the measurement setup can time-accurately present acoustic stimuli while generating LSL data streams over multiple hours of playback. Further, the setup can capture the audio and EEG LSL streams with sufficient temporal accuracy to extract event-related potentials from EEG signals. We conclude that our setup is suitable for studying closed-loop EEG & audio applications for future hearing aids.
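The latency analysis summarized in the abstract (per-trial latencies Δt between rising edges of a square-wave test signal, with lag as the mean and jitter as the standard deviation) can be sketched in a few lines. This is an illustrative reconstruction, not the authors' analysis code; the function names are assumptions.

```python
import statistics

def rising_edges(timestamps, samples, threshold=0.5):
    """Timestamps at which a sampled square wave crosses the
    threshold upwards (rising edges)."""
    return [timestamps[i] for i in range(1, len(samples))
            if samples[i - 1] < threshold <= samples[i]]

def lag_and_jitter(edges_a, edges_b):
    """Pair every edge of stream A with the nearest edge of stream B
    and return (lag, jitter): the mean and the standard deviation of
    the per-trial latencies dt = t_B - t_A."""
    latencies = [min(edges_b, key=lambda tb: abs(tb - ta)) - ta
                 for ta in edges_a]
    return statistics.mean(latencies), statistics.pstdev(latencies)
```

For example, events spaced one second apart whose markers arrive about 30 ms late yield a lag near 30 ms and a sub-millisecond jitter, mirroring the Scenario I results.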
Keywords: cEEGrid; ear-EEG; hearing aids; jitter; mobile EEG; neuro-steered hearing device; portable setup; timing
Year: 2022 PMID: 36117630 PMCID: PMC9475108 DOI: 10.3389/fnins.2022.904003
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 5.152
Figure 1. Sketch of the hearing aid and EEG setup as worn during a measurement. The Portable Hearing Laboratory (PHL) is carried around the neck by a strap connected to the back of the device. Acoustic stimuli are presented via the hearing aids connected to the PHL. The around-the-ear EEG sensor cEEGrid measures the neural activity. The cEEGrid is connected to the mobile wireless Smarting EEG amplifier. Both audio and EEG data streams are captured on the PHL for further processing and recording. Figure adapted from Dasenbrock et al. (2021).
Figure 2. Photo of the Portable Hearing Laboratory (PHL) and table with a selection of hardware and software features. The hardware consists of a portable main unit and a binaural 4-microphone behind-the-ear (BTE) hearing aid headset. Photo adapted from Kayser et al. (2022).
Figure 3. (A) System diagram of the measurement setup. The diagram shows the signal flow, illustrating which components of the setup produce the different signals and data streams and how they are related. Black lines denote physical voltage signals; red lines denote LSL streams. The setup combines the Portable Hearing Laboratory (PHL, left) and the EEG system (right). The PHL operates in a sender-receiver architecture. The sender instance (top left) presents the acoustic stimuli (audio out) to the subject via the hearing aids; the physical voltage signal is measured at measurement point a. During stimulus playback, the sender instance simultaneously creates an audio event marker LSL stream containing event markers that indicate specific time points in the audio signal; this stream is measured at point d. The resulting EEG voltage signal (EEG in, measurement point b) is amplified using the mobile EEG amplifier. The smartphone receives the EEG data via a Bluetooth connection and creates an LSL stream of the EEG data, i.e., the EEG LSL stream (measurement point c). The receiver instance captures both the audio event marker LSL stream and the EEG LSL stream. (B) Timing diagrams for all three timing test scenarios. Timing diagrams relate two signals or streams in time (x-axis). Square-wave signals were used to test the timing of the setup. The time difference between two related time points defines the trial latency Δt, measured approximately every second. In timing test Scenario I (left), Δt was computed by comparing the rising edges in the voltage signal audio out (a) and the audio event marker LSL stream (d). In timing test Scenario II (center), Δt was computed by comparing the rising edges in the voltage signal EEG in (b) and the EEG LSL stream (c). In timing test Scenario III (right), Δt was computed by comparing the rising edges in the EEG LSL stream (c) and the audio event marker LSL stream (d).
Measurement results in terms of lag, jitter, and across-session range ΔR for all three timing test scenarios.
| Duration | Test | Lag I (ms) | Lag II (ms) | Lag III (ms) | Jitter I (ms) | Jitter II (ms) | Jitter III (ms) |
|---|---|---|---|---|---|---|---|
| 15 min | 1 | 32.69 | −1.56 | 29.75 | 0.07 | 3.06 | 1.22 |
|  | 2 | 32.76 | −53.07 | 24.92 | 0.09 | 3.91 | 1.50 |
|  | 3 | 32.81 | −10.50 | 37.98 | 0.07 | 3.02 | 1.49 |
|  | 4 | 32.68 | −1.98 | 29.80 | 0.09 | 2.05 | 1.41 |
|  | 5 | 32.67 | −21.67 | 15.36 | 0.07 | 1.84 | 1.24 |
|  | ΔR | 0.14 | 51.51 | 22.62 | 0.02 | 2.07 | 0.28 |
| 3 h | 1 | 32.64 | −13.20 | 25.19 | 0.09 | 3.82 | 3.33 |
|  | 2 | 32.65 | −7.89 | 24.61 | 0.09 | 2.84 | 2.99 |
|  | ΔR | 0.01 | 5.31 | 0.58 | 0.00 | 0.98 | 0.34 |
ΔR refers to the difference between the maximum and the minimum value of lag and jitter across all timing tests within one scenario and duration. The columns labeled with Roman numerals belong to the respective timing test Scenarios I–III (see Sections 2.2.1–2.2.3). Five measurements were performed in the 15 min condition (top); two measurements were performed in the 3 h condition (bottom).
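The across-session range ΔR defined in the table note is simply the spread of a lag or jitter statistic over the repeated tests. As a minimal illustration (not the authors' code):

```python
def across_session_range(values):
    """dR: maximum minus minimum of a lag or jitter statistic across
    all timing tests of one scenario and duration."""
    return max(values) - min(values)
```

Applied to the five 15 min Scenario I lag values (32.69, 32.76, 32.81, 32.68, 32.67 ms), this reproduces the tabulated ΔR of 0.14 ms.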
Figure 4. Latency-recording time curves for all timing test scenarios. The plots show the measured latency (y-axis) in milliseconds over recording time (x-axis) in minutes (left) or hours (right). Five measurement runs were performed in the 15 min condition (left column); two measurement runs were performed in the 3 h condition (right column), denoted by different colors. The upper row shows the results for timing test Scenario I: sender instance timing (see Section 2.2.1); the middle row shows the results for timing test Scenario II: EEG system timing (see Section 2.2.2); and the bottom row shows the results for timing test Scenario III: in-the-loop timing (Section 2.2.3).
Figure 5. Histogram of one of the two 3 h recordings showing the differences between clock correction offset values obtained using openMHA's receiver instance and LabRecorder. Outlier values outside of ±10 ms were included in the outermost 10 ms bins.
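The binning strategy described in the caption (out-of-range values folded into the outermost bins) can be reproduced with a small helper. The function name and bin width are assumptions for illustration, not taken from the paper.

```python
def clipped_histogram(offsets_ms, limit=10.0, bin_width=1.0):
    """Histogram of clock-correction offset differences over
    [-limit, +limit] ms; values outside that range are clipped so
    they are counted in the outermost bins."""
    n_bins = int(2 * limit / bin_width)
    counts = [0] * n_bins
    for v in offsets_ms:
        v = max(-limit, min(limit, v))  # fold outliers to the range edges
        idx = min(int((v + limit) / bin_width), n_bins - 1)
        counts[idx] += 1
    return counts
```

With `limit=10` and `bin_width=1`, a value of −12 ms lands in the first bin and +11 ms in the last, while in-range values fall into their regular bins.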