| Literature DB >> 30440332 |
Sina Miran, Sahar Akram, Alireza Sheikhattar, Jonathan Z Simon, Tao Zhang, Behtash Babadi.
Abstract
In a complex auditory scene comprising multiple sound sources, humans are able to target and track a single speaker. Recent studies have provided promising algorithms to decode the attentional state of a listener in a competing-speaker environment from non-invasive brain recordings such as electroencephalography (EEG). These algorithms require substantial training datasets and often exhibit poor performance at temporal resolutions suitable for real-time implementation, which hinders their utilization in emerging applications such as smart hearing aids. In this work, we propose a real-time attention decoding framework by integrating techniques from Bayesian filtering, $\ell_{1}$-regularization, state-space modeling, and Expectation Maximization, which is capable of producing robust and statistically interpretable measures of auditory attention at high temporal resolution. Application of our proposed algorithm to synthetic and real EEG data yields a performance close to that of state-of-the-art offline methods, while operating in near real-time with a minimal amount of training data.
Entities:
Mesh:
Year: 2018 PMID: 30440332 DOI: 10.1109/EMBC.2018.8512210
Source DB: PubMed Journal: Annu Int Conf IEEE Eng Med Biol Soc ISSN: 2375-7477
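As a minimal sketch of the $\ell_{1}$-regularized decoding step the abstract mentions (not the authors' full framework, which also includes Bayesian filtering, state-space modeling, and EM): a lasso-style decoder is fit to map multichannel EEG to the attended speaker's speech envelope, and attention is then inferred by comparing the correlation of the reconstructed envelope with each candidate speaker. All data below are synthetic, and the ISTA solver and toy dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (synthetic, illustrative): two speech envelopes, with the EEG
# driven by the attended speaker's envelope plus noise.
T, C = 2000, 8                              # time samples, EEG channels
env_a = np.abs(rng.standard_normal(T))      # attended-speaker envelope
env_b = np.abs(rng.standard_normal(T))      # unattended-speaker envelope
mix = rng.standard_normal(C)                # per-channel mixing weights
eeg = np.outer(env_a, mix) + 0.5 * rng.standard_normal((T, C))

def fit_lasso(X, y, lam=0.1, n_iter=500):
    """l1-regularized least squares via ISTA (proximal gradient descent)."""
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz const of grad
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)            # gradient of 0.5*||Xw - y||^2
        w = w - step * grad
        # soft-thresholding: proximal operator of the l1 penalty
        w = np.sign(w) * np.maximum(np.abs(w) - step * lam, 0.0)
    return w

# Fit the decoder on the attended envelope, then reconstruct from EEG.
w = fit_lasso(eeg, env_a)
rec = eeg @ w

def corr(a, b):
    return float(np.corrcoef(a, b)[0, 1])

# Attention marker: which speaker's envelope does the reconstruction track?
m_a, m_b = corr(rec, env_a), corr(rec, env_b)
print(m_a > m_b)  # decoder output should correlate more with the attended speaker
```

In the paper's real-time setting, such correlation-based attention markers are computed over short windows and then denoised through a state-space model fit by EM, which is what yields the robust, statistically interpretable attention measures at high temporal resolution.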