
The extraction of motion-onset VEP BCI features based on deep learning and compressed sensing.

Teng Ma, Hui Li, Hao Yang, Xulin Lv, Peiyang Li, Tiejun Liu, Dezhong Yao, Peng Xu.

Abstract

BACKGROUND: Motion-onset visual evoked potentials (mVEP) provide a softer stimulus with reduced fatigue and have potential applications in brain-computer interface (BCI) systems. However, the mVEP waveform is heavily masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features for task recognition in BCI control.
NEW METHOD: In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information and improve mVEP BCI performance.
RESULTS: The deep learning and compressed sensing approach generates multi-modality features that effectively improve BCI performance, with an accuracy increase of approximately 3.5% across all 11 subjects, and it is especially helpful for those subjects with relatively poor performance when using the conventional features.
COMPARISON WITH EXISTING METHODS: Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach achieves higher classification accuracy and is more effective for subjects with relatively poor performance.
CONCLUSIONS: According to the results, the deep learning and compressed sensing approach is more effective for extracting mVEP features to construct the corresponding BCI system, and the proposed feature extraction framework is easy to extend to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP), and P300.
Copyright © 2016 Elsevier B.V. All rights reserved.
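The abstract does not specify the implementation details of the compressed sensing step, but the core idea — projecting a noisy single-trial mVEP epoch onto a random sensing matrix to obtain a compact feature vector for classification — can be sketched as follows. This is a minimal illustrative sketch, not the authors' pipeline: the function name `compress_epoch`, the epoch length, the measurement count `m`, and the simulated N200-like deflection are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def compress_epoch(epoch, m, rng):
    """Project a single-channel EEG epoch of length n onto m compressed
    measurements via a random Gaussian sensing matrix: y = Phi @ x."""
    n = epoch.shape[0]
    # Entries scaled by 1/sqrt(m), a common choice for Gaussian sensing matrices.
    phi = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))
    return phi @ epoch

# Simulated mVEP-like epoch: 200 samples of background EEG noise
# plus a crude deflection standing in for a motion-onset component.
epoch = rng.normal(0.0, 1.0, 200)
epoch[80:120] += np.hanning(40) * 3.0

features = compress_epoch(epoch, m=32, rng=rng)
print(features.shape)  # (32,)
```

In a full system of the kind the abstract describes, a feature vector like this would be concatenated with (or fed alongside) deep-learning-derived features to form the multi-modality representation used by the classifier.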

Entities:  

Keywords:  Brain computer interface; Compressed sensing; Deep learning; Motion-onset VEP; Multi-modality feature

Mesh:

Year:  2016        PMID: 27845150     DOI: 10.1016/j.jneumeth.2016.11.002

Source DB:  PubMed          Journal:  J Neurosci Methods        ISSN: 0165-0270            Impact factor:   2.390


  5 in total

1.  Classifying the Perceptual Interpretations of a Bistable Image Using EEG and Artificial Neural Networks.

Authors:  Alexander E Hramov; Vladimir A Maksimenko; Svetlana V Pchelintseva; Anastasiya E Runnova; Vadim V Grubov; Vyacheslav Yu Musatov; Maksim O Zhuravlev; Alexey A Koronovskii; Alexander N Pisarchik
Journal:  Front Neurosci       Date:  2017-12-04       Impact factor: 4.677

2.  Deep Learning Convolutional Neural Networks Discriminate Adult ADHD From Healthy Individuals on the Basis of Event-Related Spectral EEG.

Authors:  Laura Dubreuil-Vall; Giulio Ruffini; Joan A Camprodon
Journal:  Front Neurosci       Date:  2020-04-09       Impact factor: 4.677

3.  Accelerated sparsity based reconstruction of compressively sensed multichannel EEG signals.

Authors:  Muhammad Tayyib; Muhammad Amir; Umer Javed; M Waseem Akram; Mussyab Yousufi; Ijaz M Qureshi; Suheel Abdullah; Hayat Ullah
Journal:  PLoS One       Date:  2020-01-07       Impact factor: 3.240

4.  Highly Interactive Brain-Computer Interface Based on Flicker-Free Steady-State Motion Visual Evoked Potential.

Authors:  Chengcheng Han; Guanghua Xu; Jun Xie; Chaoyang Chen; Sicong Zhang
Journal:  Sci Rep       Date:  2018-04-11       Impact factor: 4.379

5.  Deep Learning for Automatically Visual Evoked Potential Classification During Surgical Decompression of Sellar Region Tumors.

Authors:  Nidan Qiao; Mengju Song; Zhao Ye; Wenqiang He; Zengyi Ma; Yongfei Wang; Yuyan Zhang; Xuefei Shou
Journal:  Transl Vis Sci Technol       Date:  2019-11-20       Impact factor: 3.283

