Teng Ma1, Hui Li1, Hao Yang1, Xulin Lv1, Peiyang Li1, Tiejun Liu2, Dezhong Yao3, Peng Xu4. 1. Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China. 2. Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China; Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 610054, China. 3. Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China; Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 610054, China. Electronic address: dyao@uestc.edu.cn. 4. Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, 610054, China; Center for Information in BioMedicine, University of Electronic Science and Technology of China, Chengdu, 610054, China. Electronic address: xupeng@uestc.edu.cn.
Abstract
BACKGROUND: Motion-onset visual evoked potentials (mVEP) provide a softer stimulus with reduced visual fatigue and have potential applications in brain-computer interface (BCI) systems. However, the mVEP waveform is heavily masked by strong background EEG activity, and an effective approach is needed to extract the corresponding mVEP features for task recognition in BCI control. NEW METHOD: In the current study, we combine deep learning with compressed sensing to mine discriminative mVEP information and improve mVEP-based BCI performance. RESULTS: The deep learning and compressed sensing approach generates multi-modality features that effectively improve BCI performance, with an accuracy increase of approximately 3.5% across all 11 subjects, and it is especially effective for subjects who perform relatively poorly with the conventional features. COMPARISON WITH EXISTING METHODS: Compared with the conventional amplitude-based mVEP feature extraction approach, the deep learning and compressed sensing approach achieves higher classification accuracy and is more effective for subjects with relatively poor performance. CONCLUSIONS: These results indicate that the deep learning and compressed sensing approach is more effective for extracting mVEP features to construct the corresponding BCI system, and the proposed feature extraction framework can readily be extended to other types of BCIs, such as motor imagery (MI), steady-state visual evoked potential (SSVEP), and P300.
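The abstract does not give implementation details, but the compressed-sensing step it describes can be sketched as projecting each single-trial mVEP epoch onto a random sensing matrix and combining the resulting measurements with conventional amplitude features. The following is a minimal illustrative sketch, assuming a random Gaussian sensing matrix and window-averaged amplitude features; all names, dimensions, and the synthetic epoch are assumptions for illustration, not the authors' code.

```python
import numpy as np

# Hypothetical sketch (not the authors' implementation): compress a
# single-trial mVEP epoch with a random Gaussian sensing matrix and
# concatenate the measurements with conventional amplitude features.

rng = np.random.default_rng(0)

n_samples = 200       # time points per epoch (assumed, e.g. 0-800 ms at 250 Hz)
n_measurements = 40   # compressed dimension m << n (assumed)

# Random Gaussian sensing matrix, a standard choice in compressed sensing
phi = rng.standard_normal((n_measurements, n_samples)) / np.sqrt(n_measurements)

# Stand-in for one preprocessed single-trial EEG epoch
epoch = rng.standard_normal(n_samples)

# Compressed measurements y = Phi @ x
cs_features = phi @ epoch

# Conventional amplitude-based features: mean amplitude in coarse time windows
amp_features = epoch.reshape(10, 20).mean(axis=1)

# Multi-modality feature vector fed to a downstream classifier
features = np.concatenate([cs_features, amp_features])
print(features.shape)
```

In a real pipeline the compressed measurements would be fed, together with deep-learning-derived features, into a classifier trained per subject; the sketch only shows how the dimensionality reduction step could look.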