| Literature DB >> 35087580 |
Xiyu Song1, Ying Zeng1,2, Li Tong1, Jun Shu1, Qiang Yang3, Jian Kou4, Minghua Sun5, Bin Yan1.
Abstract
The superiority of collaborative brain-computer interface (cBCI) in performance enhancement makes it an effective way to break through the performance bottleneck of the BCI-based dynamic visual target detection. However, the existing cBCIs focus on multi-mind information fusion with a static and unidirectional mode, lacking the information interaction and learning guidance among multiple agents. Here, we propose a novel cBCI framework to enhance the group detection performance of dynamic visual targets. Specifically, a mutual learning domain adaptation network (MLDANet) with information interaction, dynamic learning, and individual transferring abilities is developed as the core of the cBCI framework. MLDANet takes P3-sSDA network as individual network unit, introduces mutual learning strategy, and establishes a dynamic interactive learning mechanism between individual networks and collaborative decision-making at the neural decision level. The results indicate that the proposed MLDANet-cBCI framework can achieve the best group detection performance, and the mutual learning strategy can improve the detection ability of individual networks. In MLDANet-cBCI, the F1 scores of collaborative detection and individual network are 0.12 and 0.19 higher than those in the multi-classifier cBCI, respectively, when three minds collaborate. Thus, the proposed framework breaks through the traditional multi-mind collaborative mode and exhibits a superior group detection performance of dynamic visual targets, which is also of great significance for the practical application of multi-mind collaboration.Entities:
Year: 2022 PMID: 35087580 PMCID: PMC8789438 DOI: 10.1155/2022/4752450
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1. Collaborative BCI framework for detecting dynamic visual targets.
Figure 2. Experimental paradigm for vehicle detection in UAV video.
Figure 3. The EEG acquisition environment for the multi-mind synchronous experiment.
Figure 4. The architecture of MLDANet for group detection.
Figure 5. P3-sSDA network architecture.
Figure 6. The averaged P3 map of two groups: (a) strong P3 map; (b) weak P3 map.
Figure 7. The averaged ERP responses of source domain individuals.
Figure 8. Summary of different BCI frameworks: (a) sBCI framework; (b) SC-cBCI framework; (c) MC-cBCI framework; (d) MLDANet-cBCI framework.
Network parameter settings.
| Parameters | sBCI/SC-cBCI | MC-cBCI | ML-cBCI |
|---|---|---|---|
| P3-sSDA networks | 1 | 3 | 3 |
| Batch size | 40 | 40 | 40 |
| Learning rate | 0.0003 | 0.0003 | 0.0003 |
| Epochs | 100 | 100 | 100 |
| | 0.2 | 0.2 | 0.4 |
| | 0.8 | 0.8 | 0.8 |
| | 0.2 | 0.2 | 0.2 |
Detection performance of different BCI frameworks (p < 0.01).
| BCI frameworks | Accuracy | Hit rate | False alarm rate | F1 score |
|---|---|---|---|---|
| sBCI | 0.77 | 0.63 | 0.20 | 0.47 |
| SC-cBCI | 0.82 | 0.80 | 0.18 | 0.59 |
| MC-cBCI | 0.86 | 0.69 | 0.11 | 0.61 |
| MLDANet-cBCI | 0.91 | 0.72 | 0.05 | 0.73 |
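The framework-level F1 scores in the table above are consistent with the 0.12 collaborative-detection gain over MC-cBCI reported in the abstract; a quick arithmetic check (values transcribed from the table):

```python
# F1 scores per framework, transcribed from the table above
f1 = {"sBCI": 0.47, "SC-cBCI": 0.59, "MC-cBCI": 0.61, "MLDANet-cBCI": 0.73}

# MLDANet-cBCI has the highest F1 score of the four frameworks
best = max(f1, key=f1.get)

# Gap between MLDANet-cBCI and MC-cBCI, matching the abstract's 0.12
gap = round(f1["MLDANet-cBCI"] - f1["MC-cBCI"], 2)
```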
Figure 9. The model convergence in the MLDANet-cBCI framework: (a) the convergence of training loss; (b) the convergence of F1 score.
Detection performance of the individual network in the MC-cBCI and MLDANet-cBCI frameworks.
| Groups | MC-cBCI F1: Participant 1 | MC-cBCI F1: Participant 2 | MC-cBCI F1: Participant 3 | MLDANet-cBCI F1: Participant 1 | MLDANet-cBCI F1: Participant 2 | MLDANet-cBCI F1: Participant 3 |
|---|---|---|---|---|---|---|
| Group 1 | 0.55 | 0.37 | 0.53 | 0.71 | 0.63 | 0.65 |
| Group 2 | 0.43 | 0.48 | 0.37 | 0.56 | 0.61 | 0.53 |
| Group 3 | 0.42 | 0.54 | 0.44 | 0.33 | 0.60 | 0.71 |
| Group 4 | 0.40 | 0.49 | 0.52 | 0.63 | 0.69 | 0.67 |
| Group 5 | 0.43 | 0.56 | 0.45 | 0.71 | 0.76 | 0.76 |
| Group 6 | 0.45 | 0.48 | 0.50 | 0.70 | 0.72 | 0.74 |
| Group 7 | 0.48 | 0.41 | 0.44 | 0.55 | 0.55 | 0.60 |
| Group 8 | 0.49 | 0.40 | 0.50 | 0.69 | 0.66 | 0.68 |
| Group 9 | 0.38 | 0.55 | 0.47 | 0.59 | 0.74 | 0.65 |
| Group 10 | 0.33 | 0.53 | 0.57 | 0.45 | 0.78 | 0.74 |
| Group 11 | 0.32 | 0.60 | 0.52 | 0.41 | 0.79 | 0.70 |
| Group 12 | 0.50 | 0.61 | 0.32 | 0.70 | 0.71 | 0.52 |
| Group 13 | 0.49 | 0.41 | 0.61 | 0.72 | 0.73 | 0.73 |
| Group 14 | 0.33 | 0.51 | 0.50 | 0.54 | 0.71 | 0.70 |
| Group 15 | 0.49 | 0.35 | 0.50 | 0.69 | 0.53 | 0.69 |
| Group 16 | 0.48 | 0.33 | 0.63 | 0.68 | 0.61 | 0.86 |
| Group 17 | 0.53 | 0.64 | 0.41 | 0.74 | 0.76 | 0.62 |
| Group 18 | 0.51 | 0.52 | 0.39 | 0.73 | 0.75 | 0.70 |
| Group 19 | 0.33 | 0.60 | 0.44 | 0.52 | 0.75 | 0.65 |
| Group 20 | 0.54 | 0.37 | 0.42 | 0.75 | 0.63 | 0.62 |
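Averaged over all 20 groups and 3 participants, the per-participant scores above reproduce the ~0.19 individual-network gain reported in the abstract; a quick check with the values transcribed from the table:

```python
# Per-participant F1 scores from the table above (20 groups x 3 participants)
mc = [
    [0.55, 0.37, 0.53], [0.43, 0.48, 0.37], [0.42, 0.54, 0.44], [0.40, 0.49, 0.52],
    [0.43, 0.56, 0.45], [0.45, 0.48, 0.50], [0.48, 0.41, 0.44], [0.49, 0.40, 0.50],
    [0.38, 0.55, 0.47], [0.33, 0.53, 0.57], [0.32, 0.60, 0.52], [0.50, 0.61, 0.32],
    [0.49, 0.41, 0.61], [0.33, 0.51, 0.50], [0.49, 0.35, 0.50], [0.48, 0.33, 0.63],
    [0.53, 0.64, 0.41], [0.51, 0.52, 0.39], [0.33, 0.60, 0.44], [0.54, 0.37, 0.42],
]
ml = [
    [0.71, 0.63, 0.65], [0.56, 0.61, 0.53], [0.33, 0.60, 0.71], [0.63, 0.69, 0.67],
    [0.71, 0.76, 0.76], [0.70, 0.72, 0.74], [0.55, 0.55, 0.60], [0.69, 0.66, 0.68],
    [0.59, 0.74, 0.65], [0.45, 0.78, 0.74], [0.41, 0.79, 0.70], [0.70, 0.71, 0.52],
    [0.72, 0.73, 0.73], [0.54, 0.71, 0.70], [0.69, 0.53, 0.69], [0.68, 0.61, 0.86],
    [0.74, 0.76, 0.62], [0.73, 0.75, 0.70], [0.52, 0.75, 0.65], [0.75, 0.63, 0.62],
]

def mean(xs):
    return sum(xs) / len(xs)

mc_mean = mean([v for row in mc for v in row])  # MC-cBCI individual networks
ml_mean = mean([v for row in ml for v in row])  # MLDANet-cBCI individual networks
gain = ml_mean - mc_mean  # average per-individual improvement
```

Note that the improvement holds on average but not for every entry (e.g. Participant 1 in Group 3 drops from 0.42 to 0.33).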
Figure 10. Detection performance with different numbers of individuals in the source domain.