Literature DB >> 35173597
Lina Qiu, Yongshi Zhong, Qiuyou Xie, Zhipeng He, Xiaoyun Wang, Yingyue Chen, Chang'an A Zhan, Jiahui Pan.
Abstract
Music can effectively improve people's emotions and has become an effective auxiliary treatment in modern medicine. With the rapid development of neuroimaging, the relationship between music and brain function has attracted much attention. In this study, we proposed an integrated framework of multi-modal electroencephalogram (EEG) and functional near-infrared spectroscopy (fNIRS), from data collection to data analysis, to explore the effects of music (especially personally preferred music) on brain activity. During the experiment, each subject listened to two different kinds of music: personally preferred music and neutral music. In analyzing the synchronized EEG and fNIRS signals, we found that music promotes brain activity (especially in the prefrontal lobe) and that the activation induced by preferred music is stronger than that induced by neutral music. For the multi-modal EEG and fNIRS features, we proposed an improved Normalized-ReliefF method to fuse and optimize them, and found that it effectively improves the accuracy of distinguishing the brain activity evoked by preferred music from that evoked by neutral music (up to 98.38%). Our work provides an objective, neuroimaging-based reference for the research and application of personalized music therapy.
Keywords: brain activity; electroencephalogram (EEG); functional near-infrared spectroscopy (fNIRS); multi-modality; preferred music
Year: 2022 PMID: 35173597 PMCID: PMC8841473 DOI: 10.3389/fnbot.2022.823435
Source DB: PubMed Journal: Front Neurorobot ISSN: 1662-5218 Impact factor: 2.650
Figure 1. The paradigm design of this experiment.
Figure 2. Acquisition equipment and measuring cap. (A) View of the acquisition equipment; (B) acquisition amplifier of EEG; (C) laser light source of fNIRS; (D) view of the measuring cap; and (E) channel locations of EEG and fNIRS. The red circles represent the 16 light sources of fNIRS, the green circles the 15 detectors of fNIRS, the blue lines the 44 channels of fNIRS, and the gray circles the 32 EEG electrodes.
Figure 3. The overall framework of EEG-fNIRS multi-modal integration.
Figure 4. Averaged differential entropy (DE) distributions of all nine subjects in the five frequency bands induced by personal preferred music (A) and neutral music (B), and their DE difference distribution [(C), personal preferred music minus neutral music] in the five frequency bands. Frequency bands: δ: 0.5–3 Hz; θ: 4–7 Hz; α: 8–13 Hz; β: 14–30 Hz; γ: 30–50 Hz; All: 0.5–50 Hz.
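Figure 4 reports band-wise differential entropy. As a hedged illustration of how such values are commonly computed (assuming band-passed EEG is approximately Gaussian, so DE reduces to 0.5·ln(2πeσ²)), the sketch below uses the band edges from the caption; the sampling rate, filter design, and segment length are assumptions, not the authors' exact pipeline.

```python
# Minimal sketch: band-wise differential entropy (DE) for one EEG channel.
# Band edges follow the Figure 4 caption; everything else is assumed.
import numpy as np
from scipy.signal import butter, sosfiltfilt

BANDS = {"delta": (0.5, 3), "theta": (4, 7), "alpha": (8, 13),
         "beta": (14, 30), "gamma": (30, 50), "all": (0.5, 50)}

def band_de(x, fs, low, high, order=4):
    """DE of a band-passed signal, assuming it is roughly Gaussian:
    DE = 0.5 * ln(2 * pi * e * variance)."""
    sos = butter(order, [low, high], btype="band", fs=fs, output="sos")
    xf = sosfiltfilt(sos, x)
    return 0.5 * np.log(2 * np.pi * np.e * np.var(xf))

fs = 250                                  # assumed EEG sampling rate (Hz)
x = np.random.randn(fs * 10)              # placeholder for a 10 s EEG segment
de = {name: band_de(x, fs, lo, hi) for name, (lo, hi) in BANDS.items()}
print(de)
```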
Figure 5. HbO contrasts for (A) personal preferred music stimulus vs. baseline, (B) neutral music stimulus vs. baseline, and (C) personal preferred music vs. neutral music. The highlighted areas represent statistically significant differences (p < 0.05). The blue circles represent the light sources of fNIRS and the green circles the detectors of fNIRS; the darker the red, the smaller the p-value and the more significant the difference.
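The significance maps in Figure 5 can be read as channel-wise statistical contrasts. A minimal sketch of one such contrast (stimulus vs. baseline on mean HbO, paired t-test at p < 0.05) follows; the trial count, epoch definition, and choice of scipy's paired t-test are illustrative assumptions rather than the paper's exact statistical procedure.

```python
# Minimal sketch: channel-wise paired t-test on mean HbO, stimulus vs. baseline.
import numpy as np
from scipy.stats import ttest_rel

n_trials, n_channels = 20, 44             # 44 fNIRS channels per Figure 2
rng = np.random.default_rng(0)
hbo_stim = rng.normal(0.1, 1.0, (n_trials, n_channels))  # mean HbO per trial, stimulus
hbo_base = rng.normal(0.0, 1.0, (n_trials, n_channels))  # mean HbO per trial, baseline

t, p = ttest_rel(hbo_stim, hbo_base, axis=0)              # paired test per channel
print("channels with p < 0.05:", np.where(p < 0.05)[0])
```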
Classification accuracy of different classifiers based on different EEG features.
| Classifier |  |  |  |  |  |  | Averaged accuracy |
|---|---|---|---|---|---|---|---|
| SVM | 93.94% | 94.32% | 96.60% | 49.06% | 83.12% | 97.17% | 85.70% |
| KNN | 80.59% | 86.49% | 91.00% | 52.01% | 70.34% | 88.01% | 78.07% |
| Random forest | 92.12% | 94.91% | 95.69% | 52.17% | 87.41% | 97.63% | 86.66% |
| AdaBoosting | 92.12% | 94.91% | 96.35% | 50.77% | 86.55% | 97.94% | 86.44% |
| Naive Bayesian | 78.49% | 70.71% | 80.81% | 51.67% | 82.01% | 84.78% | 74.75% |
| DAC | 91.29% | 90.11% | 92.42% | 49.12% | 85.35% | 95.14% | 83.91% |
| Averaged accuracy | 88.09% | 88.58% | 92.14% | 50.80% | 82.47% | 93.45% | |
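The six classifiers in these tables are all standard and available in scikit-learn. Below is a minimal benchmarking sketch with placeholder features; hyperparameters are library defaults, the cross-validation scheme is assumed, and "DAC" is interpreted here as a linear discriminant analysis classifier, which this record does not confirm.

```python
# Minimal sketch: cross-validated accuracy for the six classifiers above.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

X = np.random.randn(200, 64)              # placeholder feature matrix
y = np.random.randint(0, 2, 200)          # 0 = neutral music, 1 = preferred music

classifiers = {
    "SVM": SVC(),
    "KNN": KNeighborsClassifier(),
    "Random forest": RandomForestClassifier(),
    "AdaBoosting": AdaBoostClassifier(),
    "Naive Bayesian": GaussianNB(),
    "DAC (assumed LDA)": LinearDiscriminantAnalysis(),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: {acc:.2%}")
```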
Classification accuracy of different classifiers based on different fNIRS features.
| Classifier |  |  |  |  | Averaged accuracy |
|---|---|---|---|---|---|
| SVM | 80.91% | 55.64% | 83.35% | 54.46% | 68.59% |
| KNN | 88.14% | 54.46% | 91.39% | 58.25% | 73.06% |
| Random forest | 88.86% | 53.82% | 91.19% | 55.31% | 72.30% |
| AdaBoosting | 88.82% | 57.67% | 91.65% | 53.06% | 72.80% |
| Naive Bayesian | 67.31% | 63.33% | 69.21% | 55.98% | 63.96% |
| DAC | 79.85% | 55.78% | 82.06% | 53.67% | 67.84% |
| Averaged accuracy | 82.32% | 56.78% | 84.81% | 55.12% | |
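This record does not preserve the names of the four fNIRS feature types, so the sketch below merely illustrates the kind of per-channel hemodynamic statistics (mean, slope, variance, kurtosis) often used as fNIRS features; these specific choices are assumptions, not the authors'.

```python
# Minimal sketch: four assumed per-channel statistics of an HbO epoch.
import numpy as np
from scipy.stats import kurtosis

def fnirs_features(hbo):                      # hbo: (n_samples, n_channels)
    t = np.arange(hbo.shape[0])
    slope = np.polyfit(t, hbo, 1)[0]          # per-channel linear trend
    return np.concatenate([hbo.mean(0), slope, hbo.var(0), kurtosis(hbo, 0)])

hbo_epoch = np.random.randn(300, 44)          # placeholder epoch, 44 channels
feat = fnirs_features(hbo_epoch)
print(feat.shape)                             # (176,) = 4 features x 44 channels
```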
Classification accuracy of different classifiers based on the fused EEG and fNIRS features.
| Classifier |  |  |  | Normalized-ReliefF |
|---|---|---|---|---|
| SVM | 87.24% | 96.84% | 92.81% | 97.72% |
| KNN | 72.38% | 91.55% | 74.61% | 92.12% |
| Random forest | 91.63% | 97.87% | 98.04% | 98.38% |
| AdaBoosting | 90.22% | 94.41% | 94.39% | 95.79% |
| Naive Bayesian | 79.43% | 79.91% | 80.83% | 82.90% |
| DAC | 90.62% | 95.26% | 95.72% | 96.79% |
| Averaged accuracy | 85.25% | 92.64% | 89.40% | 93.68% |
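The abstract attributes the best fusion accuracy (98.38%) to the improved Normalized-ReliefF method. As a hedged reading of that idea, the sketch below z-scores each modality, concatenates the features, weights them with a basic ReliefF pass, and keeps the top-weighted subset; the normalization order, neighbor count, and selection size are all assumptions, since this record does not spell out the authors' algorithm.

```python
# Minimal sketch of a normalize-then-ReliefF fusion, under assumed parameters.
import numpy as np

def relieff_weights(X, y, n_neighbors=10):
    """Simplified ReliefF for binary labels; features rescaled to [0, 1]."""
    span = X.max(0) - X.min(0) + 1e-12
    X = (X - X.min(0)) / span                 # make feature diffs comparable
    w = np.zeros(X.shape[1])
    for i in range(len(X)):
        d = np.abs(X - X[i]).sum(1)           # L1 distance to every sample
        d[i] = np.inf                         # exclude the sample itself
        hits = np.argsort(np.where(y == y[i], d, np.inf))[:n_neighbors]
        miss = np.argsort(np.where(y != y[i], d, np.inf))[:n_neighbors]
        # reward features that differ across classes, penalize within-class spread
        w += np.abs(X[miss] - X[i]).mean(0) - np.abs(X[hits] - X[i]).mean(0)
    return w / len(X)

rng = np.random.default_rng(0)
eeg = rng.normal(size=(200, 60))              # placeholder EEG feature matrix
fnirs = rng.normal(size=(200, 44))            # placeholder fNIRS feature matrix
y = rng.integers(0, 2, 200)                   # 0 = neutral, 1 = preferred music

# z-score each modality before fusion so neither dominates by raw scale
fused = np.hstack([(m - m.mean(0)) / m.std(0) for m in (eeg, fnirs)])
w = relieff_weights(fused, y)
top = np.argsort(w)[::-1][:30]                # keep the 30 best features (assumed)
print(fused[:, top].shape)                    # (200, 30) selected fusion features
```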