Xiulin Wang1,2, Wenya Liu2,3, Xiaoyu Wang2, Zhen Mu4, Jing Xu5, Yi Chang5, Qing Zhang1, Jianlin Wu1, Fengyu Cong2,3,6,7.
Abstract
Ongoing electroencephalography (EEG) signals are recorded as a mixture of stimulus-elicited EEG, spontaneous EEG, and noise, which poses a major challenge to current data analysis techniques, especially when different groups of participants are expected to share common or highly correlated brain activities alongside individual dynamics. In this study, we proposed a data-driven shared and unshared feature extraction framework based on nonnegative and coupled tensor factorization, which aims to conduct group-level analysis of EEG signals from major depressive disorder (MDD) patients and healthy controls (HC) during free music listening. Constrained tensor factorization not only preserves the multilinear structure of the data but also accounts for the common and individual components across datasets. The proposed framework, combined with music information retrieval, correlation analysis, and hierarchical clustering, enabled the simultaneous extraction of spatio-temporal-spectral feature patterns shared between the MDD and HC groups as well as those unique to each group. In total, we obtained two feature patterns shared between the MDD and HC groups and three individual feature patterns from the HC and MDD groups. The results showed that the MDD and HC groups exhibited similar brain dynamics when listening to music, but MDD patients also showed changes in brain oscillatory network characteristics during music perception. These changes may provide a basis for the clinical diagnosis and treatment of MDD patients.
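The core building block of the framework is a nonnegative CANDECOMP/PARAFAC (CP) decomposition of a multiway EEG array (e.g., channel × frequency × time). As an illustrative sketch only, not the authors' NCTF-ADMM implementation, the following plain-NumPy code fits a nonnegative CP model with classic multiplicative updates on a synthetic low-rank tensor:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n matricization: move `mode` to the front and flatten the rest (row-major).
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(U, V):
    # Column-wise Khatri-Rao product of U (I x R) and V (J x R) -> (I*J x R).
    R = U.shape[1]
    return np.einsum('ir,jr->ijr', U, V).reshape(-1, R)

def nonneg_cp(X, rank, n_iter=500, eps=1e-9, seed=0):
    # Rank-R nonnegative CP via multiplicative updates (Lee-Seung style),
    # which keep all factor entries >= 0 by construction.
    rng = np.random.default_rng(seed)
    factors = [rng.random((dim, rank)) + 0.1 for dim in X.shape]
    for _ in range(n_iter):
        for m in range(X.ndim):
            others = [factors[i] for i in range(X.ndim) if i != m]
            # Khatri-Rao of the remaining factors, ordered consistently with `unfold`.
            kr = others[0]
            for U in others[1:]:
                kr = khatri_rao(kr, U)
            num = unfold(X, m) @ kr
            den = factors[m] @ (kr.T @ kr) + eps
            factors[m] *= num / den
    return factors

def reconstruct(factors):
    A, B, C = factors
    return np.einsum('ir,jr,kr->ijk', A, B, C)

# Synthetic rank-3 nonnegative tensor (10 x 8 x 6).
rng = np.random.default_rng(1)
true_factors = [rng.random((d, 3)) for d in (10, 8, 6)]
X = reconstruct(true_factors)

factors = nonneg_cp(X, rank=3)
rel_err = np.linalg.norm(X - reconstruct(factors)) / np.linalg.norm(X)
```

In the coupled setting used by the paper, the factorizations of the two group tensors additionally share some columns of their factor matrices (the "shared" components), while the remaining columns are left free per group.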
Keywords: CANDECOMP/PARAFAC; EEG; constrained tensor factorization; major depressive disorder; naturalistic music stimuli
Year: 2021 PMID: 34975439 PMCID: PMC8714749 DOI: 10.3389/fnhum.2021.799288
Source DB: PubMed Journal: Front Hum Neurosci ISSN: 1662-5161 Impact factor: 3.169
Basic information of the participants in HC and MDD groups.
| | HC (n = 19) | MDD (n = 20) | p-value |
|---|---|---|---|
| Age (years) | 38.4 ± 11.8 | 42.9 ± 11.0 | >0.05 |
| Gender (F:M) | 14:5 | 14:6 | >0.05 |
| Education (years) | 13.6 ± 3.8 | 12.8 ± 3.4 | >0.05 |
| Duration (months) | - | 12.8 ± 8.5 | - |
| HRSD | 2.4 ± 1.3 | 23.3 ± 3.6 | <0.01 |
| HAMA | 2.4 ± 1.3 | 19.2 ± 3.0 | <0.01 |
| MMSE | 28.2 ± 0.9 | 28.1 ± 1.1 | >0.05 |
For the continuous measures (age, education, HRSD, HAMA, and MMSE), the p-value is calculated via a two-sample t-test; for gender, the p-value is calculated via a chi-squared test. Duration is the duration of illness.
HC, healthy controls; MDD, major depressive disorder patients; F, female; M, male; HRSD, Hamilton Rating Scale for Depression; HAMA, Hamilton Anxiety Rating Scale; MMSE, Mini-Mental State Examination.
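As a sanity check on the table, a continuous measure's group comparison can be reproduced from the reported mean ± SD using Welch's two-sample t statistic (group sizes n = 19 and n = 20 are inferred here from the gender ratios):

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    # Welch's t statistic from summary statistics (mean, SD, n) of two groups.
    se = math.sqrt(s1**2 / n1 + s2**2 / n2)
    return (m1 - m2) / se

# Age (years): HC 38.4 +/- 11.8 (n = 19) vs MDD 42.9 +/- 11.0 (n = 20).
# |t| is about 1.23, below the ~2.02 critical value at alpha = 0.05,
# consistent with the table's p > 0.05 for age.
t_age = welch_t(38.4, 11.8, 19, 42.9, 11.0, 20)

# HRSD: HC 2.4 +/- 1.3 vs MDD 23.3 +/- 3.6 -- a very large separation,
# consistent with the table's p < 0.01.
t_hrsd = welch_t(2.4, 1.3, 19, 23.3, 3.6, 20)
```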
Figure 1. Illustration of simulation generation and recovered results. (A) Simulated spatial, spectral, temporal, and participant patterns (from top to bottom) for the two groups, with partially coupled constraints on the first two components of the spatial, spectral, and temporal modes (shown in the 1st, 2nd, 4th, and 5th columns). (B) Reconstructed spatial, spectral, temporal, and participant patterns (from top to bottom) obtained using the constrained tensor factorization method.
Figure 2. Illustration of the extracted component patterns (from left to right: mean topography, overall power spectrum, music feature distribution, and intra-cluster correlation maps) from the HC and MDD EEG data via 50 runs of the constrained tensor factorization method; each retained component's temporal course was significantly correlated with at least one of the musical features. (A) Common component patterns clustered from the shared components of the 50 runs between the HC and MDD data. (B) Individual component patterns clustered from the individual components of the 50 runs in the HC data. (C) Individual component patterns clustered from the individual components of the 50 runs in the MDD data. Md, Mode; KE, Key Clarity; FC, Fluctuation Centroid; FE, Fluctuation Entropy; PC, Pulse Clarity.
The NCTF-ADMM algorithm (nonnegative coupled tensor factorization solved via the alternating direction method of multipliers).