Yuying Tong, Gang Zhao, Jinbo Zhao, Nianxiang Xie, Dong Han, Bowen Yang, Qi Liu, Hailian Sun, Yanjie Yang.
Abstract
We explored the mechanisms of face-classification processing in depressed patients, with particular attention to biases toward happy faces. Thirty patients with first-episode depression at the First Affiliated Hospital of Harbin Medical University were selected as the depression group, and healthy people matched for age, gender, and educational level were assigned to the control group. The Hamilton Depression Scale and Hamilton Anxiety Scale were used to screen the subjects; we then used a forced face-classification paradigm to collect behavioral (response time and accuracy) and event-related potential (ERP) data. Group differences were estimated using repeated-measures analysis of variance. The total response time for classifying faces was longer in the depression group than in the control group, and the correct rate was lower; the differences were statistically significant (P < 0.05). N170 component analysis demonstrated that the latency of the depression group was prolonged, a statistically significant difference (P < 0.05). When classifying happy faces, the depressed patients showed a decrease in N170 amplitude and a prolongation of latency in some brain regions compared with the healthy individuals. The cognitive bias in depression may be due to prolonged processing of positive facial information and difficulty in producing positive emotional responses.
Year: 2020 PMID: 32879624 PMCID: PMC7448107 DOI: 10.1155/2020/7235734
Source DB: PubMed Journal: Neural Plast ISSN: 1687-5443 Impact factor: 3.599
Figure 1. Example of the happy (a), neutral (b), and sad (c) faces used in the experiment.
Comparison of demographic data between depression and control groups.
| Variables | Depression group, Mean ± SD | Control group, Mean ± SD | P |
|---|---|---|---|
| Age | 44.88 ± 13.28 | 46.60 ± 9.41 | 0.217 |
| Depression score | 22.75 ± 3.10 | 3.38 ± 1.09 | 0.000 |
| Anxiety score | 4.37 ± 2.44 | 3.43 ± 1.14 | 0.063 |
Accuracy (%) of facial expression category in the depression group and control group (n = 30).
| Group | Happy, Mean ± SD | Neutral, Mean ± SD | Sad, Mean ± SD |
|---|---|---|---|
| Depression group | 87.68 ± 7.50 | 82.87 ± 10.14 | 75.06 ± 13.32 |
| Control group | 95.97 ± 5.36 | 95.07 ± 5.48 | 78.77 ± 16.91 |
Response time (ms) of facial expression category between depression and control groups (n = 30).
| Group | Happy, Mean ± SD | Neutral, Mean ± SD | Sad, Mean ± SD |
|---|---|---|---|
| Depression group | 1106.11 ± 356.10 | 1210.02 ± 327.03 | 1293.92 ± 301.74 |
| Control group | 857.12 ± 117.66 | 1115.25 ± 652.12 | 1138.31 ± 363.12 |
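The group comparisons in the tables above can be roughly reproduced from the published summary statistics alone. As a hedged sketch (the paper itself reports repeated-measures ANOVA, not the independent-samples t-tests shown here), a Welch t-test computed from the reported means, SDs, and group sizes (n = 30 each) recovers the significant group differences for happy faces:

```python
# Illustrative only: Welch's t-tests reconstructed from the published
# summary statistics (mean, SD, n = 30 per group). The original analysis
# used repeated-measures ANOVA, so these values are approximations.
from scipy.stats import ttest_ind_from_stats

n = 30  # participants per group, as reported

# Accuracy (%) for happy faces: depression group vs. control group
t_acc, p_acc = ttest_ind_from_stats(
    mean1=87.68, std1=7.50, nobs1=n,
    mean2=95.97, std2=5.36, nobs2=n,
    equal_var=False,  # Welch correction for unequal variances
)

# Response time (ms) for happy faces: depression group vs. control group
t_rt, p_rt = ttest_ind_from_stats(
    mean1=1106.11, std1=356.10, nobs1=n,
    mean2=857.12, std2=117.66, nobs2=n,
    equal_var=False,
)

print(f"Happy-face accuracy: t = {t_acc:.2f}, p = {p_acc:.4f}")
print(f"Happy-face RT:       t = {t_rt:.2f}, p = {p_rt:.4f}")
```

Both comparisons come out well below the P < 0.05 threshold, consistent with the abstract's report of slower and less accurate classification in the depression group.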
Figure 2. ERP waveforms for the facial expression categories in the depressed (a) and control (b) groups.
Figure 3. The amplitude and latency of facial expression responses ((a) happy face, (b) neutral face, and (c) sad face) between the depressed and control groups.