Yang Li¹, Jia Ze Li², Qi Fan³, Xin Li¹, Zhihong Wang¹.
Abstract
To better assess mental health status from online text data, and to address the lexicon sparsity and small lexicon size that limit the word-frequency feature statistics of the traditional Linguistic Inquiry and Word Count (LIWC) dictionary, this work combines LIWC features with the strength of the convolutional neural network (CNN) in extracting contextual semantics. The proposed CNN-based mental health assessment method is evaluated with the measurement indicators of CLPsych 2017. The results showed that the LIWC-CNN model was superior on all indicators, with F1 = 0.51 and ACC = 0.69. By comparison, the ACC obtained by FastText, CNN, and CNN + Word2Vec was 0.66, 0.67, and 0.67, and the F1 was 0.37, 0.47, and 0.49, respectively, which indicates that using a CNN for mental health assessment is feasible.
Keywords: CNN; LIWC dictionary; assessment; mental health assessment; psychological
Year: 2022 PMID: 35983201 PMCID: PMC9378835 DOI: 10.3389/fpsyg.2022.943146
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
FIGURE 1 Architecture of the convolutional neural network (CNN).
Top 10 word classes by standard deviation of word frequency.
| Ranking | Word class | Standard deviation | Ranking | Word class | Standard deviation |
| 1 | Relig_c | 0.4269 | 6 | Money_c | 0.3304 |
| 2 | Friends_c | 0.3965 | 7 | Assent_c | 0.3237 |
| 3 | Death_c | 0.3867 | 8 | Anger_c | 0.2796 |
| 4 | You_c | 0.3658 | 9 | Ingest_c | 0.2823 |
| 5 | We_c | 0.3469 | 10 | Sexual_c | 0.2502 |
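The ranking above selects the LIWC word classes whose per-document word frequency varies most across the corpus. A minimal sketch of that selection step, assuming per-document relative frequencies per class; the class names come from the table, but the frequency values below are made up purely for illustration:

```python
from statistics import pstdev

# Hypothetical per-document relative word frequencies for two LIWC classes
# (values are illustrative only, not the paper's data).
class_freqs = {
    "Relig_c": [0.0, 0.9, 0.1, 1.0],     # highly variable across documents
    "We_c":    [0.30, 0.40, 0.35, 0.45],  # fairly stable across documents
}

# Rank classes by population standard deviation of frequency, highest first.
ranked = sorted(class_freqs, key=lambda c: pstdev(class_freqs[c]), reverse=True)
```

Classes at the top of this ranking carry the most discriminative frequency signal and are the ones retained as features.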
FIGURE 2 The linguistic inquiry and word count (LIWC)-CNN model architecture.
Model hyper-parameter setting.
| Hyper-parameter | Description | Value |
| D | Word vector dimension | 128 |
| N | Learning rate | 0.001 |
| H | Convolution kernel sliding window size | 23,5,7 |
| Batch size | Batch size | 50 |
| P | Dropout rate | 0.1 |
| M | Number of convolution kernels | 48 |
| λ | Regularization coefficient | 0.1 |
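The hyper-parameters above describe a standard text-CNN: each token is mapped to a D-dimensional vector, convolution kernels with several window sizes slide over the token sequence, and each kernel's responses are max-pooled over time. A minimal pure-Python sketch of one kernel's convolution-plus-pooling step (toy 2-dimensional embeddings and a hand-set kernel, purely illustrative; the paper's model uses 128-dimensional vectors and M = 48 kernels per window size):

```python
def conv_max_pool(embeddings, kernel, window):
    """Slide one convolution kernel over a token sequence and
    max-pool its responses over time (bias/activation omitted for brevity).

    embeddings: list of token vectors (length = sequence length)
    kernel:     flat weight list of length window * dim
    """
    responses = []
    for i in range(len(embeddings) - window + 1):
        # Flatten the current window of token vectors into one region.
        region = [x for vec in embeddings[i:i + window] for x in vec]
        responses.append(sum(w * x for w, x in zip(kernel, region)))
    return max(responses)  # max-over-time pooling


# Toy example: 4 tokens with 2-dim embeddings, one kernel of window size 2.
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.0, 0.0]]
kernel = [1.0, 1.0, 1.0, 1.0]
feature = conv_max_pool(tokens, kernel, window=2)
```

In the full model, the pooled features from all kernels across all window sizes are concatenated and passed, with dropout rate P, to the classification layer.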
FIGURE 3 Influence of iterations on the model.
FIGURE 4 Training accuracy at different batch sizes.
FIGURE 5 Training accuracy at different dropout rates.
Comparison of experimental results of various methods.
| Methods | Non-green F1 | Flagged F1 | Flagged Acc | Urgent F1 | Urgent Acc | All F1 | All Acc |
| FastText | 0.21 | 0.82 | 0.89 | 0.00 | 0.00 | 0.37 | 0.66 |
| CNN | 0.34 | 0.79 | 0.73 | 0.50 | 0.36 | 0.47 | 0.67 |
| CNN + Word2Vec | 0.38 | 0.83 | 0.81 | 0.55 | 0.69 | 0.49 | 0.67 |
| LIWC-CNN | 0.40 | 0.77 | 0.85 | 0.59 | 0.76 | 0.51 | 0.69 |
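The per-class F1 and overall accuracy reported above can be computed with standard one-vs-rest counts. A minimal sketch with made-up predictions for four posts (the label names loosely follow the table's triage categories; the data is illustrative only, not the paper's):

```python
def f1_for_class(y_true, y_pred, cls):
    """One-vs-rest F1 score for a single class label."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)


def accuracy(y_true, y_pred):
    """Fraction of posts whose predicted label matches the true label."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)


# Made-up labels for four posts (illustrative only).
y_true = ["green", "flagged", "urgent", "green"]
y_pred = ["green", "flagged", "green", "green"]
```

A class that is never predicted (like FastText's Urgent column) yields zero recall and hence F1 = 0.00, which is why the "All" columns can diverge sharply from accuracy alone.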