Xin Li1, Lianting Hu2,3, Peixin Lu1, Tianhui Huang1, Wei Yang4, Quan Lu1, Huiying Liang2,3, Long Lu1,5.
Abstract
Text classification is widely studied in the natural language processing field. However, real-world text data often follow a long-tailed distribution, as the frequency of each class typically differs. The performance of current mainstream learning algorithms suffers when the training data are highly imbalanced, and the problem worsens when the categories with fewer data are so severely undersampled that the variation within each category is not fully captured by the given data. At present, few studies on long-tailed text classification put forward effective solutions. Encouraged by the progress in handling long-tailed data in the image domain, we integrate effective ideas from that field into long-tailed text classification and demonstrate their effectiveness. In this paper, we propose a novel approach of feature space reconstruction with the help of three-way decisions (3WDs) for long-tailed text classification. Specifically, we verify the rationality of using a 3WD model for feature selection in long-tailed text classification, propose a new feature space reconstruction method for long-tailed text data for the first time, and demonstrate how to effectively generate new samples for tail classes in the reconstructed feature space. By adding new samples, we enrich the representation of tail classes and thereby improve the results of long-tailed text classification. Comparative experiments verify that our model is an effective strategy for improving the performance of long-tailed text classification.
Year: 2022 PMID: 35469205 PMCID: PMC9034946 DOI: 10.1155/2022/3183469
Source DB: PubMed Journal: Comput Intell Neurosci
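The abstract describes enriching tail classes by generating new samples in a reconstructed feature space. The paper's exact 3WD-based reconstruction is not reproduced here; the following is a minimal, hypothetical sketch of interpolation-based sample generation (SMOTE-style) for a tail class, where `generate_tail_samples` and the toy vectors are illustrative assumptions:

```python
import random

def generate_tail_samples(tail_vectors, n_new, seed=0):
    """Synthesize tail-class samples by interpolating between random
    pairs of existing feature vectors (a SMOTE-style sketch, not the
    paper's exact reconstruction method)."""
    rng = random.Random(seed)
    new_samples = []
    for _ in range(n_new):
        a, b = rng.sample(tail_vectors, 2)  # pick two real tail samples
        lam = rng.random()                  # interpolation weight in [0, 1)
        new_samples.append([x + lam * (y - x) for x, y in zip(a, b)])
    return new_samples

# Toy tail class: three 2-D feature vectors.
tail = [[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]]
synthetic = generate_tail_samples(tail, n_new=5)
```

Each synthetic vector lies on a segment between two real tail samples, so the added points stay inside the region the tail class already occupies.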
Figure 1. The main steps of machine learning and deep learning models in text classification.
Figure 2. The proposed model architecture in our paper.
Meanings of TP, TN, FP, and FN.
| Classifier decision | Sample actually in the class | Sample actually not in the class |
|---|---|---|
| Assigned to the class | TP | FP |
| Not assigned to the class | FN | TN |
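The macro-averaged metrics reported in the tables below follow directly from these per-class counts: precision, recall, and F1 are computed per class and then averaged with equal weight. A small sketch, where the two-class counts are invented for illustration:

```python
def macro_scores(confusions):
    """Compute macro-averaged precision, recall, and F1 from per-class
    (TP, FP, FN) counts."""
    precisions, recalls, f1s = [], [], []
    for tp, fp, fn in confusions:
        p = tp / (tp + fp) if tp + fp else 0.0   # per-class precision
        r = tp / (tp + fn) if tp + fn else 0.0   # per-class recall
        f = 2 * p * r / (p + r) if p + r else 0.0
        precisions.append(p)
        recalls.append(r)
        f1s.append(f)
    n = len(confusions)
    return sum(precisions) / n, sum(recalls) / n, sum(f1s) / n

# Two toy classes, each given as (TP, FP, FN).
p, r, f = macro_scores([(8, 2, 2), (3, 1, 7)])
```

Because every class contributes equally to the average, a weak tail class drags the macro scores down even when overall accuracy stays high, which is why the tables report both.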
The class sizes of the long-tailed text dataset used in our paper.
| No. | Class | Class label | Samples |
|---|---|---|---|
| 1 | Head | C34-Economy | 3201 |
| 2 | Head | C39-Sports | 2507 |
| 3 | Head | C38-Politics | 1854 |
| 4 | Head | C3-Art | 1282 |
| 5 | Tail | C11-Space | 582 |
| 6 | Tail | C32-Agriculture | 348 |
| 7 | Tail | C37-Military | 150 |
| 8 | Tail | C19-Computer | 120 |
| 9 | Tail | C36-Medical | 104 |
| 10 | Tail | C35-Law | 103 |
| 11 | Tail | C31-Environment | 89 |
| 12 | Tail | C29-Transport | 67 |
| 13 | Tail | C23-Mine | 67 |
| 14 | Tail | C15-Energy | 65 |
| 15 | Tail | C17-Communication | 55 |
| 16 | Tail | C16-Electronics | 32 |
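From the class sizes above, the degree of imbalance can be quantified directly; the snippet below recomputes the head-to-tail imbalance ratio (largest class over smallest, 3201/32, roughly 100) and the overall share of samples in tail classes:

```python
# Class sizes copied from the table above (four head classes, then twelve tails).
sizes = [3201, 2507, 1854, 1282, 582, 348, 150, 120,
         104, 103, 89, 67, 67, 65, 55, 32]

imbalance_ratio = max(sizes) / min(sizes)   # largest class / smallest class
tail_share = sum(sizes[4:]) / sum(sizes)    # fraction of samples in tail classes
```

Despite making up 12 of the 16 classes, the tail accounts for under a fifth of the samples, which is the imbalance the proposed sample generation targets.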
Figure 3. (a) The distribution of our long-tailed text dataset; (b) the distribution of features.
The performance results of 3WD models and baseline methods for feature selection.
| Method | Accuracy | Macro-precision | Macro-recall | Macro-F1 score |
|---|---|---|---|---|
| Word frequency | 0.56 | 0.487 | 0.518 | 0.51 |
| Chi2 | 0.70 | 0.577 | 0.591 | 0.58 |
| TF-IDF | 0.79 | 0.638 | 0.627 | 0.631 |
| 3WD (our paper) | 0.87 | 0.697 | 0.637 | 0.656 |
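As a reference for the baselines in this table, term scoring with TF-IDF can be sketched in a few lines; the toy corpus, the max-over-documents aggregation, and the function name below are illustrative assumptions, not the paper's implementation:

```python
import math
from collections import Counter

def tfidf_feature_scores(docs):
    """Score each term by its highest TF-IDF value across a corpus;
    high-scoring terms would be kept as classification features."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d))  # document frequency
    scores = Counter()
    for d in docs:
        tf = Counter(d)
        for term, count in tf.items():
            tfidf = (count / len(d)) * math.log(n / df[term])
            scores[term] = max(scores[term], tfidf)
    return scores

docs = [["tax", "market", "market"], ["match", "goal"], ["tax", "budget"]]
top = tfidf_feature_scores(docs).most_common(2)  # "market" scores highest
```

The repeated, document-specific term "market" outranks "tax", which appears in two of the three documents and so carries a lower inverse document frequency.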
Figure 4. The performance results of our proposed model with different generation degrees.
The performance results of our proposed model and baseline methods.
| Method | Data type | Accuracy | Macro-precision | Macro-recall | Macro-F1 score |
|---|---|---|---|---|---|
| TF-IDF | All data | 0.79 | 0.65 | 0.635 | 0.64 |
| | Head class | 0.872 | 0.854 | 0.857 | 0.857 |
| | Tail class | 0.635 | 0.597 | 0.581 | 0.585 |
| CNN | All data | 0.915 | 0.765 | 0.732 | 0.747 |
| | Head class | 0.943 | 0.937 | 0.928 | 0.933 |
| | Tail class | 0.747 | 0.731 | 0.692 | 0.709 |
| RNN | All data | 0.905 | 0.783 | 0.767 | 0.775 |
| | Head class | 0.932 | 0.927 | 0.916 | 0.918 |
| | Tail class | 0.782 | 0.759 | 0.728 | 0.742 |
| BI-LSTM | All data | 0.927 | 0.82 | 0.796 | 0.81 |
| | Head class | 0.954 | | 0.94 | 0.943 |
| | Tail class | 0.821 | 0.778 | 0.747 | 0.752 |
| Our method | All data | | | | |
| | Head class | 0.953 | 0.945 | | |
| | Tail class | | | | |
Bold values indicate the best values for accuracy, macro-precision, macro-recall, and macro-F1 score.
Figure 5. The performance results of our proposed model and deep learning methods.