| Literature DB >> 35983151 |
Shufeng Xiong, Vishwash Batra, Liangliang Liu, Lei Xi, Changxia Sun.
Abstract
Personal medication intake detection aims to automatically detect tweets that show clear evidence of personal medication consumption. It is a research topic that has attracted considerable attention in drug safety surveillance. This task inevitably depends on medical domain information, yet the current main models for this task do not explicitly consider such information. To tackle this problem, we propose a domain attention mechanism for recurrent neural networks (LSTMs) with a multi-level feature representation of Twitter data. Specifically, we utilize a character-level CNN to capture morphological features at the word level. Subsequently, we feed them, together with word embeddings, into a BiLSTM to obtain the hidden representation of a tweet. An attention mechanism is introduced over the hidden states of the BiLSTM to attend to domain-specific medical information. Finally, classification is performed on the weighted hidden representation of tweets. Experiments on a publicly available benchmark dataset show that our model can exploit the domain attention mechanism to incorporate medical information and improve performance. For example, our approach achieves a precision score of 0.708, a recall score of 0.694, and an F1 score of 0.697, significantly outperforming multiple strong and relevant baselines.
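The attention step described in the abstract (scoring each BiLSTM hidden state, normalizing with softmax, and forming a weighted tweet representation) can be sketched as follows. This is an illustrative NumPy sketch under common assumptions, not the authors' implementation: the `tanh` scoring form and the learned context vector `u` are standard choices assumed here.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def domain_attention(H, u):
    """Weight BiLSTM hidden states by a (learned) domain context vector.

    H: (T, d) array of hidden states, one row per token.
    u: (d,) context vector (assumed learned jointly with the model).
    Returns the (d,) weighted tweet representation and the (T,) weights.
    """
    scores = np.tanh(H) @ u      # alignment score for each time step
    alpha = softmax(scores)      # attention distribution over the T tokens
    return alpha @ H, alpha      # weighted sum of hidden states

# Toy example: 5 tokens, hidden size 8
rng = np.random.default_rng(0)
H = rng.standard_normal((5, 8))
u = rng.standard_normal(8)
v, alpha = domain_attention(H, u)
```

The vector `v` would then be passed to a final classifier (e.g., a softmax layer over the three intake classes); the weights `alpha` indicate which tokens the model attends to.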
Year: 2022 PMID: 35983151 PMCID: PMC9381240 DOI: 10.1155/2022/5467262
Source DB: PubMed Journal: Comput Intell Neurosci
Figure 1: Framework of our model.
Statistics of the dataset.
| | Class 1 | Class 2 | Class 3 | Total |
|---|---|---|---|---|
| Train | 1648 | 2650 | 4246 | 8544 |
| Test | 1444 | 2267 | 2593 | 6304 |
Comparison results.
| Model | Precision | Recall | F1 |
|---|---|---|---|
| NB | 0.675 | 0.631 | 0.650 |
| SVM | 0.679 | 0.664 | 0.668 |
| BiLSTM | 0.683 | 0.672 | 0.678 |
| CharCNN | 0.681 | 0.697 | 0.689 |
| AttRNN | 0.704 | 0.677 | 0.688 |
| InfyNLP | 0.725# | 0.664# | 0.693# |
| UKNLP | 0.701# | 0.677# | 0.689# |
| NRC-Canada | 0.704# | 0.635# | 0.668# |
| Our model | 0.708 | 0.694 | 0.697 |
Comparison results with different settings.
| Model | Precision | Recall | F1 |
|---|---|---|---|
| w/o CLM and DAC | 0.683 | 0.672 | 0.678 |
| w/o CLM | 0.701 | 0.683 | 0.693 |
| w/o DAC | 0.669 | 0.713 | 0.691 |
| Full model | 0.708 | 0.694 | 0.697 |