| Literature DB >> 33581833 |
Katikapalli Subramanyam Kalyan, Sivanesan Sangeetha.
Abstract
In the last few years, people have begun to share large amounts of health-related information in the form of tweets, reviews and blog posts. These user-generated clinical texts can be mined to produce useful insights. However, automatic analysis of clinical text requires the identification of standard medical concepts. Most existing deep learning based medical concept normalization systems are based on CNNs or RNNs, and their performance is limited because they must be trained from scratch (except for the embeddings). In this work, we propose a medical concept normalization system based on BERT and a highway layer. BERT, a pre-trained context-sensitive deep language representation model, has advanced state-of-the-art performance in many NLP tasks, and the gating mechanism in the highway layer helps the model retain only important information. Experimental results show that our model outperforms all existing methods on two standard datasets. Further, we conduct a series of experiments to study the impact of different learning rates and batch sizes, noise, and freezing of encoder layers on our model.
Keywords: BERT; Clinical Natural Language Processing; Highway Network; Medical Concept Normalization
Year: 2021 PMID: 33581833 DOI: 10.1016/j.artmed.2021.102008
Source DB: PubMed Journal: Artif Intell Med ISSN: 0933-3657 Impact factor: 5.326
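The abstract describes a BERT encoder whose mention representation is refined by a highway layer (a gated mix of a transformed and an untouched copy of the input) before concept classification. The sketch below illustrates that general idea only; it is not the authors' implementation. The class names, the use of the [CLS] vector, the "bert-base-uncased" checkpoint, and the single-highway-layer setup are all assumptions for illustration.

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer


class HighwayLayer(nn.Module):
    """Highway layer: y = g * relu(W_H x) + (1 - g) * x, with gate g = sigmoid(W_T x)."""

    def __init__(self, dim):
        super().__init__()
        self.transform = nn.Linear(dim, dim)
        self.gate = nn.Linear(dim, dim)

    def forward(self, x):
        h = torch.relu(self.transform(x))
        g = torch.sigmoid(self.gate(x))
        # The gate decides, per dimension, how much transformed vs. original
        # information to carry forward ("choose only important information").
        return g * h + (1.0 - g) * x


class BertHighwayNormalizer(nn.Module):
    """Encode a health-related mention with BERT, refine it with a highway layer,
    and classify it into one of `num_concepts` standard medical concepts.
    (Hypothetical sketch, not the paper's exact architecture.)"""

    def __init__(self, num_concepts, model_name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.highway = HighwayLayer(hidden)
        self.classifier = nn.Linear(hidden, num_concepts)

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]  # [CLS] representation of the mention
        return self.classifier(self.highway(cls))


# Toy usage: map a user-written mention to a concept index (3-label toy space).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertHighwayNormalizer(num_concepts=3)
batch = tokenizer(["head spinning a lot"], return_tensors="pt", padding=True)
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.argmax(dim=-1))  # predicted concept index
```

In a classification-style normalizer such as this sketch, each output index would correspond to one standard medical concept (e.g., a UMLS concept ID), and the whole model, encoder included, can be fine-tuned end to end rather than trained from scratch.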