
Explainable automated coding of clinical notes using hierarchical label-wise attention networks and label embedding initialisation.

Hang Dong1, Víctor Suárez-Paniagua2, William Whiteley3, Honghan Wu4.   

Abstract

BACKGROUND: Diagnostic or procedural coding of clinical notes aims to derive a coded summary of disease-related information about patients. Such coding is usually done manually in hospitals but could potentially be automated to improve the efficiency and accuracy of medical coding. Recent studies on deep learning for automated medical coding have achieved promising performance. However, the explainability of these models is usually poor, preventing them from being used confidently to support clinical practice. Another limitation is that these models mostly assume independence among labels, ignoring the complex correlations among medical codes, which could potentially be exploited to improve performance.
METHODS: To address the issues of model explainability and label correlations, we first propose a Hierarchical Label-wise Attention Network (HLAN), which aims to interpret the model by quantifying the importance (as attention weights) of the words and sentences related to each label. Second, we propose to enhance the major deep learning models with a label embedding (LE) initialisation approach, which learns a dense, continuous vector representation of the labels and then injects this representation into the final layers and the label-wise attention layers of the models. We evaluated the methods in three settings on the MIMIC-III discharge summaries: full codes, top-50 codes, and the UK NHS (National Health Service) COVID-19 (Coronavirus disease 2019) shielding codes. Experiments were conducted to compare the HLAN model and label embedding initialisation against state-of-the-art neural network based methods, including variants of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs).
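The core of label-wise attention as described above is that each label attends separately over the encoded text, yielding per-label attention weights that double as the explanation. The following is an illustrative NumPy sketch under stated assumptions (a learned query vector per label in `U`, token representations in `H`); it is not the authors' implementation, which is available at the repository linked below.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D array.
    e = np.exp(x - x.max())
    return e / e.sum()

def label_wise_attention(H, U):
    """Label-wise attention sketch.

    H: (T, d) array of T token/sentence representations.
    U: (L, d) array of per-label attention query vectors.
    Returns:
      C: (L, d) label-specific document representations.
      A: (L, T) attention weights, one distribution per label
         (these weights are what gets highlighted as the explanation).
    """
    scores = U @ H.T                               # (L, T) raw alignment scores
    A = np.apply_along_axis(softmax, 1, scores)    # normalise per label
    C = A @ H                                      # attention-weighted sums
    return C, A

# Toy usage: 6 tokens of dimension 4, 3 labels.
rng = np.random.default_rng(0)
H = rng.normal(size=(6, 4))
U = rng.normal(size=(3, 4))
C, A = label_wise_attention(H, U)
```

In the hierarchical variant, the same mechanism would be applied twice: once over words within each sentence and once over sentence representations, giving label-specific highlights at both levels.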
RESULTS: HLAN achieved the best Micro-level AUC and F1 on the top-50 code prediction, 91.9% and 64.1%, respectively; and comparable results on the NHS COVID-19 shielding code prediction to other models: around 97% Micro-level AUC. More importantly, in the analysis of model explanations, by highlighting the most salient words and sentences for each label, HLAN showed more meaningful and comprehensive model interpretation compared to the CNN-based models and its downgraded baselines, HAN and HA-GRU. Label embedding (LE) initialisation significantly boosted the previous state-of-the-art model, CNN with attention mechanisms, on the full code prediction to 52.5% Micro-level F1. The analysis of the layers initialised with label embeddings further explains the effect of this initialisation approach. The source code of the implementation and the results are openly available at https://github.com/acadTags/Explainable-Automated-Medical-Coding.
CONCLUSION: We draw the following conclusions from the evaluation results and analyses. First, with hierarchical label-wise attention mechanisms, HLAN can provide results for automated coding that are better than or comparable to the state-of-the-art, CNN-based models. Second, HLAN can provide more comprehensive explanations for each label by highlighting key words and sentences in the discharge summaries, compared to the n-grams in the CNN-based models and the downgraded baselines, HAN and HA-GRU. Third, the performance of deep learning based multi-label classification for automated coding can be consistently boosted by initialising models with label embeddings that capture the correlations among labels. We further discuss the advantages and drawbacks of the overall method regarding its potential to be deployed to a hospital and suggest areas for future studies.
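The label embedding initialisation idea, learning dense label vectors that capture code co-occurrence and using them to initialise the final and label-wise attention layers, can be illustrated with a small toy sketch. The construction here (positive PMI of label co-occurrence followed by truncated SVD) is an assumption for illustration only; the abstract does not specify the embedding algorithm, so this should not be read as the paper's actual method.

```python
import numpy as np

def label_embeddings_from_cooccurrence(label_sets, num_labels, dim):
    """Toy label-embedding sketch: PPMI of label co-occurrence + truncated SVD.

    label_sets: iterable of sets of label indices (codes assigned to one note).
    Returns an (num_labels, dim) matrix that could serve as the initial
    weights of a classifier's label-specific layers.
    """
    # Symmetric co-occurrence counts over label pairs within each note.
    C = np.zeros((num_labels, num_labels))
    for s in label_sets:
        for i in s:
            for j in s:
                if i != j:
                    C[i, j] += 1.0

    total = C.sum() + 1e-9
    p_label = C.sum(axis=1) / total
    # Positive pointwise mutual information; eps terms avoid log(0).
    pmi = np.log((C / total + 1e-12) / (np.outer(p_label, p_label) + 1e-12))
    ppmi = np.maximum(pmi, 0.0)

    # Truncated SVD gives dense, continuous label vectors.
    U, S, _ = np.linalg.svd(ppmi)
    return U[:, :dim] * np.sqrt(S[:dim])

# Toy corpus: labels 0 and 1 always co-occur; 2 and 3 co-occur once.
E = label_embeddings_from_cooccurrence(
    [{0, 1}, {0, 1}, {2, 3}], num_labels=4, dim=2)
```

The design intent is simply that codes which frequently co-occur start with similar parameter vectors, so correlated labels are not treated as independent from the first gradient step.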
Copyright © 2021 Elsevier Inc. All rights reserved.


Keywords:  Attention Mechanisms; Automated medical coding; Deep learning; Explainability; Label correlation; Multi-label classification; Natural Language Processing


Year:  2021        PMID: 33711543     DOI: 10.1016/j.jbi.2021.103728

Source DB:  PubMed          Journal:  J Biomed Inform        ISSN: 1532-0464            Impact factor:   6.317


  2 in total

1.  Multi-label classification for biomedical literature: an overview of the BioCreative VII LitCovid Track for COVID-19 literature topic annotations.

Authors:  Qingyu Chen; Alexis Allot; Robert Leaman; Rezarta Islamaj; Jingcheng Du; Li Fang; Kai Wang; Shuo Xu; Yuefu Zhang; Parsa Bagherzadeh; Sabine Bergler; Aakash Bhatnagar; Nidhir Bhavsar; Yung-Chun Chang; Sheng-Jie Lin; Wentai Tang; Hongtong Zhang; Ilija Tavchioski; Senja Pollak; Shubo Tian; Jinfeng Zhang; Yulia Otmakhova; Antonio Jimeno Yepes; Hang Dong; Honghan Wu; Richard Dufour; Yanis Labrak; Niladri Chatterjee; Kushagri Tandon; Fréjus A A Laleye; Loïc Rakotoson; Emmanuele Chersoni; Jinghang Gu; Annemarie Friedrich; Subhash Chandra Pujari; Mariia Chizhikova; Naveen Sivadasan; Saipradeep Vg; Zhiyong Lu
Journal:  Database (Oxford)       Date:  2022-08-31       Impact factor: 4.462

2.  A systematic review of natural language processing applied to radiology reports.

Authors:  Arlene Casey; Emma Davidson; Michael Poon; Hang Dong; Daniel Duma; Andreas Grivas; Claire Grover; Víctor Suárez-Paniagua; Richard Tobin; William Whiteley; Honghan Wu; Beatrice Alex
Journal:  BMC Med Inform Decis Mak       Date:  2021-06-03       Impact factor: 2.796

