
Understanding spatial language in radiology: Representation framework, annotation, and spatial relation extraction from chest X-ray reports using deep learning.

Surabhi Datta, Yuqi Si, Laritza Rodriguez, Sonya E Shooshan, Dina Demner-Fushman, Kirk Roberts.

Abstract

Radiology reports contain a radiologist's interpretations of images, and these interpretations frequently describe spatial relations. Important radiographic findings are mostly described in reference to an anatomical location through spatial prepositions. Such spatial relationships are also linked to various differential diagnoses and often qualified by uncertainty phrases. Structured representation of this clinically significant spatial information has the potential to be used in a variety of downstream clinical informatics applications. Our focus is to extract these spatial representations from the reports. For this, we first define a representation framework based on the Spatial Role Labeling (SpRL) scheme, which we refer to as Rad-SpRL. In Rad-SpRL, common radiological entities tied to spatial relations are encoded through four spatial roles: Trajector, Landmark, Diagnosis, and Hedge, all identified in relation to a spatial preposition (or Spatial Indicator). We annotated a total of 2,000 chest X-ray reports following Rad-SpRL. We then propose a deep learning-based natural language processing (NLP) method involving word- and character-level encodings that first extracts the Spatial Indicators and then identifies the corresponding spatial roles. Specifically, we use a bidirectional long short-term memory (Bi-LSTM) conditional random field (CRF) neural network as the baseline model. Additionally, we incorporate contextualized word representations from pre-trained language models (BERT and XLNet) for extracting the spatial information. We evaluate both gold and predicted Spatial Indicators for extracting the four types of spatial roles. The results are promising: the highest average F1 measure for Spatial Indicator extraction is 91.29 (XLNet); the highest average overall F1 measure across all four spatial roles is 92.9 using gold Indicators (XLNet) and 85.6 using predicted Indicators (BERT pre-trained on MIMIC notes).
The corpus is available in Mendeley at http://dx.doi.org/10.17632/yhb26hfz8n.1 and https://github.com/krobertslab/datasets/blob/master/Rad-SpRL.xml.
Copyright © 2020 Elsevier Inc. All rights reserved.
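The Rad-SpRL frame and the two-stage pipeline described in the abstract can be made concrete with a small sketch. This is not the authors' code: the dataclass fields simply mirror the four spatial roles plus the anchoring Spatial Indicator, and the lexicon-based extractor below is a hypothetical stand-in for the Bi-LSTM-CRF / BERT / XLNet sequence labelers the paper actually uses.

```python
from dataclasses import dataclass
from typing import List, Optional

# Illustration of a Rad-SpRL relation: four roles identified
# in relation to a Spatial Indicator (a spatial preposition).
@dataclass
class RadSpRLRelation:
    spatial_indicator: str            # e.g. "in"
    trajector: Optional[str] = None   # finding being located, e.g. "opacity"
    landmark: Optional[str] = None    # anatomical anchor, e.g. "left lower lobe"
    diagnosis: Optional[str] = None   # differential tied to the relation
    hedge: Optional[str] = None       # uncertainty phrase, e.g. "may represent"

# Toy two-stage flow mirroring the pipeline shape only: detect the
# indicator first, then assign roles relative to it. A real system would
# use trained sequence labelers; this lexicon lookup is a stand-in.
INDICATORS = {"in", "at", "within", "near", "along"}

def extract_relations(tokens: List[str]) -> List[RadSpRLRelation]:
    relations = []
    for i, tok in enumerate(tokens):
        if tok.lower() in INDICATORS:
            # naive role assignment: text before the indicator becomes the
            # Trajector span, text after it becomes the Landmark span
            relations.append(RadSpRLRelation(
                spatial_indicator=tok,
                trajector=" ".join(tokens[:i]) or None,
                landmark=" ".join(tokens[i + 1:]) or None,
            ))
    return relations

rels = extract_relations("Focal opacity in the left lower lobe".split())
print(rels[0].spatial_indicator, "|", rels[0].trajector, "|", rels[0].landmark)
```

The sketch shows why indicator detection comes first: each relation is anchored on a preposition, so the roles are only well-defined once that anchor token is fixed.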

Keywords:  Deep learning; NLP; Radiology report; Spatial relations

Year:  2020        PMID: 32562898      PMCID: PMC7807990          DOI: 10.1016/j.jbi.2020.103473

Source DB:  PubMed          Journal:  J Biomed Inform        ISSN: 1532-0464            Impact factor:   6.317


References (29 in total)

1.  RadLex: a new method for indexing online educational materials.

Authors:  Curtis P Langlotz
Journal:  Radiographics       Date:  2006 Nov-Dec       Impact factor: 5.333

2.  Spatial ability in radiologists: a necessary prerequisite?

Authors:  D Birchall
Journal:  Br J Radiol       Date:  2015-03-10       Impact factor: 3.039

3.  Automated Triaging of Adult Chest Radiographs with Deep Artificial Neural Networks.

Authors:  Mauro Annarumma; Samuel J Withey; Robert J Bakewell; Emanuele Pesce; Vicky Goh; Giovanni Montana
Journal:  Radiology       Date:  2019-01-22       Impact factor: 11.105

4.  Automatic Extraction and Post-coordination of Spatial Relations in Consumer Language.

Authors:  Kirk Roberts; Laritza Rodriguez; Sonya E Shooshan; Dina Demner-Fushman
Journal:  AMIA Annu Symp Proc       Date:  2015-11-05

5.  A general natural-language text processor for clinical radiology.

Authors:  C Friedman; P O Alderson; J H Austin; J J Cimino; S B Johnson
Journal:  J Am Med Inform Assoc       Date:  1994 Mar-Apr       Impact factor: 4.497

6.  Architectural requirements for a multipurpose natural language processor in the clinical environment.

Authors:  C Friedman; S B Johnson; B Forman; J Starren
Journal:  Proc Annu Symp Comput Appl Med Care       Date:  1995

7.  Preparing a collection of radiology examinations for distribution and retrieval.

Authors:  Dina Demner-Fushman; Marc D Kohli; Marc B Rosenman; Sonya E Shooshan; Laritza Rodriguez; Sameer Antani; George R Thoma; Clement J McDonald
Journal:  J Am Med Inform Assoc       Date:  2015-07-01       Impact factor: 4.497

8.  Tumor information extraction in radiology reports for hepatocellular carcinoma patients.

Authors:  Wen-Wai Yim; Tyler Denman; Sharon W Kwan; Meliha Yetisgen
Journal:  AMIA Jt Summits Transl Sci Proc       Date:  2016-07-20

9.  A neural joint model for entity and relation extraction from biomedical text.

Authors:  Fei Li; Meishan Zhang; Guohong Fu; Donghong Ji
Journal:  BMC Bioinformatics       Date:  2017-03-31       Impact factor: 3.169

10.  BioBERT: a pre-trained biomedical language representation model for biomedical text mining.

Authors:  Jinhyuk Lee; Wonjin Yoon; Sungdong Kim; Donghyeon Kim; Sunkyu Kim; Chan Ho So; Jaewoo Kang
Journal:  Bioinformatics       Date:  2020-02-15       Impact factor: 6.937

Cited by (5 in total)

1.  Identifying ARDS using the Hierarchical Attention Network with Sentence Objectives Framework.

Authors:  Kevin Lybarger; Linzee Mabrey; Matthew Thau; Pavan K Bhatraju; Mark Wurfel; Meliha Yetisgen
Journal:  AMIA Annu Symp Proc       Date:  2022-02-21

2.  Increasing Women's Knowledge about HPV Using BERT Text Summarization: An Online Randomized Study.

Authors:  Hind Bitar; Amal Babour; Fatema Nafa; Ohoud Alzamzami; Sarah Alismail
Journal:  Int J Environ Res Public Health       Date:  2022-07-01       Impact factor: 4.614

3.  A dataset of chest X-ray reports annotated with Spatial Role Labeling annotations.

Authors:  Surabhi Datta; Kirk Roberts
Journal:  Data Brief       Date:  2020-07-25

4.  Deep Learning-Based Natural Language Processing in Radiology: The Impact of Report Complexity, Disease Prevalence, Dataset Size, and Algorithm Type on Model Performance.

Authors:  A W Olthof; P M A van Ooijen; L J Cornelissen
Journal:  J Med Syst       Date:  2021-09-04       Impact factor: 4.460

5.  Event-Based Clinical Finding Extraction from Radiology Reports with Pre-trained Language Model.

Authors:  Wilson Lau; Kevin Lybarger; Martin L Gunn; Meliha Yetisgen
Journal:  J Digit Imaging       Date:  2022-10-17       Impact factor: 4.903


Coyote Bioscience (Beijing) Co., Ltd. © 2022-2023.