Literature DB >> 34839119

Fine-grained spatial information extraction in radiology as two-turn question answering.

Surabhi Datta, Kirk Roberts.

Abstract

OBJECTIVES: Radiology reports contain important clinical information that can be used to automatically construct fine-grained labels for applications requiring deep phenotyping. We propose a two-turn question answering (QA) method based on a transformer language model, BERT, for extracting detailed spatial information from radiology reports. We aim to demonstrate the advantage that a multi-turn QA framework provides over sequence-based methods for extracting fine-grained information.
METHODS: Our proposed method identifies spatial and descriptor information by answering queries over a radiology report text. We frame the extraction problem such that all the main radiology entities (e.g., finding, device, anatomy) and the spatial trigger terms (denoting the presence of a spatial relation between a finding/device and an anatomical location) are identified in the first turn. In the second turn, the contextual information that fills important spatial roles with respect to each spatial trigger term is extracted, along with the spatial and other descriptor terms qualifying each radiological entity. The queries are constructed using separate templates for the two turns, and we employ two query variations in the second turn.
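The two-turn query construction described above can be sketched in plain Python. This is a minimal illustration, not the authors' actual system: the template wording, role names, and frame-element descriptions below are assumptions, and the second-turn "with description" variant stands in for the domain-knowledge query variation mentioned in the abstract.

```python
# Hypothetical sketch of two-turn QA query templating for spatial
# information extraction. Template strings and role descriptions are
# illustrative assumptions, not the paper's exact queries.

# Turn 1: one query per main radiology entity type, plus spatial triggers.
TURN1_TEMPLATE = "find all {entity_type} entities and spatial trigger terms in the report"

# Turn 2: one query per (role, trigger) pair identified in turn 1.
TURN2_TEMPLATES = {
    "plain": "find the {role} related to the spatial trigger '{trigger}'",
    "with_description": (
        "find the {role} related to the spatial trigger '{trigger}'. "
        "{role_description}"
    ),
}

# General descriptions of frame elements, used to inject domain
# knowledge into the second-turn query (illustrative wording).
ROLE_DESCRIPTIONS = {
    "Figure": "The Figure is the finding or device whose location is described.",
    "Ground": "The Ground is the anatomical location of the finding or device.",
}


def build_turn1_query(entity_type: str) -> str:
    """Construct the first-turn query for one entity type."""
    return TURN1_TEMPLATE.format(entity_type=entity_type)


def build_turn2_query(role: str, trigger: str, use_description: bool = False) -> str:
    """Construct the second-turn query for a frame element, optionally
    appending a general role description as domain knowledge."""
    if use_description:
        return TURN2_TEMPLATES["with_description"].format(
            role=role, trigger=trigger, role_description=ROLE_DESCRIPTIONS[role]
        )
    return TURN2_TEMPLATES["plain"].format(role=role, trigger=trigger)
```

In a full pipeline, each generated query would be paired with the report text and fed to a BERT-based extractive QA model (e.g., a span-prediction head); the sketch above only covers the templating step.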
RESULTS: When compared to the best-reported work on this task, which uses a traditional sequence tagging method, the two-turn QA model exceeds its performance on every component. This includes promising improvements of 12, 13, and 12 points in the average F1 scores for identifying the spatial triggers, Figure, and Ground frame elements, respectively.
DISCUSSION: Our experiments suggest that incorporating domain knowledge in the query (a general description of a frame element) helps obtain better results for some of the spatial and descriptive frame elements, especially for the clinically pre-trained BERT model. We further highlight that the two-turn QA approach fits well for extracting information under a complex schema where the objective is to identify all the frame elements linked to each spatial trigger and finding/device/anatomy entity, thereby enabling the extraction of more comprehensive information in the radiology domain.
CONCLUSION: Extracting fine-grained spatial information from text in the form of answering natural language queries holds potential in achieving better results when compared to more standard sequence labeling-based approaches.
Copyright © 2021 Elsevier B.V. All rights reserved.

Keywords:  Deep learning; Information extraction; Natural language processing; Question answering; Radiology report; Spatial information

Year:  2021        PMID: 34839119      PMCID: PMC9072592          DOI: 10.1016/j.ijmedinf.2021.104628

Source DB:  PubMed          Journal:  Int J Med Inform        ISSN: 1386-5056            Impact factor:   4.730


