| Literature DB >> 33711545 |
Kento Sugimoto, Toshihiro Takeda, Jong-Hoon Oh, Shoya Wada, Shozo Konishi, Asuka Yamahata, Shiro Manabe, Noriyuki Tomiyama, Takashi Matsunaga, Katsuyuki Nakanishi, Yasushi Matsumura.
Abstract
Extracting clinical terms from free-text radiology reports is an important first step toward their secondary use. However, there is no general consensus on the kinds of terms to be extracted. In this paper, we propose an information model comprising three types of clinical entities: observations, clinical findings, and modifiers. Furthermore, to determine its applicability to in-house radiology reports, we extracted clinical terms with state-of-the-art deep learning models and compared the results. We trained and evaluated the models using 540 in-house chest computed tomography (CT) reports annotated by multiple medical experts. Two deep learning models were compared, and the effect of pre-training was explored. To investigate the generalizability of the model, we evaluated it on chest CT reports from another institution. The micro F1-scores of our best-performing model on the in-house and external datasets were 95.36% and 94.62%, respectively. Our results indicate that the entities defined in our information model are suitable for extracting clinical terms from radiology reports, and that the model is sufficiently generalizable to be used with datasets from other institutions.
Keywords: Deep Learning; Information Extraction; Natural Language Processing; Radiology Report
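The extraction task the abstract describes is a named-entity recognition problem over the three entity types of the proposed information model. A minimal sketch (not the authors' code) of how annotated spans for those types could be encoded as BIO labels for training such a model; the example sentence, span indices, and type abbreviations (OBS, FIN, MOD) are illustrative assumptions:

```python
# Minimal sketch: BIO tagging over the three entity types in the paper's
# information model (observations, clinical findings, modifiers).
# The sentence and span assignments below are hypothetical, not from the paper.

def to_bio(tokens, spans):
    """Convert token-level entity spans to BIO labels.

    tokens: list of token strings
    spans:  list of (start, end_exclusive, entity_type) tuples
    """
    labels = ["O"] * len(tokens)
    for start, end, etype in spans:
        labels[start] = f"B-{etype}"        # first token of the entity
        for i in range(start + 1, end):
            labels[i] = f"I-{etype}"        # continuation tokens
    return labels

# Hypothetical chest-CT phrase with illustrative entity assignments:
tokens = ["A", "small", "nodule", "in", "the", "right", "upper", "lobe"]
spans = [(1, 2, "MOD"), (2, 3, "FIN"), (5, 8, "MOD")]
print(to_bio(tokens, spans))
# → ['O', 'B-MOD', 'B-FIN', 'O', 'O', 'B-MOD', 'I-MOD', 'I-MOD']
```

Token-level BIO labels like these are the standard supervision format for the sequence-labeling models the paper compares, and micro F1 over such labels matches the evaluation metric reported in the abstract.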
Year: 2021 PMID: 33711545 DOI: 10.1016/j.jbi.2021.103729
Source DB: PubMed Journal: J Biomed Inform ISSN: 1532-0464 Impact factor: 6.317