
Natural language inference for curation of structured clinical registries from unstructured text.

Bethany Percha1,2, Kereeti Pisapati3,4,5, Cynthia Gao1, Hank Schmidt4,5.   

Abstract

OBJECTIVE: Clinical registries (structured databases of demographic, diagnosis, and treatment information) play vital roles in retrospective studies, operational planning, and assessment of patient eligibility for research, including clinical trials. Registry curation, a manual and time-intensive process, is always costly and often impossible for rare or underfunded diseases. Our goal was to evaluate the feasibility of natural language inference (NLI) as a scalable solution for registry curation.
MATERIALS AND METHODS: We applied five state-of-the-art, pretrained, deep learning-based NLI models to clinical, laboratory, and pathology notes to infer information about 43 different breast oncology registry fields. Model inferences were evaluated against a manually curated, 7439-patient breast oncology research database.
RESULTS: NLI models showed considerable variation in performance, both within and across fields. One model, ALBERT, outperformed the others (BART, RoBERTa, XLNet, and ELECTRA) on 22 out of 43 fields. A detailed error analysis revealed that incorrect inferences primarily arose through models' tendency to misinterpret historical findings, as well as confusion based on abbreviations and subtle term variants common in clinical text.
DISCUSSION AND CONCLUSION: Traditional natural language processing methods require specially annotated training sets or the construction of a separate model for each registry field. In contrast, a single pretrained NLI model can curate dozens of different fields simultaneously. Surprisingly, NLI methods remain unexplored in the clinical domain outside the realm of shared tasks and benchmarks. Modern NLI models could increase the efficiency of registry curation, even when applied "out of the box" with no additional training.
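The approach the abstract describes (phrasing each registry field as a hypothesis and asking a pretrained NLI model whether a note entails it) can be sketched as follows. This is an illustrative sketch only: the field names, hypothesis templates, and the `roberta-large-mnli` checkpoint are assumptions, not the paper's actual registry schema or models.

```python
# Sketch of NLI-based registry curation: each registry field becomes a
# hypothesis, and a pretrained NLI model scores whether a clinical note
# (the premise) entails it. Fields and templates below are hypothetical.

FIELD_HYPOTHESES = {
    "er_status_positive": "The patient's tumor is estrogen receptor positive.",
    "her2_positive": "The patient's tumor is HER2 positive.",
    "prior_chemotherapy": "The patient has received chemotherapy.",
}

def build_nli_pairs(note_text: str, fields=FIELD_HYPOTHESES):
    """Pair one note (premise) with a hypothesis for each registry field."""
    return [(note_text, field, hypothesis)
            for field, hypothesis in fields.items()]

def curate_note(note_text: str, threshold: float = 0.5):
    """Score each field's hypothesis against the note with an NLI model.

    Requires the `transformers` package; `roberta-large-mnli` is one
    publicly available NLI checkpoint (the paper's exact checkpoints
    may differ). A field is marked True when the entailment probability
    exceeds the threshold.
    """
    from transformers import pipeline  # heavyweight; imported lazily
    nli = pipeline("text-classification", model="roberta-large-mnli")
    results = {}
    for premise, field, hypothesis in build_nli_pairs(note_text):
        scores = nli({"text": premise, "text_pair": hypothesis}, top_k=None)
        entail = next(s["score"] for s in scores
                      if s["label"] == "ENTAILMENT")
        results[field] = entail >= threshold
    return results

# Building the premise-hypothesis pairs requires no model download:
pairs = build_nli_pairs("ER positive, HER2 negative invasive ductal carcinoma.")
```

Because the hypotheses are just strings, adding a new registry field means adding one dictionary entry rather than annotating a new training set or building a new model, which is the scalability argument the paper makes.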
© The Author(s) 2021. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For permissions, please email: journals.permissions@oup.com.


Keywords:  clinical research; electronic health records; entailment recognition; machine learning; natural language inference; natural language processing; text mining


Year:  2021        PMID: 34791282      PMCID: PMC8714278          DOI: 10.1093/jamia/ocab243

Source DB:  PubMed          Journal:  J Am Med Inform Assoc        ISSN: 1067-5027            Impact factor:   4.497


  6 in total

1.  Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support.

Authors:  Paul A Harris; Robert Taylor; Robert Thielke; Jonathon Payne; Nathaniel Gonzalez; Jose G Conde
Journal:  J Biomed Inform       Date:  2008-09-30       Impact factor: 6.317

Review 2.  Evaluation of data quality in the cancer registry: principles and methods. Part I: comparability, validity and timeliness.

Authors:  Freddie Bray; D Max Parkin
Journal:  Eur J Cancer       Date:  2008-12-29       Impact factor: 9.162

Review 3.  Clinical registries: governance, management, analysis and applications.

Authors:  Graeme L Hickey; Stuart W Grant; Rebecca Cosgriff; Ioannis Dimarakis; Domenico Pagano; Arie P Kappetein; Ben Bridgewater
Journal:  Eur J Cardiothorac Surg       Date:  2013-01-30       Impact factor: 4.191

4.  Comparison of MetaMap and cTAKES for entity extraction in clinical notes.

Authors:  Ruth Reátegui; Sylvie Ratté
Journal:  BMC Med Inform Decis Mak       Date:  2018-09-14       Impact factor: 2.796

Review 5.  Examining the Use of Real-World Evidence in the Regulatory Process.

Authors:  Brett K Beaulieu-Jones; Samuel G Finlayson; William Yuan; Russ B Altman; Isaac S Kohane; Vinay Prasad; Kun-Hsing Yu
Journal:  Clin Pharmacol Ther       Date:  2019-11-14       Impact factor: 6.875

