Bethany Percha1,2, Kereeti Pisapati3,4,5, Cynthia Gao1, Hank Schmidt4,5. 1. Department of Medicine, Icahn School of Medicine at Mount Sinai, New York, New York, USA. 2. Department of Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, New York, USA. 3. Mount Sinai Innovation Partners, Mount Sinai Health System, New York, New York, USA. 4. Breast Surgical Oncology, Icahn School of Medicine at Mount Sinai, New York, New York, USA. 5. Tisch Cancer Institute, Icahn School of Medicine at Mount Sinai, New York, New York, USA.
Abstract
OBJECTIVE: Clinical registries (structured databases of demographic, diagnosis, and treatment information) play vital roles in retrospective studies, operational planning, and assessment of patient eligibility for research, including clinical trials. Registry curation, a manual and time-intensive process, is always costly and often impossible for rare or underfunded diseases. Our goal was to evaluate the feasibility of natural language inference (NLI) as a scalable solution for registry curation. MATERIALS AND METHODS: We applied five state-of-the-art, pretrained, deep learning-based NLI models to clinical, laboratory, and pathology notes to infer information about 43 different breast oncology registry fields. Model inferences were evaluated against a manually curated, 7439-patient breast oncology research database. RESULTS: NLI models showed considerable variation in performance, both within and across fields. One model, ALBERT, outperformed the others (BART, RoBERTa, XLNet, and ELECTRA) on 22 of the 43 fields. A detailed error analysis revealed that incorrect inferences primarily arose from models' tendency to misinterpret historical findings, as well as from confusion caused by abbreviations and subtle term variants common in clinical text. DISCUSSION AND CONCLUSION: Traditional natural language processing methods require specially annotated training sets or the construction of a separate model for each registry field. In contrast, a single pretrained NLI model can curate dozens of different fields simultaneously. Surprisingly, NLI methods remain unexplored in the clinical domain outside the realm of shared tasks and benchmarks. Modern NLI models could increase the efficiency of registry curation, even when applied "out of the box" with no additional training.
Keywords:
clinical research; electronic health records; entailment recognition; machine learning; natural language inference; natural language processing; text mining