| Literature DB >> 29016825 |
Gaurav Trivedi, Phuong Pham, Wendy W Chapman, Rebecca Hwa, Janyce Wiebe, Harry Hochheiser.
Abstract
The gap between domain experts and natural language processing expertise is a barrier to extracting understanding from clinical text. We describe a prototype tool for interactive review and revision of natural language processing models of binary concepts extracted from clinical notes. We evaluated our prototype in a user study involving 9 physicians, who used our tool to build and revise models for 2 colonoscopy quality variables. We report changes in performance relative to the quantity of feedback. Using initial training sets as small as 10 documents, expert review led to final F1 scores for the "appendiceal-orifice" variable between 0.78 and 0.91 (with improvements ranging from 13.26% to 29.90%). F1 for "biopsy" ranged between 0.88 and 0.94 (-1.52% to 11.74% improvements). The average System Usability Scale score was 70.56. Subjective feedback also suggests possible design improvements.
Keywords: electronic health records; machine learning; medical informatics; natural language processing (NLP); user-computer interface
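For reference, the F1 scores and relative improvements reported above follow standard definitions; the sketch below shows how such figures are typically computed. It is a minimal illustration only: the function names and the example true-positive/false-positive/false-negative counts are hypothetical and are not taken from the study.

# Minimal sketch, assuming standard precision/recall definitions;
# the counts used in the example are illustrative, not study data.

def f1_score(tp: int, fp: int, fn: int) -> float:
    """F1 is the harmonic mean of precision and recall."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def relative_improvement(before: float, after: float) -> float:
    """Percent change of the final score relative to the initial score."""
    return (after - before) / before * 100.0

if __name__ == "__main__":
    initial = f1_score(tp=60, fp=25, fn=20)   # hypothetical pre-feedback counts
    final = f1_score(tp=72, fp=10, fn=12)     # hypothetical post-feedback counts
    print(f"initial F1 = {initial:.2f}, final F1 = {final:.2f}, "
          f"improvement = {relative_improvement(initial, final):+.2f}%")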
Mesh:
Year: 2018 PMID: 29016825 PMCID: PMC6381768 DOI: 10.1093/jamia/ocx070
Source DB: PubMed Journal: J Am Med Inform Assoc ISSN: 1067-5027 Impact factor: 4.497