G Hripcsak, B Allen, J J Cimino, R Lee. Department of Medical Informatics, Columbia-Presbyterian Medical Center, New York, NY, USA. hripcsak@columbia.edu
Abstract
OBJECTIVE: To evaluate the performance of tools for authoring patient database queries. DESIGN: Query by Review, a tool that exploits the training that users have undergone to master a result review system, was compared with AccessMed, a vocabulary browser that supports lexical matching and the traversal of hierarchical and semantic links. Seven subjects (Medical Logic Module authors) were asked to use both tools to gather the vocabulary terms necessary to perform each of eight laboratory queries. MEASUREMENTS: The proportion of queries that were correct; intersubject agreement. RESULTS: Query by Review had better performance than AccessMed (38% correct queries versus 18%, p = 0.002), but both figures were low. Poor intersubject agreement (28% for Query by Review and 21% for AccessMed) corroborated the relatively low performance. Subjects appeared to have trouble distinguishing laboratory tests from laboratory batteries, picking terms relevant to the particular data type required, and using classes in the vocabulary's hierarchy. CONCLUSION: Query by Review, with its more constrained user interface, performed somewhat better than AccessMed, a more general tool. Neither tool achieved adequate performance, however, which points to the difficulty of formulating a query for a clinical database and the need for further work.