Guilherme Del Fiol and Peter J Haug. Biomedical Informatics Department, University of Utah, Salt Lake City, UT, USA. guilherme.delfiol@utah.edu
Abstract
OBJECTIVE: Infobuttons are decision support tools that offer links to information resources based on the context of the interaction between a clinician and an electronic medical record (EMR) system. The objective of this study was to explore machine learning and web usage mining methods to produce classification models for the prediction of information resources that might be relevant in a particular infobutton context.
DESIGN: Classification models were developed and evaluated with an infobutton usage dataset. The performance of the models was measured and compared with a reference implementation in a series of experiments.
MEASUREMENTS: Level of agreement (kappa) between the models and the resources that clinicians actually used in each infobutton session.
RESULTS: The classification models performed significantly better than the reference implementation (p < .0001). The performance of these models tended to decrease over time, probably due to a phenomenon known as concept drift. However, the performance of the models remained stable when concept drift handling techniques were used.
CONCLUSIONS: The results suggest that classification models are a promising method for the prediction of information resources that a clinician would use to answer patient care questions.
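The abstract's outcome measure is the kappa statistic, which corrects raw agreement between the model's predicted resource and the resource the clinician actually used for agreement expected by chance. As a minimal illustration (this helper is not from the paper), Cohen's kappa over two label sequences can be computed as:

```python
from collections import Counter

def cohens_kappa(predicted, actual):
    """Cohen's kappa: observed agreement between two label sequences,
    corrected for the agreement expected by chance alone."""
    assert len(predicted) == len(actual) and predicted
    n = len(predicted)
    # Observed agreement: fraction of sessions where prediction matched usage.
    observed = sum(p == a for p, a in zip(predicted, actual)) / n
    # Chance agreement: product of each label's marginal frequencies.
    counts_p, counts_a = Counter(predicted), Counter(actual)
    expected = sum(counts_p[label] * counts_a[label]
                   for label in set(counts_p) | set(counts_a)) / (n * n)
    return (observed - expected) / (1 - expected)
```

Kappa of 0 means no better than chance and 1 means perfect agreement, which is why it is a stricter comparison than raw accuracy when one resource dominates usage.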