Aaron M. Cohen, Kyle Ambert, Marian McDonagh. Department of Medical Informatics and Clinical Epidemiology, School of Medicine, Oregon Health & Science University, 3181 S.W. Sam Jackson Park Road, Mail Code: BICC, Portland, OR 97239-3098, USA. cohenaa@ohsu.edu
Abstract
OBJECTIVE: Machine learning systems can aid experts performing systematic reviews (SRs) by automatically ranking journal articles for work prioritization. This work investigates whether a topic-specific automated document ranking system for SRs can be improved using a hybrid approach that combines topic-specific training data with data from other SR topics. DESIGN: A test collection was built using annotated reference files from 24 systematic drug class reviews. A support vector machine learning algorithm was evaluated with cross-validation, using seven different fractions of topic-specific training data in combination with samples from the other 23 topics. This approach was compared to both a baseline system, which used only topic-specific training data, and to a system using only the nontopic data sampled from the remaining topics. MEASUREMENTS: Mean area under the receiver operating characteristic curve (AUC) was used as the measure of comparison. RESULTS: On average, the hybrid system improved mean AUC over the baseline system by 20% when topic-specific training data were scarce. The system performed significantly better than the baseline system at all levels of topic-specific training data. In addition, the system performed better than the nontopic system at all but the two smallest fractions of topic-specific training data, and no worse than the nontopic system with these smallest amounts of topic-specific training data. CONCLUSIONS: Automated literature prioritization could be helpful in assisting experts to organize their time when performing systematic reviews. Future work will focus on extending the algorithm to use additional sources of topic-specific data, and on embedding the algorithm in an interactive system available to systematic reviewers during the literature review process.
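The hybrid training scheme described in the abstract can be sketched as follows: train an SVM on a small fraction of the topic's labeled documents pooled with labeled documents sampled from other review topics, rank the topic's remaining documents by decision value, and score the ranking with AUC. This is a minimal sketch under stated assumptions; the synthetic feature generator, the three nontopic pools, and the `LinearSVC` settings are illustrative stand-ins, not the authors' actual document representation or sampling protocol.

```python
# Minimal sketch of hybrid topic + nontopic SVM ranking, scored by AUC.
# Features and topics are synthetic stand-ins for real document vectors.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_topic(n, shift):
    """Synthetic stand-in for one review topic: features X, labels y."""
    y = rng.integers(0, 2, n)                      # 1 = relevant article
    X = rng.normal(0.0, 1.0, (n, 20)) + np.outer(y, np.full(20, shift))
    return X, y

# Topic of interest: only a small fraction is labeled for training ("scarce").
X_topic, y_topic = make_topic(200, 0.6)
n_train = 20
X_tr, y_tr = X_topic[:n_train], y_topic[:n_train]
X_te, y_te = X_topic[n_train:], y_topic[n_train:]

# Nontopic data sampled from other review topics (three synthetic topics).
others = [make_topic(100, 0.4) for _ in range(3)]
X_non = np.vstack([X for X, _ in others])
y_non = np.concatenate([y for _, y in others])

# Hybrid system: fit on topic-specific plus nontopic training data.
clf = LinearSVC(C=1.0).fit(np.vstack([X_tr, X_non]),
                           np.concatenate([y_tr, y_non]))

# Rank held-out topic documents by SVM decision value; evaluate with AUC.
auc = roc_auc_score(y_te, clf.decision_function(X_te))
print(f"hybrid AUC: {auc:.2f}")
```

A baseline comparison would simply refit on `X_tr, y_tr` alone; with only 20 topic-specific examples, the pooled nontopic data typically stabilizes the decision boundary, which is the effect the paper measures.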