
A cognitive evaluation of four online search engines for answering definitional questions posed by physicians.

Hong Yu, David Kaufman.

Abstract

The Internet is having a profound impact on physicians' medical decision making. One recent survey of 277 physicians showed that 72% of physicians regularly used the Internet to research medical information and 51% admitted that information from web sites influenced their clinical decisions. This paper describes the first cognitive evaluation of four state-of-the-art Internet search engines for answering definitional questions (i.e., questions of the form "What is X?") posed by physicians: Google (i.e., Google and Scholar.Google), MedQA, Onelook, and PubMed. Onelook is a portal for online definitions, and MedQA is a question answering system that automatically generates short texts to answer specific biomedical questions. Our evaluation criteria included quality of answer, ease of use, time spent, and number of actions taken. Our results show that MedQA outperformed Onelook and PubMed on most of the criteria, and that MedQA surpassed Google in time spent and number of actions, two important efficiency criteria. Google was the best system for quality of answer and ease of use. We conclude that Google is an effective search engine for medical definitions, and that MedQA exceeds the other search engines in that it provides users direct answers to their questions, while users of the other search engines had to visit several sites before finding all of the pertinent information.

MeSH:

Year:  2007        PMID: 17990503

Source DB:  PubMed          Journal:  Pac Symp Biocomput        ISSN: 2335-6928


Related articles: 6 in total

1.  Automatically extracting information needs from complex clinical questions.

Authors:  Yong-gang Cao; James J Cimino; John Ely; Hong Yu
Journal:  J Biomed Inform       Date:  2010-07-27       Impact factor: 6.317

2.  AskHERMES: An online question answering system for complex clinical questions.

Authors:  YongGang Cao; Feifan Liu; Pippa Simpson; Lamont Antieau; Andrew Bennett; James J Cimino; John Ely; Hong Yu
Journal:  J Biomed Inform       Date:  2011-01-21       Impact factor: 6.317

3.  Using the weighted keyword model to improve information retrieval for answering biomedical questions.

Authors:  Hong Yu; Yong-Gang Cao
Journal:  Summit Transl Bioinform       Date:  2009-03-01

4.  A Natural Language Processing System That Links Medical Terms in Electronic Health Record Notes to Lay Definitions: System Development Using Physician Reviews.

Authors:  Jinying Chen; Emily Druhl; Balaji Polepalli Ramesh; Thomas K Houston; Cynthia A Brandt; Donna M Zulman; Varsha G Vimalananda; Samir Malkani; Hong Yu
Journal:  J Med Internet Res       Date:  2018-01-22       Impact factor: 5.428

5.  List-wise learning to rank biomedical question-answer pairs with deep ranking recursive autoencoders.

Authors:  Yan Yan; Bo-Wen Zhang; Xu-Feng Li; Zhenhan Liu
Journal:  PLoS One       Date:  2020-11-09       Impact factor: 3.240

6.  Construct validity and internal consistency reliability of the Loewenstein occupational therapy cognitive assessment (LOTCA).

Authors:  Fidaa Almomani; Tamara Avi-Itzhak; Naor Demeter; Naomi Josman; Murad O Al-Momani
Journal:  BMC Psychiatry       Date:  2018-06-11       Impact factor: 3.630

