
Is consensus reproducible? A study of an algorithmic guidelines development process.

S D Pearson, C Z Margolis, S Davis, L K Schreier, H N Sokol, L K Gottlieb.

Abstract

The authors evaluated the reproducibility of a clinical algorithm consensus development process across three different physician panels at a health maintenance organization. Physician groups were composed of primary care internists, who were provided with identical selections from the medical literature and first-draft "seed" algorithms on the management of two common clinical problems: acute sinusitis and dyspepsia. Each panel used nominal group process and a modified Delphi method to create final algorithm drafts. To compare the clinical logic in the final algorithms, the authors applied a new qualitative and quantitative comparison method, the Clinical Algorithm Patient Abstraction (CAPA). Dyspepsia algorithms from all physician groups recommended empiric antacid therapy for most patients, favored endoscopy over barium swallow, and had very similar indications for endoscopy. The average CAPA comparison score among final physician algorithms was 6.1 on a scale of 0 (different) to 10 (identical). Sinusitis algorithms from all groups proposed empiric antibiotic therapy for most patients. Indications for sinus radiographs were similar between two algorithms (CAPA = 4.9) but differed significantly in the third, resulting in lower CAPA scores (average CAPA = 1.9, P < 0.03). The clinical similarity of the algorithms produced by these physician panels suggests a high level of reproducibility in this consensus-driven algorithm development process. However, the difference among the sinusitis algorithms suggests that physician consensus groups, using a process feasible for a health maintenance organization with limited resources, will produce some guidelines that vary because of differences in the interpretation of evidence and in physician experience.


Year:  1995        PMID: 7760579

Source DB:  PubMed          Journal:  Med Care        ISSN: 0025-7079            Impact factor:   2.983


Related articles:  4 in total

1.  How valid are utilization review tools in assessing appropriate use of acute care beds?

Authors:  N Kalant; M Berlinguet; J G Diodati; L Dragatakis; F Marcotte
Journal:  CMAJ       Date:  2000-06-27       Impact factor: 8.262

2.  The determination of relevant goals and criteria used to select an automated patient care information system: a Delphi approach.

Authors:  J K Chocholik; S E Bouchard; J K Tan; D N Ostrow
Journal:  J Am Med Inform Assoc       Date:  1999 May-Jun       Impact factor: 4.497

3.  The use of consensus methods and expert panels in pharmacoeconomic studies. Practical applications and methodological shortcomings. [Review]

Authors:  C Evans
Journal:  Pharmacoeconomics       Date:  1997-08       Impact factor: 4.981

4.  The First AO Classification System for Fractures of the Craniomaxillofacial Skeleton: Rationale, Methodological Background, Developmental Process, and Objectives.

Authors:  Laurent Audigé; Carl-Peter Cornelius; Antonio Di Ieva; Joachim Prein
Journal:  Craniomaxillofac Trauma Reconstr       Date:  2014-12
