BACKGROUND: We undertook research to improve the AGREE instrument, a tool used to evaluate guidelines. We tested a new seven-point scale, evaluated the usefulness of the original items in the instrument, investigated evidence to support shorter, tailored versions of the tool, and identified areas for improvement.
METHOD: We report on one component of a larger study that used a mixed design with four factors (user type, clinical topic, guideline and condition). For the analysis reported in this article, we asked participants to read a guideline and use the AGREE items to evaluate it based on a seven-point scale, to complete three outcome measures related to adoption of the guideline, and to provide feedback on the instrument's usefulness and how to improve it.
RESULTS: Guideline developers gave lower quality ratings than did clinicians or policy-makers. Five of six domains were significant predictors of participants' outcome measures (p < 0.05). All domains and items were rated as useful by stakeholders (mean scores > 4.0), with no significant differences by user type (p > 0.05). Internal consistency ranged between 0.64 and 0.89. Inter-rater reliability was satisfactory. We received feedback on how to improve the instrument.
INTERPRETATION: Quality ratings of the AGREE domains were significant predictors of outcome measures associated with guideline adoption: guideline endorsements, overall intentions to use guidelines, and overall quality of guidelines. All AGREE items were assessed as useful in determining whether a participant would use a guideline. No clusters of items were found more useful by some users than others. The measurement properties of the seven-point scale were promising. These data contributed to the refinements and release of the AGREE II.
Authors: Melissa C Brouwers; Michelle E Kho; George P Browman; Jako S Burgers; Francoise Cluzeau; Gene Feder; Béatrice Fervers; Ian D Graham; Jeremy Grimshaw; Steven E Hanna; Peter Littlejohns; Julie Makarski; Louise Zitzelsberger Journal: CMAJ Date: 2010-07-05 Impact factor: 8.262
Authors: Melissa C Brouwers; Ian D Graham; Steven E Hanna; David A Cameron; George P Browman Journal: Int J Technol Assess Health Care Date: 2004 Impact factor: 2.188
Authors: Melissa C Brouwers; Michelle E Kho; George P Browman; Jako S Burgers; Françoise Cluzeau; Gene Feder; Béatrice Fervers; Ian D Graham; Steven E Hanna; Julie Makarski Journal: CMAJ Date: 2010-05-31 Impact factor: 8.262
Authors: Martin P Eccles; Jill Francis; Robbie Foy; Marie Johnston; Claire Bamford; Jeremy M Grimshaw; Julian Hughes; Jan Lecouturier; Nick Steen; Paula M Whitty Journal: Int J Behav Med Date: 2009
Authors: David Liu; Erica Peterson; James Dooner; Mark Baerlocher; Leslie Zypchen; Joel Gagnon; Michael Delorme; Chad Kim Sing; Jason Wong; Randolph Guzman; Gavin Greenfield; Otto Moodley; Paul Yenson Journal: CMAJ Date: 2015-09-28 Impact factor: 8.262
Authors: Smita Bhatia; Saro H Armenian; Gregory T Armstrong; Eline van Dulmen-den Broeder; Michael M Hawkins; Leontien C M Kremer; Claudia E Kuehni; Jørgen H Olsen; Leslie L Robison; Melissa M Hudson Journal: J Clin Oncol Date: 2015-08-24 Impact factor: 44.544
Authors: Thomas Semlitsch; Wolfgang A Blank; Ina B Kopp; Ulrich Siering; Andrea Siebenhofer Journal: Dtsch Arztebl Int Date: 2015-07-06 Impact factor: 5.594
Authors: M Diane Lougheed; Catherine Lemiere; Francine M Ducharme; Chris Licskai; Sharon D Dell; Brian H Rowe; Mark Fitzgerald; Richard Leigh; Wade Watson; Louis-Philippe Boulet Journal: Can Respir J Date: 2012 Mar-Apr Impact factor: 2.409
Authors: D Howell; S Keller-Olaman; T K Oliver; T F Hack; L Broadfield; K Biggs; J Chung; D Gravelle; E Green; M Hamel; T Harth; P Johnston; D McLeod; N Swinton; A Syme; K Olson Journal: Curr Oncol Date: 2013-06 Impact factor: 3.677
Authors: Shoshana J Herzig; Susan L Calcaterra; Hilary J Mosher; Matthew V Ronan; Nicole Van Groningen; Lili Shek; Anthony Loffredo; Michelle Keller; Anupam B Jena; Teryl K Nuckols Journal: J Hosp Med Date: 2018-04 Impact factor: 2.960