Stefanie Bühn1, Peggy Ober2, Tim Mathes3, Uta Wegewitz4, Anja Jacobs5, Dawid Pieper3. 1. Institute for Research in Operative Medicine, Faculty of Health - School of Medicine, Witten/Herdecke University, Ostmerheimer Str. 200, Building 38, 51109, Cologne, Germany. Stefanie.Buehn@uni-wh.de. 2. LIFE Child, LIFE Leipzig Research Center for Civilization Diseases, Leipzig University, Ph.-Rosenthal-Str. 27, 04103, Leipzig, Germany. 3. Institute for Research in Operative Medicine, Faculty of Health - School of Medicine, Witten/Herdecke University, Ostmerheimer Str. 200, Building 38, 51109, Cologne, Germany. 4. Federal Institute for Occupational Safety and Health (BAuA), Nöldnerstr. 40-42, 10317, Berlin, Germany. 5. Federal Joint Committee (Healthcare), Gutenbergstraße 13, 10587, Berlin, Germany.
Abstract
BACKGROUND: Systematic reviews (SRs) can form the groundwork for evidence-based health care decision-making, so sound methodological quality of SRs is crucial. AMSTAR (A Measurement Tool to Assess Systematic Reviews) is a widely used tool developed to assess the methodological quality of SRs of randomized controlled trials (RCTs). Research suggests that AMSTAR is valid and reliable in terms of interrater reliability (IRR), but its test-retest reliability (TRR) has never been investigated. In this study we investigated the TRR of AMSTAR to evaluate the importance of measuring it and to contribute to the discussion of the measurement properties of AMSTAR and other quality assessment tools. METHODS: Seven raters at three institutions independently assessed the methodological quality of SRs in the field of occupational health with AMSTAR. The first and second ratings were approximately two years apart. Answers were dichotomized, and we calculated the TRR for all raters and AMSTAR items using Gwet's AC1 coefficient. To investigate the impact of variation in the ratings over time, we obtained summary scores for each review. RESULTS: AMSTAR item 4 ("Was the status of publication used as an inclusion criterion?") showed the lowest median TRR at 0.53 (moderate agreement). All raters agreed perfectly on AMSTAR item 1 (Gwet's AC1 = 1). The median TRR of the individual raters ranged from 0.69 (substantial agreement) to 0.89 (almost perfect agreement). A variation of two or more points in yes-scored AMSTAR items was observed in 65% (73/112) of all assessments. CONCLUSIONS: The high variation between the first and second AMSTAR ratings suggests that the TRR should be considered when evaluating the psychometric properties of AMSTAR. However, more evidence is needed on this neglected aspect of measurement properties.
Our results may initiate discussion of the importance of considering the TRR of assessment tools. A further examination of the TRR of AMSTAR, as well as other recently established rating tools such as AMSTAR 2 and ROBIS (Risk Of Bias In Systematic reviews), would be useful.
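Gwet's AC1, used above for the dichotomized item ratings, corrects the observed agreement between two ratings for chance agreement under Gwet's model: AC1 = (p_a − p_e) / (1 − p_e), with p_e = 2π(1 − π), where π is the overall proportion of "yes" answers across both ratings. A minimal sketch in Python, with hypothetical item ratings for illustration only (not data from the study):

```python
def gwets_ac1(rating1, rating2):
    """Chance-corrected agreement (Gwet's AC1) between two binary rating vectors."""
    assert len(rating1) == len(rating2)
    n = len(rating1)
    # Observed agreement: proportion of items rated identically both times.
    p_a = sum(a == b for a, b in zip(rating1, rating2)) / n
    # Overall probability of a "yes" across both ratings.
    pi = (sum(rating1) + sum(rating2)) / (2 * n)
    # Chance agreement under Gwet's model.
    p_e = 2 * pi * (1 - pi)
    return (p_a - p_e) / (1 - p_e)

# Hypothetical example: 11 AMSTAR items dichotomized to yes = 1 / no = 0,
# rated twice by the same rater (a test-retest pair).
first  = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1]
second = [1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(gwets_ac1(first, second))
```

Because p_e = 2π(1 − π) never exceeds 0.5, the denominator stays bounded away from zero, which is why AC1 avoids the paradoxically low values that Cohen's kappa can produce when one answer dominates.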