OBJECTIVE: Our purpose was to measure the agreement, reliability, construct validity, and feasibility of a measurement tool to assess systematic reviews (AMSTAR). STUDY DESIGN AND SETTING: We randomly selected 30 systematic reviews from a database. Each was assessed by two reviewers using: (1) the enhanced quality assessment questionnaire (Overview of Quality Assessment Questionnaire [OQAQ]); (2) Sacks' instrument; and (3) our newly developed measurement tool (AMSTAR). We report on reliability (interobserver kappas of the 11 AMSTAR items), intraclass correlation coefficients (ICCs) of the sum scores, construct validity (ICCs of the sum scores of AMSTAR compared with those of other instruments), and completion times. RESULTS: The interrater agreement of the individual items of AMSTAR was substantial with a mean kappa of 0.70 (95% confidence interval [CI]: 0.57, 0.83) (range: 0.38-1.0). Kappas recorded for the other instruments were 0.63 (95% CI: 0.38, 0.78) for enhanced OQAQ and 0.40 (95% CI: 0.29, 0.50) for the Sacks' instrument. The ICC of the total score for AMSTAR was 0.84 (95% CI: 0.65, 0.92) compared with 0.91 (95% CI: 0.82, 0.96) for OQAQ and 0.86 (95% CI: 0.71, 0.94) for the Sacks' instrument. AMSTAR proved easy to apply, each review taking about 15 minutes to complete. CONCLUSIONS: AMSTAR has good agreement, reliability, construct validity, and feasibility. These findings need confirmation by a broader range of assessors and a more diverse range of reviews.
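The per-item agreement figures above are Cohen's kappa statistics, i.e. interrater agreement corrected for chance. As an illustrative sketch only (this is not the authors' code, and the ratings below are hypothetical), kappa for two raters scoring the same checklist items can be computed as:

```python
# Illustrative sketch (hypothetical data): Cohen's kappa for two raters
# scoring the same items, the statistic used for AMSTAR's per-item agreement.

def cohens_kappa(rater1, rater2):
    """Chance-corrected agreement between two raters over the same items."""
    assert len(rater1) == len(rater2) and rater1
    n = len(rater1)
    # Observed agreement: proportion of items rated identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    labels = set(rater1) | set(rater2)
    p_e = sum((rater1.count(c) / n) * (rater2.count(c) / n) for c in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical yes/no (1/0) ratings for eight checklist items:
r1 = [1, 1, 0, 1, 0, 1, 1, 0]
r2 = [1, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(r1, r2), 3))  # → 0.467
```

Raw percent agreement here would be 0.75, but kappa discounts the agreement expected by chance alone, which is why it is the preferred measure for dichotomous instrument items.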
Authors: Davy Vancampfort; Joseph Firth; Christoph U Correll; Marco Solmi; Dan Siskind; Marc De Hert; Rebekah Carney; Ai Koyanagi; André F Carvalho; Fiona Gaughran; Brendon Stubbs Journal: World Psychiatry Date: 2019-02 Impact factor: 49.548
Authors: U W Preuss; E Gouzoulis-Mayfrank; U Havemann-Reinecke; I Schäfer; M Beutel; E Hoch; K F Mann Journal: Eur Arch Psychiatry Clin Neurosci Date: 2017-04-24 Impact factor: 5.270
Authors: Paul Baker; Carol Coole; Avril Drummond; Sayeed Khan; Catriona McDaid; Catherine Hewitt; Lucksy Kottam; Sarah Ronaldson; Elizabeth Coleman; David A McDonald; Fiona Nouri; Melanie Narayanasamy; Iain McNamara; Judith Fitch; Louise Thomson; Gerry Richardson; Amar Rangan Journal: Health Technol Assess Date: 2020-09 Impact factor: 4.014