OBJECTIVES: To assess inter-rater reliability and validity of the Newcastle-Ottawa Scale (NOS) used for methodological quality assessment of cohort studies included in systematic reviews. STUDY DESIGN AND SETTING: Two reviewers independently applied the NOS to 131 cohort studies included in eight meta-analyses. Inter-rater reliability was calculated using kappa (κ) statistics. To assess validity, within each meta-analysis, we generated a ratio of pooled estimates for each quality domain. Using a random-effects model, the ratios of odds ratios for each meta-analysis were combined to give an overall estimate of differences in effect estimates. RESULTS: Inter-rater reliability varied from substantial for length of follow-up (κ = 0.68, 95% confidence interval [CI] = 0.47, 0.89) to poor for selection of the nonexposed cohort and demonstration that the outcome was not present at the outset of the study (κ = -0.03, 95% CI = -0.06, 0.00; κ = -0.06, 95% CI = -0.20, 0.07). Reliability for the overall score was fair (κ = 0.29, 95% CI = 0.10, 0.47). In general, reviewers found the tool difficult to use and the decision rules vague, even with additional information provided as part of this study. We found no association between individual items or overall score and effect estimates. CONCLUSION: Variable agreement and lack of evidence that the NOS can identify studies with biased results underscore the need for revisions and more detailed guidance for systematic reviewers using the NOS.
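The kappa statistics reported above quantify agreement between the two reviewers beyond what chance alone would produce: κ = (p_o − p_e) / (1 − p_e), where p_o is the observed proportion of agreement and p_e is the agreement expected from each rater's marginal rating frequencies. A minimal sketch of Cohen's kappa, using hypothetical yes/no judgments for a single NOS item (the ratings and study count below are illustrative, not data from the study):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical ratings of the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of items both raters scored identically.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    pe = sum(freq_a[c] * freq_b[c] for c in freq_a.keys() | freq_b.keys()) / n**2
    return (po - pe) / (1 - pe)

# Hypothetical ratings (1 = criterion met, 0 = not met) across ten studies.
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 1, 1, 1]
print(round(cohens_kappa(a, b), 2))  # 0.52: "moderate" agreement
```

Because κ discounts chance agreement, two raters who agree on 8 of 10 studies (80% raw agreement) can still score only moderately, as here; this is why items with skewed marginals, such as selection of the nonexposed cohort in the study above, can yield near-zero or negative κ despite high raw agreement.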