BACKGROUND: There has been an increase in the number of systematic reviews of diagnostic tests, which has resulted in the introduction of two checklists: the Standards for Reporting of Diagnostic Accuracy (STARD) and the Quality Assessment of Diagnostic Accuracy Studies (QUADAS). OBJECTIVE: To examine the validity and usefulness of QUADAS when applied to diagnostic accuracy studies using psychometric instruments, and to examine the quality of reporting of these studies during practical application of the checklist. METHOD: Two reviewers independently rated the quality of 54 studies using QUADAS. The proportion of agreement was used to assess overall agreement and item-by-item agreement on QUADAS items between reviewers. RESULTS: The overall agreement between the two reviewers for all QUADAS items combined was 85.7%. The proportion of agreement between reviewers for each item ranged from just over 57% to 100% and was over 80% for 8 of the items. The poorest agreement was associated with the items for selection criteria, indeterminate results and withdrawals. None of the studies adequately reported all relevant information to enable all QUADAS items to be scored as 'yes'. CONCLUSION: Overall, QUADAS was relatively easy to use and appears to be an acceptable tool for appraising the quality of diagnostic accuracy studies using psychometric instruments. The application of QUADAS was hampered by the poor quality of reporting encountered.
Authors: Patrick M Bossuyt; Johannes B Reitsma; David E Bruns; Constantine A Gatsonis; Paul P Glasziou; Les M Irwig; David Moher; Drummond Rennie; Henrica C W de Vet; Jeroen G Lijmer Journal: Ann Intern Med Date: 2003-01-07 Impact factor: 25.391
Authors: Dominic T S Lee; Alexander S K Yip; Sandra S M Chan; Michelle H Y Tsui; W S Wong; Tony K H Chung Journal: Psychosom Med Date: 2003 May-Jun Impact factor: 4.312
Authors: William C Becker; Liana Fraenkel; E Jennifer Edelman; Stephen R Holt; Janis Glover; Robert D Kerns; David A Fiellin Journal: Pain Date: 2013-03-14 Impact factor: 6.961
Authors: Yong-Bo Hu; Chun-Bo Li; Ning Song; Yang Zou; Sheng-Di Chen; Ru-Jing Ren; Gang Wang Journal: Front Aging Neurosci Date: 2016-02-09 Impact factor: 5.750
Authors: Katharine Bosanquet; Della Bailey; Simon Gilbody; Melissa Harden; Laura Manea; Sarah Nutbrown; Dean McMillan Journal: BMJ Open Date: 2015-12-09 Impact factor: 2.692