| Literature DB >> 25239474 |
Steven D Targum, J Cara Pendergrass, Chelsea Toner, Mahnaz Asgharnejad, Daniel J Burch.
Abstract
Signal detection requires ratings reliability throughout a clinical trial. Confirmation of site-based rater scores by a second, independent, blinded rater is a reasonable metric of ratings reliability. We used audio-digital pens to record site-based interviews using the Montgomery-Asberg Depression Rating Scale (MADRS) in a double-blind, placebo-controlled trial of a novel antidepressant in patients with treatment-resistant depression. Blinded, site-independent raters generated "dual" scores that revealed high correlations between site-based and site-independent raters (r=0.940 for all ratings) and high sensitivity, specificity, predictive values, and kappa coefficients for treatment response and non-response outcomes, using the site-based rater scores as the standard. The blinded raters achieved 89.4% overall accuracy and a kappa of 0.786 for matching the treatment response or non-response outcomes of the site-based raters. A limitation of this method is that independent ratings depend on the quality of the site-based interviews and on patient responses to the site-based interviewers. Nonetheless, this quality assurance strategy may have broad applicability for studies that use subjective measures and wherever ratings reliability is a concern. "Dual" scoring of recorded site-based ratings can be a relatively unobtrusive surveillance strategy to confirm scores and to identify and remediate rater "outliers" during a study.
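To make the reported agreement statistics concrete, the sketch below computes sensitivity, specificity, overall accuracy, and Cohen's kappa from a 2x2 responder/non-responder confusion matrix. The counts in the example are hypothetical illustrations, not the study's data; only the formulas follow the standard definitions of these metrics.

```python
def agreement_stats(tp, fn, fp, tn):
    """Agreement of independent raters against site-based scores as the standard.

    tp: both call responder
    fn: site-based responder, independent rater calls non-responder
    fp: site-based non-responder, independent rater calls responder
    tn: both call non-responder
    """
    n = tp + fn + fp + tn
    sensitivity = tp / (tp + fn)      # responders correctly confirmed
    specificity = tn / (tn + fp)      # non-responders correctly confirmed
    accuracy = (tp + tn) / n          # overall percent agreement
    # Cohen's kappa: observed agreement corrected for chance agreement,
    # with expected agreement taken from the marginal rates of each rater.
    p_obs = accuracy
    p_exp = ((tp + fn) * (tp + fp) + (fp + tn) * (fn + tn)) / n ** 2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sensitivity, specificity, accuracy, kappa

# Hypothetical example: 45/5/5/45 yields 90% agreement and kappa = 0.8.
sens, spec, acc, kappa = agreement_stats(45, 5, 5, 45)
```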
Keywords: Audio-digital recording; Inter-rater reliability; Major Depressive Disorder; Predictive value; Ratings precision; Site-independent ratings
Year: 2014 PMID: 25239474 DOI: 10.1016/j.euroneuro.2014.08.016
Source DB: PubMed Journal: Eur Neuropsychopharmacol ISSN: 0924-977X Impact factor: 4.600