Stephen Bent, Amy Padula, Andrew L Avins. University of California and San Francisco Veterans Administration Medical Center, San Francisco, California 94121, USA. bent@itsa.ucsf.edu
Abstract
BACKGROUND: There is no standard method of identifying adverse events in clinical trials.
OBJECTIVE: To determine whether 3 different methods of questioning patients about adverse events in a clinical trial affect the frequency of reported events.
DESIGN: Randomized, single-blind, controlled trial.
SETTING: A Veterans Administration medical center, San Francisco, California.
PARTICIPANTS: Men 50 years of age or older who had benign prostatic hyperplasia.
MEASUREMENT: Frequency of self-reported medical problems.
INTERVENTION: The authors randomly assigned 214 men who were undergoing a 1-month, single-blind, placebo run-in period during an existing clinical trial to 3 groups to test different self-administered methods of assessing medical problems at the end of the run-in period. The first group was asked an open-ended question; the second group was asked an open-ended, defined question; and the third group was given a checklist of 53 common side effects.
RESULTS: All 214 patients completed the study. Patients assigned to the checklist group reported a total of 238 adverse events; in comparison, patients who were asked an open-ended question or an open-ended, defined question reported 11 and 14 adverse events, respectively (P < 0.001). The percentage of patients reporting any adverse event was also much higher in the group assigned to the checklist (77%) than in the first group (14%) or second group (13%) (P < 0.001).
LIMITATIONS: The study included only relatively healthy, well-educated, middle-aged men and assessed only self-reported medical problems after the participants had taken placebo for 1 month. All personnel overseeing the study were aware of the group assignments.
CONCLUSIONS: Different methods of collecting patient data regarding adverse events lead to large differences in the reported rates of adverse events in clinical trials, potentially reducing the validity of comparisons between the side effect profiles of drugs and other interventions.