Nancy E Kass1, Holly A Taylor2, Joseph Ali3, Kristina Hallez3, Lelia Chaisson4. 1. Johns Hopkins Berman Institute of Bioethics, Baltimore, MD, USA; Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA. nkass@jhu.edu. 2. Johns Hopkins Berman Institute of Bioethics, Baltimore, MD, USA; Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA. 3. Johns Hopkins Berman Institute of Bioethics, Baltimore, MD, USA. 4. Division of Pulmonary and Critical Care Medicine, University of California San Francisco, San Francisco, CA, USA.
Abstract
BACKGROUND: Research suggests that participants do not always adequately understand the studies they enroll in. While some consent interventions increase understanding, methodologic challenges have been raised about studying consent outside of actual trial settings. This study examined the feasibility of testing two consent interventions within ongoing studies and measured the effectiveness of those interventions in improving understanding. METHODS: Participants enrolling in any of eight ongoing clinical trials were sequentially assigned to one of three informed consent strategies for enrollment in their clinical trial. Control participants received their trial's standard consent procedures. Participants in the first intervention arm received a bulleted fact sheet summarizing key study information. Participants in the second intervention arm received the bulleted fact sheet and also engaged in a feedback Q&A session. Afterward, participants answered closed- and open-ended questions to assess understanding and literacy. Descriptive statistics were generated, Wilcoxon-Mann-Whitney and Kruskal-Wallis tests were used to assess associations, and regression analysis identified predictors of understanding. RESULTS: 144 participants enrolled. In regression analysis, participants receiving the second intervention scored 7.6 percentage points higher (p = .02) on open-ended questions about understanding than participants in the control arm, although unadjusted comparisons did not reach statistical significance. CONCLUSIONS: Our study supports the hypothesis that patients receiving both bulleted fact sheets and a Q&A session had higher understanding compared to standard consent. Fact sheets and short structured dialogue are quick to administer and easy to replicate across studies, and they should be tested in larger samples.
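The comparisons described in METHODS can be sketched as follows. This is a hypothetical illustration on synthetic data, not the study's dataset: the arm sizes, score distributions, and variable names are assumptions made for the example. It shows a two-arm Wilcoxon-Mann-Whitney test, a three-arm Kruskal-Wallis test, and an ordinary-least-squares regression of understanding scores on arm indicators, analogous to the arm-effect estimate reported in RESULTS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 48  # assumed per-arm size so three arms total 144 participants

# Simulated open-ended understanding scores (0-100 scale, assumed).
control = rng.normal(70, 10, n)          # standard consent
fact_sheet = rng.normal(73, 10, n)       # bulleted fact sheet only
fact_sheet_qa = rng.normal(78, 10, n)    # fact sheet + feedback Q&A

# Wilcoxon-Mann-Whitney: control vs. fact sheet + Q&A arm.
u_stat, u_p = stats.mannwhitneyu(control, fact_sheet_qa,
                                 alternative="two-sided")

# Kruskal-Wallis: simultaneous comparison across all three arms.
h_stat, h_p = stats.kruskal(control, fact_sheet, fact_sheet_qa)

# OLS regression of score on arm indicators (control as reference),
# via numpy least squares; beta[2] estimates the second intervention's
# difference from control, analogous to the 7.6-point estimate reported.
scores = np.concatenate([control, fact_sheet, fact_sheet_qa])
arm1 = np.repeat([0.0, 1.0, 0.0], n)  # fact sheet only
arm2 = np.repeat([0.0, 0.0, 1.0], n)  # fact sheet + Q&A
X = np.column_stack([np.ones(3 * n), arm1, arm2])
beta, *_ = np.linalg.lstsq(X, scores, rcond=None)
print(u_p, h_p, beta[2])
```

In the actual study, the regression also adjusted for covariates such as literacy, which is why the adjusted arm effect reached significance while unadjusted comparisons did not.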