
Research participation effects: a skeleton in the methodological cupboard.

Jim McCambridge, Kypros Kypri, Diana Elbourne.

Abstract

OBJECTIVE: There have been concerns about impacts of various aspects of taking part in research studies for a century. The concerns have not, however, been sufficiently well conceptualized to form traditions of study capable of defining and elaborating the nature of these problems. In this article we present a new way of thinking about a set of issues attracting long-standing attention.
STUDY DESIGN AND SETTING: We briefly review existing concepts and empirical work on well-known biases in surveys and cohort studies and propose that they are connected.
RESULTS: We offer the construct of "research participation effects" (RPE) as a vehicle for advancing multi-disciplinary understanding of biases. Empirical studies are needed to identify conditions in which RPE may be sufficiently large to warrant modifications of study design, analytic methods, or interpretation. We consider the value of adopting a more participant-centred view of the research process as a way of thinking about these issues, which may also have benefits in relation to research methodology more broadly.
CONCLUSION: Researchers may too readily overlook the extent to which research studies are unusual contexts, and that people may react in unexpected ways to what we invite them to do, introducing a range of biases.
Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

Keywords:  Bias; Cohort studies; Hawthorne effect; Mixed methods; Research assessment; Research methods; Research participation; Surveys

Year:  2014        PMID: 24766858      PMCID: PMC4236591          DOI: 10.1016/j.jclinepi.2014.03.002

Source DB:  PubMed          Journal:  J Clin Epidemiol        ISSN: 0895-4356            Impact factor:   6.437


“Research participation effects” offer a new way of thinking about poorly understood sources of bias in surveys and cohort studies, and also in trials. Research studies are unusual contexts, and people may react in unexpected ways to what we invite them to do. Adopting the perspective of the participant suggests that existing well-known sources of bias may be connected to each other. Mixed methods participant-centred research may lead to better prevention of bias.

The construct of “research participation effects” (RPE) has been proposed to better guide empirical investigation of issues previously conceptualized as the Hawthorne effect [1]. We have also elaborated overlooked implications for behavioral intervention trials, identifying mechanisms by which bias may be introduced that randomization does not prevent [2]. This discussion considers the wider implications of RPE for thinking about bias, particularly addressing existing thinking about bias in surveys and cohort studies. New ways of understanding biases provide platforms for important advances in research design and methods. For example, Solomon [3] identified that the discovery of “pre-test sensitisation”, whereby measuring individual psychology or behavior at one point in time biased later measurement of the same characteristics, led to the introduction of control groups within the behavioral sciences. Chalmers [4] identified allocation concealment to prevent selection bias as the primary motivation for the use of randomization in the original streptomycin trial, and has suggested that addressing biases resulting from patient preferences may provide the next historical milestone in the development of trials methodology. Just as patients may prefer allocation to one arm of a clinical trial over another, people may react to whatever it is they are requested to do in the context of research.
These reactions have the potential to affect study outcomes in ways that undermine the validity of the inferences the research was designed to permit. A few years after the Hawthorne effect made its debut in the scientific literature [5], the concept of “demand characteristics” was introduced to psychology [6]. This referred to the ways in which study participants responded to their perceptions of the implicit preferences of researchers, tailoring their responses so as to be good subjects. Like the Hawthorne effect, this construct, although well known, has contributed disappointingly little to the methodological literature [7]. The unintended effects of research assessments have also received attention other than when conceptualized as the Hawthorne effect. Randomized evaluation studies often show small effects, though there are inconsistencies [8], [9], [10], [11], [12]. Change due to having been assessed, having views about the desirability of different possible research requirements, and deliberately or unwittingly trying to satisfy researchers are all consequences of research participation. The interaction of the research participant with the research process is discernible as a common thread running through these examples. The consequences of research participation may vary in strength across study designs, participants, topic areas, and the contexts in which research is done, and according to more specific features of the studies themselves.

Well-established biases in surveys and cohort studies

Ensuring adequate response rates, that is securing participation itself, is widely established as a key issue in survey design [13]. Evidence has accumulated over decades on how to do this [14], and in a context of falling response rates there has been extensive research on the implications of non-response for the estimation of prevalence and other parameters of interest in general household surveys [13]. There has also been much study of reporting errors made by participants in surveys, which draws attention to the sensitivity of the particular behavior or issue being enquired about [15]. This literature also distinguishes between task-related errors that are technical products of survey design, and motivated responses, for example, in the form of self-deception and impression management [16]. Thus in surveys, biases associated with research participation apply both to the decision to take part and to the accuracy of information provided. These biases may be conceptualized in many ways and often are thought about differently across disciplines and over time [17]. In a prospective cohort or longitudinal study [18], repeated data collection permits consequences of research participation to manifest themselves in altered behavior, cognitions, or emotions [12]. As Solomon [3] described, it is possible for inferences about data collected at one time point to be biased simply because of earlier data collection. This complication is more likely to occur, and is more likely to be problematic, in certain circumstances (see below). Some outcomes cannot be influenced by reactivity to evaluation, for example, where data collection is unobtrusive [19]. Asking someone how often they ride a bicycle may increase cycling in some circumstances and not others. It can only do so if the causal pathway to this outcome involves behavior that can be modified by this procedure [20]. 
For example, if a study participant owns a bicycle and is asked about their cycling behavior or views about cycling in a cohort study of health and lifestyle, they might think further about cycling, and might cycle more frequently as a result. This would artificially inflate levels of cycling in the cohort. If the study participant does not have access to a bicycle, this is less likely to occur unless they first acquire the means to start cycling. Asking about cycling in a different context may also reduce the likelihood of this occurring. The psychological processes involved are not important here; the point is that the more such effects occur, the more they may undermine the objectives of the study by introducing bias. This problem may not emanate only from the content of data collection. Participants may have read the consent form carefully and thought about their health and lifestyle before deciding whether or not to take part. A cohort study is thus vulnerable to both the possible reporting and participation problems previously described for cross-sectional surveys, at both study entry and at follow-up. Additionally, actual change in the behavior being investigated may have been induced. Change in the object of the evaluation influenced by any aspect of research participation entails bias, regardless of how it has been produced. This is so unless such influences can be assumed not to vary over time with repeated measurements, which is rarely a safe assumption. Randomized controlled trials are cohort studies with randomization, and as such are vulnerable both to the previously described problems and to additional ones associated with randomization [2]. This implies problems in making valid inferences from research data that afflict all study designs. These problems are mostly, but not all, very well known.
What is novel about this presentation is the suggestion that they are linked, and by extension that conceptualizing them in this way as RPE may lead to better understanding of methodological problems.
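The cycling example can be made concrete with a toy simulation (illustrative only, not from the article; the owner rate, reactivity probability, and size of the behavioral boost are all arbitrary assumptions). If baseline assessment nudges some bicycle owners to cycle more, the cohort's follow-up estimate of cycling frequency overstates the true population value:

```python
import random

random.seed(42)

def simulate(n=100_000, owner_rate=0.5, reactivity=0.3, boost=1.0):
    """Toy cohort: mean weekly cycling frequency with and without
    assessment reactivity among bicycle owners (all parameters assumed)."""
    true_total = 0.0
    observed_total = 0.0
    for _ in range(n):
        owns_bike = random.random() < owner_rate
        # Non-owners cannot cycle; owners cycle 0-4 times per week.
        base = random.uniform(0, 4) if owns_bike else 0.0
        true_total += base
        # Baseline assessment prompts a fraction of owners to cycle more.
        reacted = owns_bike and random.random() < reactivity
        observed_total += base + (boost if reacted else 0.0)
    return true_total / n, observed_total / n

true_mean, observed_mean = simulate()
print(f"true mean rides/week:     {true_mean:.2f}")
print(f"observed mean rides/week: {observed_mean:.2f}")
# Expected bias is roughly owner_rate * reactivity * boost = 0.15
print(f"bias from reactivity:     {observed_mean - true_mean:.2f}")
```

The point of the sketch is that the bias depends on who can react (only owners) and how strongly, matching the article's argument that reactivity requires a modifiable causal pathway to the outcome.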

A research participant-centred perspective

Different types of studies make different requests of, and place different demands on, their participants. There is nonetheless a core sequence of early events involving both a recruitment and baseline assessment phase, as presented in Fig. 1 for a typical individually randomized trial. We have found this a useful vehicle for thinking through the potential for RPE. For those who continue to participate over time, our lack of attention to the possible impact of the research process might imply that it is inert [12] and perhaps also that participants are somehow passive in this sequence. Fig. 1 provides a brief description of what we usually do to or with the people who become our research participants and in which order. It offers no information on participant characteristics or how or why they may matter to RPE. We suggest there is a prima facie case that reasons for participation, severity of problems or views about the issue being investigated, susceptibility to social desirability or monitoring effects, and readiness for change can all have a bearing on whether any of this process will impact on participants. These intrapersonal features might be expected to engage dynamically in the interpersonal process through which research participation is enacted. Research questions might address any of these targets for study.
Fig. 1

The research process. Cross-sectional surveys end with baseline assessment, cohort studies also involve follow-up assessment(s), and only RCTs involve randomization to study conditions as described previously. RCT, randomized controlled trial.

Adopting a more participant-centred view of the research process [21] might first consider the nature of the decision-making involved in taking part in research [22]. Altruism has long been considered as the primary reason why people take part in most types of research [23]. Being disinterested in implications for self would appear to make RPE less likely, perhaps unless the research provides an unexpected stimulus for more personal introspection. More recent thinking has pointed toward more qualified versions of altruism, termed weak [24] or conditional [25] altruism, whereby a process of evaluation of the implications for oneself accompanies the motivation to help others in making decisions to take part in research. Such conditionality may be more likely in some circumstances than others. Trials and other intervention studies probably also attract those seeking interventions who are less altruistically minded for understandable reasons. Such a spectrum of reasons for participation may have implications for the generation of RPE, with less altruistic reasons more likely to generate RPE. There is little literature on participant reasons for continuation in research studies over time [26] and it may be profitable to pay attention to other influences on ongoing participation in cohort studies and trials [27]. Qualitative studies should be useful in identifying targets for study. There are studies available on many of the aspects of the research process already described, including for example how much prospective participants read and engage with provided study information [28], [29].
Study of preferences (see [30]) is another area where qualitative methods have uncovered problems within the largely quantitative endeavor that is randomized controlled trials. Preferences for allocation in trials have not only been found to exist, but also to be quite dynamic over time and capable of being influenced by dedicated interventions [31]. There are, however, no studies that evaluate individual participant-level qualitative data and also explore the possible implications for bias at the quantitative study level [27]. This is probably because there has not been an explicit effort to apply the type of conceptualization suggested here, which links qualitative and quantitative data at the individual and study levels. Beyond investigations of the acceptability of research procedures to prospective participants, there has been no programmatic approach to studying the effects of apparently mundane aspects of taking part in research. We offer an example demonstrating that it is not difficult to do these types of studies, or for participants to discuss their engagement with the research process: a qualitative study showing how thwarted preferences for allocation to a novel intervention led to disappointment and subsequently to movements both toward and away from change in a weight-loss trial [32]. This situation is perhaps not dissimilar to the 30-year tradition of study of participant cognitive engagement with surveys, where much quantitative and qualitative data have been used to enhance the content of particular surveys, but have yielded disappointing progress in methodology for questionnaire design [33]. Our perspective suggests that unrecognized potential for bias resides in routine research practice.
We acknowledge that this calls for a type of mixed methods orientation [34] in which the core concepts and issues are framed as in quantitative research, and a qualitative phenomenological approach is used to identify possible problems, which may in turn be further evaluated in quantitative studies. The post-positivist concern for bias adopted here may be unsatisfactory to some qualitative researchers who have epistemological differences with such an approach [35]. This may also be unfamiliar territory for many readers of an epidemiology journal, but we suggest it is useful to explore for new insights into the nature of biases. In Box 1 we offer some suggestions for helpful questions to ask in a given study, and for developing this type of research more widely.

Box 1
Why are participants taking part in this study?
What does taking part in this study mean for participants?
Why do participants behave as they do in this study?
How does what participants do affect any concerns about bias?
How far are the most likely sources of bias connected in this study?
Is existing thinking about bias adequate for the methodological problems faced here?
How might existing thinking about bias be extended to address methodological problems not well covered?
What can qualitative or quantitative data contribute to a better understanding of these issues?
How can qualitative and quantitative data be combined to address research participation effects?
How can the construct of research participation effects be developed to guide more advanced study?

Conclusion

The potential for RPE may be intrinsic to all human research designs, though there are probably many areas where they can be safely ignored as unlikely to threaten valid inference. There are other domains of research where they certainly cannot be ignored. The problem is that we do not know where this is the case, and therefore further conceptual work and empirical studies elaborating these issues are needed. We suggest that conventionally understood forms of bias found in cross-sectional surveys and cohort studies are also interpretable as RPE. Furthermore, this preliminary conceptualization may be fruitful for creative thinking about biases and how to minimize them in designing research studies. RPE are unwittingly created by the decisions made by researchers; paying attention to the practices of researchers and approaching research on the research enterprise more sociologically [36] will also be useful. Because of their origins in the decisions made by researchers, RPE may be amenable to control in design, or in analysis if prevention is not possible. Although we have known something of RPE for around 100 years [3], it will be disappointing if future progress is as slow as in the past. Perhaps this is partly because they call attention to unresolved and difficult-to-resolve issues in the relationship between quantitative and qualitative research approaches and data. RPE are nonetheless a skeleton in the methodological cupboard that deserves a decent burial.
References (25 in total)

1.  Forging convictions: the effects of active participation in a clinical trial.

Authors:  Clare Scott; Jan Walker; Peter White; George Lewith
Journal:  Soc Sci Med       Date:  2011-05-18       Impact factor: 4.634

Review 2.  Are we missing anything? Pursuing research on attrition.

Authors:  Lenora Marcellus
Journal:  Can J Nurs Res       Date:  2004-09

Review 3.  Patient preferences in randomised controlled trials: conceptual framework and implications for research.

Authors:  Peter Bower; Michael King; Irwin Nazareth; Fiona Lampe; Bonnie Sibbald
Journal:  Soc Sci Med       Date:  2005-02-17       Impact factor: 4.634

4.  Social desirability biases in self-reported alcohol consumption and harms.

Authors:  Christopher G Davis; Jennifer Thake; Natalie Vilhena
Journal:  Addict Behav       Date:  2009-11-10       Impact factor: 3.913

5.  Development of a complex intervention improved randomization and informed consent in a randomized controlled trial.

Authors:  Jenny L Donovan; J Athene Lane; Tim J Peters; Lucy Brindle; Elizabeth Salter; David Gillatt; Philip Powell; Prasad Bollina; David E Neal; Freddie C Hamdy
Journal:  J Clin Epidemiol       Date:  2008-07-10       Impact factor: 6.437

Review 6.  The effects of demand characteristics on research participant behaviours in non-laboratory settings: a systematic review.

Authors:  Jim McCambridge; Marijn de Bruin; John Witton
Journal:  PLoS One       Date:  2012-06-19       Impact factor: 3.240

Review 7.  Can research assessments themselves cause bias in behaviour change trials? A systematic review of evidence from Solomon 4-group studies.

Authors:  Jim McCambridge; Kaanan Butor-Bhavsar; John Witton; Diana Elbourne
Journal:  PLoS One       Date:  2011-10-19       Impact factor: 3.240

Review 8.  Can simply answering research questions change behaviour? Systematic review and meta-analyses of brief alcohol intervention trials.

Authors:  Jim McCambridge; Kypros Kypri
Journal:  PLoS One       Date:  2011-10-05       Impact factor: 3.240

9.  In randomization we trust? There are overlooked problems in experimenting with people in behavioral intervention trials.

Authors:  Jim McCambridge; Kypros Kypri; Diana Elbourne
Journal:  J Clin Epidemiol       Date:  2013-12-04       Impact factor: 6.437

Review 10.  Methods to increase response to postal and electronic questionnaires.

Authors:  Philip James Edwards; Ian Roberts; Mike J Clarke; Carolyn Diguiseppi; Reinhard Wentz; Irene Kwan; Rachel Cooper; Lambert M Felix; Sarah Pratap
Journal:  Cochrane Database Syst Rev       Date:  2009-07-08
Cited by (48 in total)

1.  Analysis of threats to research validity introduced by audio recording clinic visits: Selection bias, Hawthorne effect, both, or neither?

Authors:  Stephen G Henry; Anthony Jerant; Ana-Maria Iosif; Mitchell D Feldman; Camille Cipri; Richard L Kravitz
Journal:  Patient Educ Couns       Date:  2015-03-17

2.  Protocol design for large-scale cross-sectional studies of sexual abuse and associated factors in individual sports: feasibility study in Swedish athletics.

Authors:  Toomas Timpka; Staffan Janson; Jenny Jacobsson; Joakim Ekberg; Örjan Dahlström; Jan Kowalski; Victor Bargoria; Margo Mountjoy; Carl G Svedin
Journal:  J Sports Sci Med       Date:  2015-03-01       Impact factor: 2.988

Review 3.  Understanding the Hawthorne effect in wound research-A scoping review.

Authors:  Van Nb Nguyen; Charne Miller; Janine Sunderland; William McGuiness
Journal:  Int Wound J       Date:  2018-08-22       Impact factor: 3.315

4.  PNF 2.0? Initial evidence that gamification can increase the efficacy of brief, web-based personalized normative feedback alcohol interventions.

Authors:  Sarah C Boyle; Andrew M Earle; Joseph W LaBrie; Daniel J Smith
Journal:  Addict Behav       Date:  2016-12-02       Impact factor: 3.913

5.  Exploring Patient Engagement: A Qualitative Analysis of Low-Income Urban Participants in Asthma Research.

Authors:  Amy Korwin; Heather Black; Luzmercy Perez; Knashawn H Morales; Heather Klusaritz; Xiaoyan Han; Jingru Huang; Marisa Rogers; Grace Ndicu; Andrea J Apter
Journal:  J Allergy Clin Immunol Pract       Date:  2017-05-10

6.  Making the most of video recorded clinical encounters: Optimizing impact and productivity through interdisciplinary teamwork.

Authors:  Stephen G Henry; Anne Elizabeth Clark White; Elizabeth M Magnan; Eve Angeline Hood-Medland; Melissa Gosdin; Richard L Kravitz; Peter Joseph Torres; Jennifer Gerwing
Journal:  Patient Educ Couns       Date:  2020-06-03

7.  Substance use and HIV-risk behaviors among HIV-positive men who have sex with men in China: repeated measures in a cohort study design.

Authors:  Chen Zhang; Yu Liu; Xiaoyun Sun; Juan Wang; Hong-Yan Lu; Xiong He; Heng Zhang; Yu-Hua Ruan; Yiming Shao; Sten H Vermund; Han-Zhu Qian
Journal:  AIDS Care       Date:  2016-11-10

8.  The effectiveness of brief alcohol interventions delivered by community pharmacists: randomized controlled trial.

Authors:  Ranjita Dhital; Ian Norman; Cate Whittlesea; Trevor Murrells; Jim McCambridge
Journal:  Addiction       Date:  2015-07-14       Impact factor: 6.526

9.  The Impact of Asking About Interest in Free Nicotine Patches on Smoker's Stated Intent to Change: Real Effect or Artefact of Question Ordering?

Authors:  John A Cunningham; Vladyslav Kushnir; Jim McCambridge
Journal:  Nicotine Tob Res       Date:  2015-08-09       Impact factor: 4.244

10.  A Population-Level, Randomized Effectiveness Trial of Recruitment Strategies for Parenting Programs in Elementary Schools.

Authors:  Michelle Abraczinskas; Emily B Winslow; Krista Oswalt; Kelly Proulx; Jenn-Yun Tein; Sharlene Wolchik; Irwin Sandler
Journal:  J Clin Child Adolesc Psychol       Date:  2020-01-07
