
A review of attitude research that is specific, accurate, and comprehensive within its stated scope: responses to Aarons.

Jessica Fishman, Catherine Yang, David S Mandell.

Abstract


Year:  2022        PMID: 35534829      PMCID: PMC9088109          DOI: 10.1186/s13012-022-01200-z

Source DB:  PubMed          Journal:  Implement Sci        ISSN: 1748-5908            Impact factor:   7.960


Dear Editors-in-Chief (Implementation Science): Thank you for the opportunity to respond to Dr. Aarons' letter regarding our article Attitude theory and measurement in implementation science: a secondary review of empirical studies and opportunities for advancement [1]. Dr. Aarons raises three main concerns about our review: (1) that an attribution to him, as the creator of the EBPAS, was missing; (2) whether the EBPAS measures attitudes; and (3) whether our review should have included additional studies using the EBPAS. We address each below.

First, Dr. Aarons states that we should have made an attribution to him when referencing the developers of the EBPAS. We did cite Aarons and colleagues in the version of the manuscript that was accepted for publication; it appears the journal mistakenly changed the reference. We hope that this can be rectified and thank Dr. Aarons for bringing it to our attention.

Second, we respectfully disagree with Dr. Aarons about whether the EBPAS measures attitudes. As defined in the social psychology literature from which the term emanates, an attitude towards a behavior, such as using an evidence-based practice, refers to how strongly one believes that performing that behavior would have favorable or unfavorable consequences [2-4]. In implementation science, one's attitudes towards a particular evidence-based practice would represent the perceived advantages and disadvantages of using it [2-4]. Many published methodological accounts describe how to adapt these validated measurement approaches, which differ fundamentally from the EBPAS items and response options. In the 15-item version of the EBPAS [5], almost all items deviate conceptually from an attitude. For example, several items ask respondents to report "how likely" they are to use EBP under different circumstances. In psychology, such items would be considered conceptually similar to behavioral intention, not attitudes [6, 7].

The more recent 36-item version of the EBPAS [8] also includes items that are conceptually closer to other psychological constructs. For example, the following item is conceptually related to self-efficacy: "I don't know how to fit evidence-based practice into my administrative work." We do not mean to diminish the importance of measuring constructs other than attitudes, but it is useful to distinguish between distinct psychological constructs, which play different roles in causal models predicting and changing behavior.

We also disagree about the importance of measuring attitudes towards specific behaviors rather than general categories of behaviors. EBPAS items refer to general categories of behavior, such as trying "new practices," "evidence-based practices," or "evidence-based treatment" [5, 8]. Yet, over several decades, a large attitude literature in psychology has empirically demonstrated the advantages of measuring attitudes towards a specific behavior rather than towards general categories of behavior [2-4]. Consistent with the results from psychology, the implementation science literature has started to document how practitioners' attitudes can vary greatly among evidence-based practices [9-12]. For example, we have found that therapists' attitudes vary towards different components of cognitive-behavioral therapy [10]. Given this variability, a measure of attitudes towards "evidence-based practice" or even "cognitive behavioral therapy" would sacrifice psychometric performance, including predictive validity [9-12]. Depending on the specific evidence-based practice, other psychological variables can also vary [9-12].

A related concern is that practitioners often lack familiarity with the phrase "evidence-based practice," as Dr. Aarons and colleagues have acknowledged [5, 8]. The EBPAS directions state that "evidence-based practice" refers to any intervention that is supported by "empirical research," but as Dr. Aarons and colleagues acknowledge, practitioners may still be confused, due to a lack of knowledge [5, 8]. For example, Aarons wrote, "Familiarity with the term 'evidence-based practice' among program managers was low" [5]. He added that respondents had "only a low level of familiarity with even the terminology of EBP," including the descriptor "empirically supported treatment" [5]. Additionally, practitioners may not know which practices have been designated as "evidence-based," "research-based," or "empirically supported." Depending on one's knowledge, responses to the EBPAS may differ, which is problematic if the goal is to measure attitudes.

Finally, Dr. Aarons points out that we did not include many studies that use the EBPAS. The EBPAS was featured only briefly in our review because our review was not focused on the EBPAS. Dr. Aarons suggests that our study selection was biased. We are surprised by this concern because we explicitly stated the inclusion and exclusion criteria, which drew on a rigorous systematic review authored by Aarons and colleagues [13], and we adhered to those criteria. We agree with Dr. Aarons that future reviews could change the inclusion criteria and generate a different sample of studies. Indeed, in our review [1], we called for this additional research, and we would welcome replication with a different sample. Dr. Aarons correctly notes that there are thousands of articles that could be reviewed if different inclusion criteria were used. He suggests that the studies we reviewed are not representative of all implementation studies that are concerned with attitudes. Since we lack reviews of how these other implementation studies define or measure attitudes, whether our results are representative is an open question. Implementation science has been described as "somewhat elusive" because it has not yet developed distinct construct definitions [14].

Our review documents conceptual ambiguity and suggests that a definition of attitudes (from psychology) could be useful for implementation research [1]. Our review also provides specific examples of how implementation scientists measure attitudes in ways that differ from each other and from validated approaches used in social psychology. As implementation science strives to develop standardized measurement approaches, some of the rigorously developed methods from social psychology could offer valuable scientific opportunities.
  10 in total

1.  Mental health provider attitudes toward adoption of evidence-based practice: the Evidence-Based Practice Attitude Scale (EBPAS).

Authors:  Gregory A Aarons
Journal:  Ment Health Serv Res       Date:  2004-06

2.  The Utility of Measuring Intentions to Use Best Practices: A Longitudinal Study Among Teachers Supporting Students With Autism.

Authors:  Jessica Fishman; Rinad Beidas; Erica Reisinger; David S Mandell
Journal:  J Sch Health       Date:  2018-05       Impact factor: 2.118

3.  Factors Influencing the Use of Cognitive-Behavioral Therapy with Autistic Adults: A Survey of Community Mental Health Clinicians.

Authors:  Brenna B Maddox; Samantha R Crabbe; Jessica M Fishman; Rinad S Beidas; Lauren Brookman-Frazee; Judith S Miller; Christina Nicolaidis; David S Mandell
Journal:  J Autism Dev Disord       Date:  2019-11

4.  Instrumentation issues in implementation science.

Authors:  Ruben G Martinez; Cara C Lewis; Bryan J Weiner
Journal:  Implement Sci       Date:  2014-09-04       Impact factor: 7.327

5.  The Evidence-based Practice Attitude Scale-36 (EBPAS-36): a brief and pragmatic measure of attitudes to evidence-based practice validated in US and Norwegian samples.

Authors:  Marte Rye; Elisa M Torres; Oddgeir Friborg; Ingunn Skre; Gregory A Aarons
Journal:  Implement Sci       Date:  2017-04-04       Impact factor: 7.327

6.  Variability in clinician intentions to implement specific cognitive-behavioral therapy components.

Authors:  Courtney Benjamin Wolk; Emily M Becker-Haimes; Jessica Fishman; Nicholas W Affrunti; David S Mandell; Torrey A Creed
Journal:  BMC Psychiatry       Date:  2019-12-18       Impact factor: 3.630

7.  Assessing Causal Pathways and Targets of Implementation Variability for EBP use (Project ACTIVE): a study protocol.

Authors:  Emily M Becker-Haimes; David S Mandell; Jessica Fishman; Nathaniel J Williams; Courtney Benjamin Wolk; Katherine Wislocki; Danielle Reich; Temma Schaechter; Megan Brady; Natalie J Maples; Torrey A Creed
Journal:  Implement Sci Commun       Date:  2021-12-20

8.  A systematic review of empirical studies examining mechanisms of implementation in health.

Authors:  Cara C Lewis; Meredith R Boyd; Callie Walsh-Bailey; Aaron R Lyon; Rinad Beidas; Brian Mittman; Gregory A Aarons; Bryan J Weiner; David A Chambers
Journal:  Implement Sci       Date:  2020-04-16       Impact factor: 7.327

9.  Predicting implementation: comparing validated measures of intention and assessing the role of motivation when designing behavioral interventions.

Authors:  Jessica Fishman; Viktor Lushin; David S Mandell
Journal:  Implement Sci Commun       Date:  2020-09-28

10.  Attitude theory and measurement in implementation science: a secondary review of empirical studies and opportunities for advancement. (Review)

Authors:  Jessica Fishman; Catherine Yang; David Mandell
Journal:  Implement Sci       Date:  2021-09-14       Impact factor: 7.327

