Cássia Leal da Hora, Ana Carolina Sella.
Abstract
Recommendations for using evidence-based practices (EBPs) have become increasingly common in services for individuals diagnosed with autism spectrum disorder (ASD). The aim of this study was to conduct a narrative literature review to identify differences and similarities in the evidence-evaluation criteria for group and single-subject designs that empirically support interventions for people with ASD. The data sources for this analysis were reports and articles produced by different clearinghouses (i.e., the National Autism Center, the National Professional Development Center, and the National Clearinghouse on Autism Evidence and Practice). The evidence-evaluation criteria defined in these documents contained specific components or quality indicators for each type of study design. The different criteria for evaluating evidence and for classifying interventions (once evidence was evaluated) were identified and described. This manuscript discusses the need for (a) expanding the analysis beyond the evidence identified by different researchers and organizations such as the clearinghouses, (b) proposing interventions that are based not only on scientific evidence but also on social validity, which is guided by client idiosyncrasies, and (c) attending to the fact that EBPs should not be seen as static information about interventions with empirical support: evidence-based practices are the result of constant analysis of intervention implementation data combined with professional training and client values and context. Some additional issues and the study's limitations are also presented.
Keywords: Autism spectrum disorder; Evidence-based practices; Social validity
Year: 2022 PMID: 35857210 PMCID: PMC9300806 DOI: 10.1186/s41155-022-00213-3
Source DB: PubMed Journal: Psicol Reflex Crit ISSN: 0102-7972
Types and definitions of evidence classification and criterion used by each systematic review
| Systematic reviews | Evidence classification/category | Definition | Criterion of empirical support |
|---|---|---|---|
| NAC (2009, 2015) | Established | Sufficient evidence is available to confidently determine that an intervention produces favorable outcomes for individuals on the autism spectrum. That is, these interventions are established as effective | (a) ≥ 2 GD or 4 SSD studies with ≥ 12 participants for which there are no conflicting results, or at least 3 GD or 6 SSD studies with a minimum of 18 participants and no more than 10% of studies reporting conflicting results. (b) GD and SSD may be combined. (c) Peer-reviewed studies with SMRS scores of 3, 4, or 5 pts. (d) Beneficial intervention effects for a specific target. (e) May be supplemented by studies with lower SMRS scores |
| NAC (2009, 2015) | Emerging | Although one or more studies suggest that an intervention produces favorable outcomes for individuals with ASD, additional HQ studies must consistently show this outcome before firm conclusions can be drawn about intervention effectiveness | (a) ≥ 2 GD studies or 2 SSD studies with ≥ 6 participants, with no more than 10% of studies reporting conflicting results. Conflicting results are reported when a better or equally controlled study assigned a score of ≥ 3 reports either ineffective or adverse intervention effects. (b) GD and SSD may be combined. (c) Peer-reviewed studies with SMRS scores of 2 pts. (d) Beneficial intervention effects reported for one DV for a specific target. (e) May be supplemented by studies with lower SMRS scores |
| NAC (2009, 2015) | Unestablished | There is little or no evidence to allow firm conclusions about intervention effectiveness with individuals with ASD. Additional research may show the intervention to be effective, ineffective, or harmful | (a) May or may not be based on research. (b) Beneficial intervention effects reported based on very poorly controlled studies (scores of 0 or 1 on the SMRS). (c) Claims based on testimonials, unverified clinical observations, opinions, or speculation. (d) Ineffective, unknown, or adverse intervention effects reported based on poorly controlled studies |
| NPDC (2010, 2014) | Evidence-based practice (EBP) | An intervention is identified as an EBP if supported by the number of studies specified in the “criterion” column | (a) 2 HQ experimental or quasi-experimental design studies conducted by 2 different research groups, or (b) 5 HQ SSD studies conducted by 3 different research groups and involving a total of 20 participants across studies, or (c) a combination of research designs that includes at least 1 HQ experimental/quasi-experimental design and 3 HQ SSDs, conducted by more than one researcher or research group |
| NPDC (2010, 2014) | Other practices with some empirical support | Some practices had empirical support from the research literature but were not identified as EBPs because they did not meet the established criteria | Subdivided into (1) idiosyncratic behavioral intervention packages: behavioral packages not replicated across studies (i.e., combinations of EBPs and other practices to create interventions addressing a participant’s individual and unique goals) and (2) other practices with empirical support: focused interventions for which there was an insufficient number of studies documenting efficacy, a sufficient number of acceptable studies conducted by only one research group, or a sufficient number of SSD studies but an insufficient total number of participants across studies |
| NCAEP (2020) | Evidence-based practice (EBP) | Interventions that have clear evidence of positive effects with children and youth with ASD | (a) ≥ 2 HQ GD studies conducted by at least 2 different researchers or research groups, or (b) 5 HQ SSD studies conducted by 3 different investigators or research groups and having a total of at least 20 participants across studies, or (c) 1 HQ GD study and at least 3 HQ SSD studies conducted by at least 2 different investigators or research groups (across the group and single-case design studies) |
| NCAEP (2020) | Manualized interventions meeting criteria (MIMC) | Interventions that (a) are manualized, (b) have unique features that create an intervention identity, and (c) share common features with other practices grouped within the EBP classification | |
| NCAEP (2020) | Practices with some empirical support | Focused intervention practices that did not yet have sufficient evidence to meet criteria for an EBP but had some empirical support | Not meeting EBP criteria especially because there was an insufficient number of HQ studies providing support, too few participants, or only one researcher or research group |
GD group design, SSD single-subject design, SMRS Scientific Merit Rating Scale, HQ high-quality, DV dependent variable
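The numeric thresholds in the table above amount to simple decision rules, so they can be expressed as boolean checks. The following is a minimal sketch, not taken from the clearinghouse reports themselves: function and parameter names are illustrative, and the rules condense only the study-count, participant-count, and research-group thresholds (not the SMRS scoring or peer-review requirements).

```python
def meets_npdc_ebp_criteria(hq_group_studies: int,
                            hq_ssd_studies: int,
                            research_groups: int,
                            ssd_participants: int) -> bool:
    """Sketch of the NPDC (2014) / NCAEP (2020) EBP thresholds."""
    # (a) >= 2 high-quality group design studies by >= 2 research groups
    if hq_group_studies >= 2 and research_groups >= 2:
        return True
    # (b) >= 5 high-quality SSD studies by >= 3 research groups,
    #     with >= 20 participants in total across studies
    if hq_ssd_studies >= 5 and research_groups >= 3 and ssd_participants >= 20:
        return True
    # (c) combined designs: >= 1 HQ group design study plus >= 3 HQ SSDs,
    #     conducted by more than one researcher or research group
    if hq_group_studies >= 1 and hq_ssd_studies >= 3 and research_groups > 1:
        return True
    return False


def meets_nac_established(group_studies: int,
                          ssd_studies: int,
                          participants: int,
                          conflicting_fraction: float) -> bool:
    """Sketch of the NAC 'Established' study-count thresholds."""
    # Path 1: >= 2 GD or >= 4 SSD studies, >= 12 participants, no conflicts
    path1 = ((group_studies >= 2 or ssd_studies >= 4)
             and participants >= 12 and conflicting_fraction == 0)
    # Path 2: >= 3 GD or >= 6 SSD studies, >= 18 participants,
    #         <= 10% of studies reporting conflicting results
    path2 = ((group_studies >= 3 or ssd_studies >= 6)
             and participants >= 18 and conflicting_fraction <= 0.10)
    return path1 or path2
```

For example, under this sketch five HQ single-subject studies from three research groups qualify as an NPDC/NCAEP EBP only once the pooled participant count reaches 20.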
Documents/authors, year of publication, document title, and types of intervention that were analyzed
| Organization/authors | Year | Title | Types of intervention |
|---|---|---|---|
| National Autism Center (NAC) | 2009 | National Standards Report | Focused and comprehensive |
| National Professional Development Center (NPDC; Odom et al.) | 2010 | Evidence-based practices for children, youth, and young adults with ASD | Focused |
| National Professional Development Center (NPDC; Wong et al.) | 2014 | Evidence-based practices for children, youth, and young adults with ASD | Focused |
| National Autism Center (NAC) | 2015 | Findings and conclusions: National Standards Project, phase 2 | Focused and comprehensive |
| The National Clearinghouse on Autism Evidence & Practice (NCAEP; Steinbrenner et al.) | 2020 | Evidence-based practices for children, youth, and young adults with autism | Focused |
Period, number of years and months, and total number of EBPs described in each review
| Document | Period included in review | No. of years and months | Total number of years and monthsa | Total no. of EBPs |
|---|---|---|---|---|
| NAC (2009) | Jan 1957 to Sep 2009 | 51 y, 9 m | 51 y, 9 m | 11 |
| NPDC (Odom et al., 2010) | Jan 1997 to Dec 2007 | 10 y | 10 y | 24 |
| NPDC (Wong et al., 2014) | Jan 1990 to Dec 2011 | 11 y | 21 y | 27 |
| NAC (2015) | Sep 2009 to Feb 2012 | 2 y, 5 m | 54 y, 2 m | 14 |
| NCAEP (Steinbrenner et al., 2020) | Jan 2012 to Dec 2017 | 5 y | 26 y | 28 |
aSum of the time range covered in the initial plus further reviews by each clearinghouse
Study designs found in each clearinghouse review
| Types of study design | NAC (2009, 2015)a | NPDC (2010, 2014) | NCAEP (2020) |
|---|---|---|---|
| Randomized controlled trial (RCT) | | x | x |
| Sequential multiple assignment randomized trial (SMART) | | | x |
| Quasi-experimental design (QED) | | x | x |
| Regression discontinuity designs (RDD) | | x | x |
| Withdrawal of treatment (ABAB) | x | x | x |
| Concurrent multiple baseline | x | x | x |
| Multiple probe | x | x | x |
| Alternating treatments | x | x | x |
| Changing criterion design | x | x | x |
aIn NAC’s 2009 and 2015 reports, the types of group designs that composed the reviewed sample were not specified
Total number of studies with group and single-subject designs included in each review
| Document | Studies with group designs | Studies with single-subject designs |
|---|---|---|
| NAC (2009, 2015) | Not reported | Not reported |
| NPDC (2010) | Not reported | Not reported |
| NPDC (2014) | 38 | 408 |
| NCAEP (2020) | 165 | 806 |
Fig. 1 Distribution of single-subject and group design studies per clearinghouse
Evaluation criteria for group designs in the SMRS (adapted from NAC 2015, pp. 24–28)
NR not reported
Evaluation criteria for single-subject designs in the SMRS (NAC 2009, 2015)
SIG significant, NR not reported; a transition effects were minimized by balancing key variables (e.g., time of day) or condition discrimination
Classification of evidence assessment items used by NPDC/NCAEP for studies with group and single-subject designs
| Group design quality indicators | Single-case design quality indicators |
|---|---|
| 1. Does the study have experimental and control/comparison conditions? | 1. Does the dependent variable align with the research question or purpose of the study? |
| 2. Were appropriate procedures used to increase the likelihood that relevant characteristics of participants in the sample were comparable across conditions? | 2. Was the dependent variable clearly defined such that another person could identify an occurrence or nonoccurrence of the response? |
| 3. Was there evidence for adequate reliability of key outcome measures? And/or when relevant, was inter-observer reliability assessed and reported at an acceptable level? | 3. Does the measurement system align with the dependent variable and produce a quantifiable index? |
| 4. Were outcomes for capturing the intervention’s effect measured at appropriate times (at least pre- and posttest)? | 4. Did a secondary observer collect data on the dependent variable for at least 20% of sessions across conditions? |
| 5. Was the intervention described and specified clearly enough that it could be replicated? | 5. Was mean interobserver agreement (IOA) 80% or greater OR kappa of 0.60 or greater? |
| 6. Was the control/comparison condition(s) described? | 6. Is the independent variable described with enough information to allow for a clear understanding about the critical differences between the baseline and intervention conditions, or were references to other materials used if description does not allow for a clear understanding? |
| 7. Were data analysis techniques appropriately linked to key research questions and hypotheses? | 7. Was the baseline described in a manner that allows for a clear understanding of the differences between baseline and intervention conditions? |
| 8. Was attrition NOT a significant threat to internal validity? | 8. Are the results displayed in graphical format showing repeated measures for a single case (e.g., behavior, participant, group) across time? |
| 9. Does the research report statistically significant effects of the practice for individuals with ASD for at least one outcome variable? | 9. Do the results demonstrate changes in the dependent variable when the independent variable is manipulated by the experimenter at three different points in time or across three-phase repetitions? |
| 10. Were the measures of effect attributed to the intervention? (No obvious unaccounted confounding factors) | |
Table 7 was adapted from the evidence-quality assessment forms presented in Appendix 2 of NPDC’s synthesis report (2014) and Appendix 1 of NCAEP’s report (2020). Boldface text refers to content present only in NCAEP’s (2020) version; ATD, alternating treatment design. Instructions for filling out the form were described only in NPDC’s (2014) report (“instructions: read each item and check the appropriate box. If you check ‘NO’ at any time, the article will not be included as evidence for a practice”). In NCAEP’s (2020) version, a checkbox with the answer “Not reported” was included for some items
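The screening logic quoted above (any “NO” on a quality indicator excludes the article) is an all-items check, and item 5 of the single-case form rests on a simple agreement calculation. The sketch below is illustrative only: the function names and the abbreviated indicator labels are hypothetical, and the IOA helper shows one common computation (interval-by-interval agreement), not necessarily the one each clearinghouse used.

```python
def include_as_evidence(indicator_answers: dict[str, bool]) -> bool:
    """A study counts as evidence for a practice only if every
    applicable quality indicator is answered 'yes' (True)."""
    return all(indicator_answers.values())


def mean_interval_ioa(obs1: list[int], obs2: list[int]) -> float:
    """Interval-by-interval IOA: percentage of intervals in which
    two observers recorded the same value."""
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100 * agreements / len(obs1)


# Example: a single-case study failing the IOA threshold (item 5)
# would be excluded under the NPDC (2014) instructions.
answers = {
    "DV aligns with research question": True,
    "DV clearly defined": True,
    "measurement aligns with DV": True,
    "secondary observer on >= 20% of sessions": True,
    "mean IOA >= 80% or kappa >= 0.60": mean_interval_ioa(
        [1, 0, 1, 1], [1, 0, 0, 1]) >= 80,   # 75.0% -> fails
    "IV clearly described": True,
}
```

Here `include_as_evidence(answers)` returns `False` because a single indicator fails, mirroring the forms' all-or-nothing inclusion rule.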