
Expert-generated standard practice elements for evidence-based home visiting programs using a Delphi process.

Emily E Haroz1, Allison Ingalls1, Karla Decker Sorby2, Mary Dozier3, Miranda P Kaye4, Michelle Sarche5, Lauren H Supplee6, Daniel J Whitaker7, Fiona Grubin1, Deborah Daro8.   

Abstract

BACKGROUND: States, territories, non-profits, and tribes are eligible to obtain federal funding to implement federally endorsed evidence-based home visiting programs. This represents a massive success in translational science, with $400 million a year allocated to these implementation efforts. This legislation also requires that 3% of this annual funding be allocated to tribal entities implementing home visiting in their communities. However, implementing stakeholders face challenges with selecting which program is best for their desired outcomes and context. Moreover, recent reviews have indicated that when implemented in practice and delivered at scale, many evidence-based home visiting programs fail to replicate the retention rates and effects achieved during clinical trials. To inform program implementers and better identify the active ingredients in home visiting programs that drive significant impacts, we aimed to develop an expert-derived consensus taxonomy of the elements used in home visiting practice that are essential to priority outcome domains.
METHODS: We convened a panel of 16 experts representing researchers, model representatives, and program implementers using a Delphi approach. We first elicited standard practice elements (SPEs) using open-ended inquiry, then compared these elements to behavior change techniques (BCTs) given their general importance in the field of home visiting; and finally rated their importance to 10 outcome domains.
RESULTS: Our process identified 48 SPEs derived from the panel, with 83 additional BCTs added based on the literature. Six SPEs, mostly related to home visitor characteristics and skills, were rated essential across all outcome domains. Fifty-three of the 83 BCTs were rated unnecessary across all outcome domains.
CONCLUSIONS: This work represents the first step in a consensus-grounded taxonomy of techniques and strategies necessary for home visiting programs and provides a framework for future hypothesis testing and replication studies.


Year:  2022        PMID: 36251646      PMCID: PMC9576067          DOI: 10.1371/journal.pone.0275981

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.752


Introduction

Home visiting programs focusing on the needs of families and children began as an important strategy in the War on Poverty in the 1960s [1]. Evidence has mounted regarding the effectiveness of home visiting programs to improve maternal and child health, prevent child abuse and neglect, encourage positive parenting, and promote child development [2-5]. The American Academy of Pediatrics® endorses home visiting as a critical strategy to promote child wellbeing and build lifelong health [1]. Currently, $400 million per year through FY2022 has been allocated for home visiting programs through the federal Maternal, Infant, and Early Childhood Home Visiting (MIECHV) and Tribal MIECHV Programs. Many states also offer additional funding for home visiting beyond MIECHV funds because they recognize the importance of home visiting to family and community health and well-being. MIECHV funding is available for states, territories, nonprofit organizations, and tribal nations to provide voluntary home visiting. While there are currently 22 evidence-based models endorsed by the Home Visiting Evidence of Effectiveness federal review, MIECHV grantees may choose 1 or more of 19 models that have implementation support available [1, 6]. Tribal MIECHV grantees may adopt models that are either evidence-based or a promising approach, due to the limited evidence of effectiveness of home visiting programs in tribal communities [7]. Family Spirit® is currently the only home visiting model evaluated with tribal communities that meets HHS criteria for evidence of effectiveness. Home visiting models and their respective interventions are diverse in their theoretical underpinnings and vary in their specific aims, target population, type of home visitors/providers and supervisors, content, schedule of visits, and means of administration. Yet, they are similar in that most provide education, support, and referrals to community services to families living in service areas.
In the most comprehensive review of home visiting models, the Home Visiting Evidence of Effectiveness review has examined the available evidence on 50 different home visiting models [8]. Programs range from broad-based models that attempt to change multiple outcomes (e.g., Nurse Family Partnership®, Parents as Teachers®) to models focused more narrowly on targeted outcomes, such as the Attachment and Biobehavioral Catch-up Intervention [9] or SafeCare® to prevent child neglect and physical abuse [10]. Other home visiting models target certain types of family structures (e.g., adolescent mothers), address the needs of specific cultural or community groups (e.g., American Indian populations), focus only on a certain developmental time period (e.g., Home Instruction for Parents of Preschoolers [11]), or incorporate adjunct services for specific risks such as maternal depression [12]. Given the number of federally endorsed evidence-based home visiting (EBHV) programs (22 at time of publication), stakeholders across local, tribal, and state level organizations face significant challenges identifying and selecting which programs to use based solely on the evidence. Currently, these programs are listed on a central website, HomVEE [13]. The website provides a snapshot of each EBHV model, including details on the populations served and the evidence for certain outcomes. Those interested in implementing home visiting services face the real-world complexities of “evaluating the quality and relevance of competing programs and prioritizing certain outcomes over others in the context of limited time and resources for training and program delivery” [14]. For example, program selection can be complicated by the limited evidence base for the populations the program serves (e.g., American Indian and Alaska Native). Programs may also struggle with how to balance competing priorities.
For example, if programs show benefits in one outcome category but not in other outcomes of interest, it may be complicated to choose which program to implement for the given service population. Moreover, several key implementation challenges have been identified as EBHV programs have moved from effectiveness trials to wide-scale implementation. These include challenges with client engagement and retention [4], balancing flexibility and fit during implementation with fidelity to the model that was previously tested [15, 16], and diminished average effect sizes as home visiting programs deliver services at scale [6]. In response to these challenges, program developers are being encouraged by both funders (e.g., MIECHV) and researchers to unpack their approach and identify, with greater specificity, their key design elements, content, and service delivery strategies. From the research perspective, precision home visiting (PHV) has been championed as a priority to guide this program assessment effort [17]. The precision paradigm primarily emphasizes the importance of identifying the specific behavior change techniques within intervention models and the mechanisms of action by which those techniques promote targeted behaviors and ultimately drive outcomes [17]. While many EBHV models broadly identify target outcomes and the specific changes in participant knowledge, skills, and attitudes associated with achieving outcomes, a precision lens asks program developers to apply a more specific and consistent framework in defining their efforts and monitoring their implementation and impacts (Fig 1A). While certainly critical to explore in improving program effects, the components of a quality program often extend beyond the direct interaction between a provider and participant. Elements of a program’s intervention content, implementation structure, and the required skills and personal characteristics of home visitors also contribute to an intervention’s success (Fig 1B).
Fig 1

Expanded version of the home visiting paradigm.

Adapted from Duggan et al., 2021 [18].

In an effort to both simplify the decision process for implementers and potentially drive a better understanding of which intervention techniques and delivery methods should be prioritized for future studies, we sought to identify “standard practice elements” (SPEs) across a broad range of strategies that shape the structure of EBHV models. We defined SPEs as the techniques and strategies used in early childhood home visiting as part of the larger intervention. Our approach to identifying SPEs was guided by a distillation and matching model [19, 20], which posits that interventions are conceptualized as composites of individual strategies that can be identified and then matched to client, setting, or other factors that might be relevant for selecting which strategies are most appropriate and when. Due to the complexities of specifying techniques across a set of models that are highly diverse in their theoretical orientation, discipline, and scope, as a first step in this process we used a Delphi approach to generate a consensus-grounded taxonomy of techniques and strategies that are used across EBHV models in the United States (US). Briefly, the Delphi method is a structured group communication process for identifying consensus among a group of experts [21]. Our Delphi approach included recruiting a panel of experts and model developers who met virtually, generated ideas using open-ended techniques, and refined these ideas in an iterative process. The overall goals were to: 1) synthesize the knowledge accumulated over decades of research and practice in home visiting programs to identify the SPEs that models consider essential to their success; and 2) create a taxonomy of early childhood EBHV SPEs to inform the service delivery process, help implementers identify and select interventions, and inform hypotheses for future PHV research.
We specifically focused these efforts on both non-Tribal and Tribal Home Visiting (THV) contexts, because of the overlapping funding streams but different processes for selecting EBHVs, and with respect for Tribes as sovereign nations within the US.

Methods

The Delphi process [22] was conducted over a nine-month period beginning in May 2020 and ending in January 2021 (Fig 2).
Fig 2

Delphi process.

Gray boxes represent preparatory steps by the research team; white rectangular boxes outlined in black illustrate core events for collecting input and decision-making; and bolded text represents the development of a taxonomy of evidence-based home visiting standard practice elements. Members of the research team are not included in the numbers of survey respondents.


Participants

Study team members employed purposive sampling [23] to identify an expert panel including members representing early childhood home visiting researchers, allied experts from home visiting models, and leaders in THV development and implementation. Participants came from diverse geographic areas of the US and represented several sectors of the home visiting field including model developers, implementers, and researchers. At the end of March 2020, the study team emailed an invitation to participate to 22 home visiting experts. Three potential panel members were unable to participate due to other commitments. Three model developer representatives never responded to the invitation. All invited leaders in THV agreed to participate in the study. Ultimately, there were 16 members of the expert panel, each of whom participated in some capacity during all three rounds of the Delphi process. Panel members held multiple roles, including n = 10 (63%) who identified as a researcher, n = 7 (43%) who identified as a model representative, and n = 7 (43%) who identified as a tribal stakeholder.

Procedures

Our approach was guided by the Delphi method, which was developed by the RAND Corporation in the mid-20th century as a way for researchers to reliably build consensus among a group of experts [24, 25]. Because this study took place during the COVID-19 pandemic, we adapted the Delphi approach, carrying out the three rounds of questionnaires and discussion via a series of Qualtrics surveys [26] and Zoom video conferences [27], coupled with email communication as necessary. Since the expert panel was only asked questions about home visiting in general and no personal information was solicited, this process was exempt from oversight by the Institutional Review Board [28]. As such, while participants did not provide formal informed consent, they did agree to participate prior to the process as part of the virtual convenings and surveys. At the start of the expert elicitation process, participants were divided into two panels: one for researchers and model developers, and one for leaders in THV. Accordingly, two project launch meetings were conducted via Zoom video conference in May 2020 to provide expert panel members with an overview of the project prior to launching Round 1 of the Delphi process.

Round 1

Both expert panels received the same survey. In this survey, free lists were used to elicit SPEs in EBHV. Free listing was selected as a feasible approach that would preserve open-ended inquiry. Participants were provided with a definition of SPEs—“the techniques and strategies used in early childhood home visiting as part of a larger intervention”—and were asked to list all elements they considered important in early childhood home visiting. There was room to enter up to 20 individual SPEs, in addition to an open-text field for additional entries. Participants were then asked to identify which free-listed SPEs are critical to home visiting programs that serve tribal communities. Lastly, participants were asked to list any additional SPEs specific to THV not already mentioned. There was space for up to five individual tribal SPEs, in addition to an open-text field for additional entries. Data collection lasted for 3 weeks, concluding when the target sample size (n = 16) was reached. To view the full first survey, refer to S1 File.

Analysis of Round 1

Responses to the first survey were exported and combined into one Excel file with two tabs labeled “general elements” and “tribal elements.” Each tab in the Excel file had four columns: element (i.e., standard practice element label), frequency (i.e., count of respondents who free-listed the same or similar element), respondent initials (so we could follow up if needed), and other similar descriptors (so we could see how each element was coded). Study team members read through each respondent’s free-listed SPEs and decided whether each was distinct or could be combined with another element already listed in the Excel file. For example, the element “linkage to services” was listed by 12 respondents using different descriptors such as “connecting families to resources,” “making a referral to appropriate community service,” and “resource connection,” among others.
If it was unclear whether listed elements referred to the same thing, the study team kept them separate. Two additional video conferences were held with each of the expert panels in June 2020 to further clarify and refine the list generated through the initial survey. Study team members guided these group discussions to identify commonalities, clarify meaning, and solicit additional thoughts. In addition, during these meetings each panel independently concluded that there needed to be a broader level of categorization of SPEs to capture the multi-level hierarchy of home visiting (e.g., which elements are delivered during the visit, which elements concern home visitors’ training and background, etc.). As a result, via email feedback, panel members were asked to build consensus on broad category names and definitions to be used in future Qualtrics surveys.
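The bookkeeping described above—collapsing similar free-listed descriptors into one canonical element and counting how many respondents listed it—can be sketched as follows. This is a minimal illustration, not the study's actual analysis: the synonym map and responses are invented, standing in for the study team's manual judgment calls in Excel.

```python
from collections import Counter

# Hypothetical synonym map: each free-listed descriptor -> canonical element.
# In the study this mapping was made by hand; here it is illustrative only.
SYNONYMS = {
    "connecting families to resources": "linkage to services",
    "making a referral to appropriate community service": "linkage to services",
    "resource connection": "linkage to services",
    "linkage to services": "linkage to services",
}

def tally_elements(responses):
    """Map each respondent's descriptors to canonical elements and count
    how many respondents listed each element at least once."""
    counts = Counter()
    for descriptors in responses:  # one list of free-listed strings per respondent
        canonical = {SYNONYMS.get(d.lower(), d.lower()) for d in descriptors}
        counts.update(canonical)   # a set, so each respondent counts once per element
    return counts

# Three hypothetical respondents.
responses = [
    ["Connecting families to resources", "Active listening"],
    ["Resource connection"],
    ["Linkage to services", "Active listening"],
]
counts = tally_elements(responses)
print(counts["linkage to services"], counts["active listening"])  # 3 2
```

Descriptors the map does not recognize are kept as distinct elements, mirroring the study team's rule of keeping unclear items separate for later discussion.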

Round 2

For Round 2, the expert panels were combined, and they remained combined through Round 3. Thus, all email communication, subsequent surveys, and video conferences were held with the entire group of experts. In September 2020, expert panel members were asked to complete a second Qualtrics survey. In this survey, all previously elicited SPEs were presented to participants, and they were asked to group these into the broad categories defined in Round 1. Because each SPE was listed as a multi-select question type, it could be grouped into multiple broad categories. At the end of the survey, participants had the option to add additional SPEs that were not already included in the master list. Data collection lasted for 3 weeks, and responses were collected from n = 14 expert panel members. The full survey is available in S2 File. The second phase of Round 2 was a video conference held with all expert panel members at the beginning of October 2020. The study team presented the results of the second survey and asked follow-up questions to further collapse or clarify SPEs. This included re-wording of some elements and the addition of elements to better capture specific practice activities. These additions were made through consensus processes across panel members. Because several elements still needed clarification, expert panel members were asked to complete a third, brief Qualtrics survey to help finalize the list of SPEs. Participants were also asked for further input (i.e., “Is this definitely a standard practice element in home visiting?”) on free-listed elements that were initially provided by only one panel member in the first Qualtrics survey. Data collection lasted a little over 1 week, concluding when the target sample size (n = 16) was reached. To view the full survey, refer to S3 File.

Data analysis for Round 2

Responses to the first survey in Round 2 were exported, and a study team member tallied the number of responses for each SPE under each broad category. The full study team then met to make decisions about final SPE-broad category matching. A rule was created that any SPE added to a broad category by a majority of respondents (greater than 50%) would be considered assigned to that category. For example, the SPE “teaching goal setting skills to parents” was voted by nine expert panel members as belonging to the broad category “home visiting content.” Since that is more than half of the 14 total respondents to that survey, it was coded in that category for the results. Ties were resolved through study team discussion. Responses to the second survey in Round 2 were similarly exported and analyzed, with SPEs included in the Round 2 list only if greater than 50% of respondents endorsed their inclusion.
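The majority-vote rule above can be sketched as a small function. This is an illustrative reconstruction only; the ballots below are hypothetical and mirror the "nine of fourteen" example in the text.

```python
from collections import Counter

def assign_categories(votes, n_respondents):
    """Assign an SPE to every broad category chosen by more than half of
    respondents; categories at or below the threshold are left for
    study team discussion (e.g., ties)."""
    counts = Counter(votes)
    return sorted(cat for cat, n in counts.items() if n > n_respondents / 2)

# Hypothetical ballots: 9 of 14 respondents placed this SPE under
# "home visiting content", 5 under "home visiting process/delivery".
ballots = (["home visiting content"] * 9
           + ["home visiting process/delivery"] * 5)
print(assign_categories(ballots, n_respondents=14))  # ['home visiting content']
```

Because the survey question was multi-select, one SPE can legitimately end up assigned to more than one category if each attracts a majority.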

Round 3

To align with other efforts in the PHV field [17, 29], and to keep consistent with several home visiting program approaches aimed at changing parental behavior, at the beginning of this final round of the Delphi process, study team members engaged in an internal process to match expert-generated SPEs to the Behavior Change Technique Taxonomy [30]. The Behavior Change Technique Taxonomy was developed by researchers in the United Kingdom through a Delphi process in order to provide structure for reporting on behavior change interventions. Behavior change techniques (BCTs) are strategies that help an individual change their behavior to achieve better health. Identifying and understanding the mechanisms of BCTs is the current focus of most PHV research. Our approach to defining SPEs was intentionally broader, focusing not just on individual change but on techniques and strategies used broadly in early childhood home visiting programs to effect change. First, a full list of SPEs and BCTs was created, along with definitions and examples for each item in the list. The Behavior Change Technique Taxonomy had published definitions and examples, while the study team created definitions and examples for the panel-generated SPEs based on our panel discussion notes and experience in the home visiting field. The list of SPEs and BCTs was input into a Qualtrics survey using the “Pick, Group, and Rank” question type to facilitate independent coding by two internal coders from the study team. Each coder independently completed the survey by dragging and dropping panel-generated SPEs into matching BCTs. The two coders and Principal Investigator discussed the results of the survey and came to consensus on any disagreements in the coding. Afterward, a final list of all SPEs and BCTs was created. For the final Round 3 of Delphi surveys, panel members completed a Qualtrics survey to prioritize SPEs and BCTs to outcome domains in evidence-based home visiting (S4 File).
Outcome domains selected for this project were adapted from the Home Visiting Evidence of Effectiveness review and the Pew Home Visiting Data for Performance Initiative [31, 32], with two additional tribal outcome domains that were of interest to the study team. Domains included: 1) promotion of healthy physical child development (e.g., healthy eating, breastfeeding); 2) promotion of social-emotional learning; 3) improving cognitive development (e.g., language development); 4) linkages and coordination of referrals for other community resources and supports; 5) reductions in maternal distress (e.g., depression, anxiety, stress); 6) reductions in substance use; 7) promotion of positive parenting practices; and 8) reductions in child maltreatment. Domains specific to THV included: 9) reductions in tribally related health disparities (e.g., Type 2 Diabetes, Mental Health); and 10) promotion of connection to culture. Because the list of SPEs and BCTs was so long, each expert panel member was randomly assigned to complete the matching survey for only four outcome domains. Ratings were chosen based on methods used in a previous study by McLeod et al. [33] and included: 0 = “Not necessary,” 1 = “Useful, but not essential,” and 2 = “Essential” to achieve the outcome. Expert panel members rated each SPE and BCT according to its importance in achieving the outcome domain. Data collection lasted for 7 weeks, concluding when the target sample size (n = 16) was reached. To view the full final survey, refer to S5 File.

Data analysis for Round 3

Data were exported and combined in Excel for each outcome domain. For each element, the frequency of each rating (i.e., Not necessary, Useful, Essential) and the average rating were tallied for each outcome domain. Average ratings were used as an indicator of the strength of the relationship between the element and the outcome and were visualized as heatmaps.
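The averaging step that feeds the heatmaps can be sketched as below. The ratings are invented for illustration; the study performed this tallying in Excel rather than code.

```python
def average_ratings(ratings):
    """Average the 0/1/2 importance ratings for each (element, domain) cell;
    these per-cell averages are what the heatmaps visualize."""
    sums, counts = {}, {}
    for element, domain, score in ratings:  # one row per rater
        key = (element, domain)
        sums[key] = sums.get(key, 0) + score
        counts[key] = counts.get(key, 0) + 1
    return {k: sums[k] / counts[k] for k in sums}

# Hypothetical ratings (0 = not necessary, 1 = useful, 2 = essential)
# from three raters assigned to the same element/domain pair.
ratings = [
    ("relationship building", "maternal distress", 2),
    ("relationship building", "maternal distress", 2),
    ("relationship building", "maternal distress", 1),
]
avg = average_ratings(ratings)
print(round(avg[("relationship building", "maternal distress")], 2))  # 1.67
```

Because panel members were randomly assigned to only four domains each, the denominator differs per cell, which is why the sketch counts raters per (element, domain) pair rather than assuming n = 16 throughout.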

Final taxonomy

Final taxonomies of SPEs and BCTs were developed for each outcome domain. This was done by retaining any SPE or BCT that was rated as “Essential” to that outcome domain by 50% or more of respondents.
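The retention rule above amounts to a simple threshold filter per outcome domain. A minimal sketch, with hypothetical ratings (the element names echo the results but the scores are invented):

```python
def final_taxonomy(ratings_by_element, threshold=0.5):
    """Retain an element for a domain when at least `threshold` of its
    raters scored it 2 ("Essential")."""
    retained = []
    for element, scores in ratings_by_element.items():
        essential = sum(1 for s in scores if s == 2)
        if essential / len(scores) >= threshold:
            retained.append(element)
    return sorted(retained)

# Hypothetical ratings for one outcome domain.
domain_ratings = {
    "relationship building": [2, 2, 2, 1],        # 75% essential -> retained
    "home visitor sense of humor": [1, 0, 2, 0],  # 25% essential -> dropped
}
print(final_taxonomy(domain_ratings))  # ['relationship building']
```

Running this filter once per outcome domain yields ten domain-specific taxonomies; elements retained in every domain correspond to the universally essential SPEs reported in the Results.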

Results

Round 1

In total, there were 58 SPEs free-listed by expert panelists in the first Qualtrics survey, along with 10 unique THV SPEs, for a total of 68 free-listed EBHV SPEs. Of these, 51 general SPEs were also thought to be critical to THV. After the Zoom video conference to discuss survey results, the list was revised to comprise a new total of 69 SPEs, including 11 THV SPEs. Group consensus was reached on n = 5 broad category names, definitions, and examples (See Table 1).
Table 1

Frequency of SPEs* by category domains.

Category Domain (a) | Consensus Definition | Number of Round 2 SPEs (b) | Examples of SPEs
Model philosophy | The tenets of an evidence-based home visiting program that drive the other components of home visiting, including a model’s theory of change and cultural lifeways. | 6 | Model is based on a parenting framework; Home visitor understands, affirms, and respects cultural identity of clients (THV)
Program implementation | Strategies, techniques, structures, and processes (e.g., program design) at the model/organizational/site level that relate to ensuring successful delivery of the EBHV, including buy-in (community, agency, home visitor), staff training, supervision, fidelity, funding, and payment structures. | 15 | Providing clients with linkage to services; Program strengthening of service coordination; Culturally attuned and responsive approach with all staff training, strategies, materials (THV)
Home visiting content | The content (i.e., the “what”) that is conveyed by home visitors to their clients during service delivery. | 6 | Teaching relaxation/self-regulation skills to parents; Teaching goal setting skills to parents
Home visiting process/delivery | The strategies and techniques home visitors use during service delivery with their clients. | 24 | Child assessment and screening; Active listening; Role play/coaching; Home visitor shares resources in client’s Native language (THV)
Home visitor personal characteristics | Home visitor characteristics that may contribute to improvement in client outcomes but that aren’t typically specified in a model’s theoretical/conceptual framework or their content. These are aspects of the home visitor that are not an explicit part of the model. | 11 | Home visitor flexibility/adaptability; Reliable home visitor; Home visitor sense of humor

* SPE = standard practice element; THV = tribal home visiting

a Categories are not listed in order of priority, and they are not listed in a hierarchy.

b Not mutually exclusive


Round 2

After the second Qualtrics survey, the list of SPEs included n = 62 individual elements (51 general and 11 unique to THV) categorized into n = 5 broad categories. Results from the group discussion and third Qualtrics survey led to a reduction in the total number of SPEs (n = 45, including 5 unique to THV). The frequencies by broad category name for the final Round 2 survey can be found in Table 1. The category with the most SPEs assigned to it was home visiting process/delivery (n = 24), followed by program implementation (n = 15) and home visitor personal characteristics (n = 11). Model philosophy and home visiting content each had six SPEs assigned to them.

Round 3

After internal matching of SPEs to BCTs, the draft taxonomy grew to 38 SPEs, 10 overlapping SPEs/BCTs, and 83 BCTs. Expert panel members used this list to rate each element by importance to achieving each outcome domain. Fig 3 presents a heatmap that displays the average rating of 28 of the elements by outcome domain, with darker shading indicating a higher average rating across participants. The heatmaps for the remaining elements can be found in Supporting information. Average ratings ranged from 0 to 2.0. The highest rated element across all outcome domains was “relationship building,” while nine elements were rated zero across all outcome domains. All of the elements rated zero across all outcome domains were from the Behavior Change Technique Taxonomy.
Fig 3

Heatmap of SPEs and BCTs and their relative importance across outcome domains.

Using a cut-off point of 50% or more respondents rating the element as ‘Essential’ to classify the element as essential to changing the specified outcomes, the number of SPEs and BCTs classified as essential by domain can be found in Table 2. Maternal distress included the greatest number of SPEs and BCTs, with 51 elements (34 SPEs and 17 BCTs) classified as essential. No BCTs were classified as essential to increasing referrals; all 22 essential elements for that domain came from the expert-generated SPEs.
Table 2

SPEs and behavioral change techniques classified by 50% or more respondents as essential by domain.

Outcome Domain | Number of SPEs + Behavior Change Techniques | Number of unique SPEs | Number of unique Behavior Change Techniques | Number of non-essential SPEs | Number of non-essential Behavior Change Techniques
Child physical development | 47 | 29 | 11 | 10 | 72
Child social emotional learning | 40 | 37 | 3 | 9 | 80
Child cognitive development | 35 | 27 | 8 | 19 | 75
Increased referrals | 22 | 22 | 0 | 24 | 83
Maternal distress | 51 | 34 | 17 | 12 | 66
Parental substance use | 30 | 20 | 10 | 26 | 73
Positive parenting | 37 | 29 | 8 | 17 | 75
Prevention of maltreatment | 45 | 35 | 10 | 11 | 73
Tribal health disparities | 33 | 27 | 6 | 19 | 77
Connection to culture | 25 | 23 | 2 | 23 | 81
Altogether, six SPEs were rated essential across all outcome domains while 54 elements were not rated as essential across any of the outcome domains (Table 3). Five out of the six SPEs rated as essential for all outcome domains represented key personal characteristics and skills of home visitors (Table 3). The one additional SPE not related to home visitor personal characteristics was: “Culturally attuned and responsive approach with all staff training, strategies, materials” (Table 3). Content based home visiting techniques that emerged as highly rated across 90% of outcomes included 1) Providing clients with linkages to services; and 2) Maternal risk screening and assessment (Table 3). Across all ten outcome domains of home visiting, 75 SPEs were classified as essential to achieve at least one outcome, but only fifteen elements were classified as essential across at least nine of the outcome domains (Table 3).
Table 3

Elements classified as essential across 90% or more of the outcome domains.

Element | Child physical development | Child social emotional learning | Child cognitive development | Increased referrals | Maternal distress | Parental substance use | Positive parenting | Child maltreatment | Tribal health disparities | Connection to culture
1. Relationship building 
2. Responsiveness and sensitivity 
3. Home visitor demonstrates cultural humility 
4. Home visitor adaptability with respect to setting and participation 
5. Home visitor understands, affirms, and respects cultural identity of clients 
6. Culturally attuned and responsive approach with all staff training, strategies, materials 
7. Providing clients with linkage to services X
8. Maternal risk assessment and screening  X
9. Proper workloads of staff/ supervisors  X
10. Criteria for staff selection are appropriate for population served  X
11. Home visitor flexibility/ adaptability  X
12. Reliable home visitor  X
13. Active listening  X
14. Empathetic communication  X
15. Home visitor content mastery  X
16. Culturally informed knowledge of the home visitor X
Fig 3 shows a heatmap for 28 SPEs rated across all outcome domains and a sum of their ratings across all domains except the two tribal home visiting-specific outcomes. The remaining heatmap figures can be found in Supporting information. Again, many of the most highly rated SPEs and BCTs relate to home visitor personal characteristics or non-specific skills such as “Relationship building.” Additional content-based home visiting techniques that emerged as highly rated included: 1) Creating action plans based on child screenings; and 2) Teaching problem solving skills (Figs 3 and S1). Home visitor content mastery was also highly rated despite relatively few content-based home visiting techniques being identified as essential across outcomes. This may be because some models include specific content related to a limited set of outcomes (e.g., SafeCare focuses content only on prevention of child maltreatment). All of the 54 elements that never achieved an essential rating from more than 50% of respondents were from the behavior change taxonomy, except for “Home visitor sense of humor,” which was an SPE.

Discussion

The development of a consensus-grounded taxonomy of techniques and strategies used across EBHV programs sets the foundation for necessary next steps in the implementation and scaling of EBHV programs. First, the home visiting research field has begun to explore the potential benefits of conducting PHV studies to strengthen programs and outcomes for families. A necessary part of PHV research is to identify the core techniques and strategies that may drive outcomes. Without consensus on which techniques and strategies are most promising to investigate, PHV research may end up with a proliferation of studies whose findings cannot be replicated or synthesized into cohesive conclusions. Second, identifying the core techniques and strategies has the potential to help stakeholders identify and select the EBHV interventions that best meet their communities’ and families’ needs. For example, our findings suggest that, regardless of the home visiting model used, focusing on the qualifications and skills of home visitors is critical to maximizing outcomes. Moreover, programs can use Fig 3 to identify key features of home visiting programs that are considered important to include in programming for the selected outcome(s). Our results indicated that across 10 common outcome domains of home visiting, 75 SPEs were classified as essential for home visiting models to include to achieve at least one outcome. While we included ratings for BCTs, very few BCTs were identified as essential to achieving outcomes in home visiting. The highest rated BCTs overlapped with expert-generated SPEs and included “Information sharing (by home visitor to client)” and “Teaching problem solving skills to parents.” Moreover, 10 of the SPEs identified as essential across almost all outcome domains related to home visitor personal characteristics and non-specific trained skills rather than content-based techniques or delivery methods.
These results suggest that while some content-based home visiting techniques and delivery methods are essential to achieving outcomes, the process by which content is delivered, and who delivers it, may drive change in priority outcomes. This finding is consistent with findings in the psychotherapy field [34]. Future work is needed to identify important characteristics of home visitors and approaches for training them in the skills that are hypothesized to drive effective programming. From the outset of this process, the expert panel members emphasized a need to think about how SPEs fit into broader categories of program implementation and effectiveness. The five broad categories identified were: 1) Model philosophy; 2) Home visiting content; 3) Home visitor personal characteristics; 4) Home visiting process/delivery; and 5) Program implementation. These broad categories are consistent with the home visiting precision paradigm and point to a need for further studies that focus not only on the active ingredients of a particular model, but also on how implementation science principles shape the context of home visiting. The context of the home visiting program plays a key role in affecting outcomes: implementation outcomes are known to affect participant outcomes, particularly prevention of child maltreatment [35]. There is a need to understand which types of implementation strategies can help address current challenges in the home visiting field, and how. Although behavior change is considered a key activity in home visiting programs, through home visitor support of parents to engage in healthy behavior and positive parenting [36], few of the existing behavior change strategies overlapped with the expert-developed list of SPEs, and almost none were considered essential to achieving any home visiting outcome.
This is notable, particularly in the context of the PHV field. Much of the existing PHV work has been grounded in the paradigm that behavior change is the ultimate goal of most home visiting programs; under that paradigm, the existing Behavior Change Technique Taxonomy should provide a rather exhaustive list of the elements necessary to achieve this change [30]. However, the experts who participated in this Delphi process clearly did not agree that BCTs drive positive change in home visiting. Changing behavior may be more complicated than applying specific techniques and may instead depend on a combination of characteristics, techniques, and implementation context that ultimately translates into effective programming. While our investigation focused on 10 outcomes of home visiting programs, it is possible that BCTs are particularly relevant to certain types of outcomes, such as changing unhealthy eating habits [37-39], which were not captured in our approach. Our findings also point to the need to broaden the set of active ingredients tested in PHV to include the SPEs identified here. Due to the unique contexts and funding streams for THV efforts, and the need to prioritize populations who face significant health disparities, our process focused explicitly on identifying SPEs for eight standard outcomes plus two outcomes determined to be particularly relevant for THV contexts: 1) Promoting a connection to culture; and 2) Reducing tribally relevant health disparities. The SPEs that focused specifically on the home visitor’s cultural knowledge, awareness, humility, and sensitivity were considered particularly important for these outcomes. This is consistent with the broader prevention literature in tribal communities [40], but also relevant when working on prevention efforts in any community [41].
Bridging the research-to-practice gap by summarizing EBHV models’ “standard practice elements” and, ultimately, identifying those that are common or overlapping may be particularly useful for several reasons. First, it may better inform practice because it begins to build a framework that would more easily enable stakeholders to personalize their programming and ensure all desired elements are available. While we did not perform a meta-analysis, our approach may help implementers identify where models overlap with or are distinct from one another and use this information to select or tailor their own approaches. Second, although our approach does not provide information about the impact of each SPE on outcomes, it provides a starting point for researchers to identify which elements might be tested further in the pursuit of empirically supported active ingredients. And third, our hope is to contribute to a shared vision of improving the health and wellbeing of families and communities. Identifying common practices across EBHV models can ultimately lead to more structured tailoring of interventions to better fit the unique needs of participating families. The SPEs may provide a menu of choices from which to personalize a home visiting program for each family and their unique needs. For example, if a mother scores highly on a mental health screening, one of the major goals of her home visiting program might be to teach coping skills and to refer and connect her to services. Similarly, if the home visitor is concerned about the child’s development, administering a screening and developing an action plan may be priorities. These practice elements could also be combined if there are multiple, competing priorities.
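The “menu of choices” logic described above amounts to a simple lookup: given a family’s priority outcomes, select the SPEs rated essential for those outcomes. The sketch below only illustrates this mechanic; the outcome-to-SPE mapping is hypothetical, not the published taxonomy.

```python
# Illustrative sketch of tailoring a visit plan from an SPE taxonomy.
# The mapping below is hypothetical, not the published taxonomy.

SPE_BY_OUTCOME = {
    "maternal_distress": {
        "Teach coping skills",
        "Referral and linkage to services",
    },
    "child_development": {
        "Administer child screening",
        "Create action plan from screening",
    },
}

def tailor_plan(priority_outcomes):
    """Union of SPEs rated essential for each of the family's priority outcomes."""
    plan = set()
    for outcome in priority_outcomes:
        plan |= SPE_BY_OUTCOME.get(outcome, set())
    return sorted(plan)

# A mother screening high for distress whose child also shows developmental
# concerns gets the combined set of elements, handling competing priorities.
print(tailor_plan(["maternal_distress", "child_development"]))
```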
While more research is needed to empirically establish the effectiveness of different SPEs for different outcomes, for now, stakeholders, including program administrators and home visitors, could review home visiting curricula to identify priority outcomes and ensure the SPEs for those outcomes are included in their current practice.

Limitations

There are limitations to consider when interpreting the results of this study. First, the evidence base for each SPE was not considered, because the purpose was to identify a wide range of possible elements. Second, while we attempted to link these elements to a relevant theory in the behavior change literature, other theories or conceptual frameworks may also be considered and might further strengthen this approach. Third, while we attempted to include experts representing a variety of stakeholder perspectives, our sample is not fully representative of the home visiting field. For example, we did not include the voices of home visitors or clients, who may place value on other elements. The combined list of SPEs and BCTs and the number of outcomes were large, preventing coding by all participants; further refinement of the final taxonomy by a wider audience may be helpful. Finally, we did not evaluate how well or to what extent existing evidence-based home visiting models incorporate the SPEs identified. Exploring the extent to which models include these SPEs would be an important future research direction.

Conclusions

This project represents a preliminary step in identifying SPEs that cut across EBHV programs. By starting with open-ended processes (i.e., free listing), we were able to elicit a wide range of potential SPEs based on extensive expertise in the field. This type of inquiry is especially helpful in generating hypotheses. The field of PHV is dedicated to identifying active ingredients, and knowing which active ingredients to investigate, and why, is a critical first step in any scientific endeavor. In the short term, models can use the information in this manuscript to ensure their programs include the essential elements. Next steps for further research might involve refining this initial taxonomy with a broader range of stakeholders and, ultimately, coding actual program materials to identify common practice elements across models. The move towards identifying SPEs across models holds potential to overcome existing implementation challenges and ultimately strengthen impact on children’s and parents’ health, increase home visitor efficiency, and lower program costs while increasing economic and societal benefit.

Supporting information

Expert panel survey 1—free listing (PDF)
Expert panel survey 2—categorizing SPEs (PDF)
Expert panel survey 3—further input on SPEs (PDF)
Internal team survey matching to BCTs (PDF)
Expert panel survey 4—prioritizing SPEs (PDF)
Data file from Delphi round 1 (XLSX)
Data file from Delphi round 2 (XLSX)
Data file from Delphi round 3 (XLSX)
Supplementary figures (4 TIF files)
