Theresa Jackson Santo, Jill A Brown, Stephanie A Q Gomez, Lauren A Shirey.
Abstract
Service Members and military beneficiaries face complex and ill-structured challenges, including suicide, sexual violence, increasing health care costs, and the evolving coronavirus pandemic. Military and other government practitioners must identify effective programs, policies, and initiatives to preserve the health and ensure the readiness of our Force. Both research and program evaluation are critical to identify interventions best positioned to prevent disease, protect the public's health, and promote health and well-being within our ranks to retain a medically ready force and reduce the global burden of disease. While military and medical leaders are typically well versed in research and understand the role of research in evidence-informed decisions, they may be less aware of program evaluation. Program evaluation is the systematic application of scientific methods to assess the design, implementation, improvement, or outcomes of a program, policy, or initiative. Although program evaluators commonly utilize scientific or research methods to answer evaluation questions, evaluation ultimately differs from research in its intent. Several recently published federal and Department of Defense policies specifically reference program evaluation, emphasizing its importance to the military and government as a whole. The Army is uniquely positioned to conduct medical and public health evaluation activities, and several Army organizations and entities routinely perform this work. For example, the United States Army Public Health Center (APHC) is among the recognized military experts in public health assessment and program evaluation. Given the breadth of our work, the APHC understands the challenges of conducting evaluation studies in the Army, and we have thoughtfully examined the conditions common to successful evaluation studies.
In this commentary, we share our lessons learned to assist military colleagues, potential partners, and others in successfully evaluating the programs, policies, and initiatives necessary to keep our Service Members and beneficiaries healthy and ready. There are several challenges to executing evaluation studies in the Army that may be relevant across all Services. These include but are not limited to frequent Army leadership transitions, urgency to report study results, lack of program documentation and adequate planning for evaluation, expectation management to ensure stakeholders are well-informed about the evaluation process, and a disorganized data landscape. These challenges may hinder the successful execution of evaluation studies, or prevent them from being attempted in the first place, depriving Army leaders of quality, actionable information to make evidence-informed decisions. Despite the aforementioned challenges, we have identified a number of best practices to overcome these challenges and conduct successful evaluation studies. These facilitators of successful evaluations can be summarized as: collaboration with engaged stakeholders who understand the value of evaluation, evaluation studies aligned with larger strategic priorities, agile methodology, thoughtful evaluation planning, and effective communication with stakeholders. We wholeheartedly recommend and encourage program evaluation at every opportunity, and we anticipate the call for evaluation and evidence-informed decisions to continually increase. Our hope is that others - to include partners and stakeholders within and external to the military - will be able to leverage and apply this information, especially the identified best practices, in their evaluation efforts to ensure success. Published by Oxford University Press on behalf of the Association of Military Surgeons of the United States 2021. 
This work is written by (a) US Government employee(s) and is in the public domain in the US.
Year: 2022 PMID: 34962281 PMCID: PMC9383159 DOI: 10.1093/milmed/usab516
Source DB: PubMed Journal: Mil Med ISSN: 0026-4075 Impact factor: 1.563
FIGURE 1. Differences between program evaluation and research. The distinctions between evaluation and research exist in the methods and analysis steps, as well as in the focus, goal, setting, and values related to each.[4–6]
Considerations for Program Evaluation in the Military with Example Resources and References
| Consideration | Example resources and references relevant to consideration |
|---|---|
| Consideration 1. Military missions are largely driven by regulatory authority. Understanding the regulatory authority that exists to support the evaluation of whatever is being assessed is crucial, so be aware of these authorities and their review and approval processes before commencing any evaluation study. | Resource 1. Public Law No: 115-435, Foundations for Evidence-Based Policymaking Act of 2018 (January 14, 2019), calls for each federal agency to “designate a senior employee as Evaluation Officer to coordinate evidence-building activities and…to advise on statistical policy, techniques, and procedures.” |
| Consideration 2. There has been an increased focus on evaluation in the military over the last 5-10 years, often in the context of resource decisions. This is advantageous in that senior leaders are now asking for evaluations to be conducted, and program, policy, or initiative stakeholders are seeking out evaluation to show evidence of effectiveness, garner continued or increased resourcing, and demonstrate a commitment to program improvement and accountability. It can be challenging in that leaders may want to make decisions using limited information, and stakeholders may resist evaluation for fear of losing resources if results are unfavorable. | Resource 2(a). DoD Instruction 1342.22 (Military Family Readiness, April 11, 2017) states, “The impact of family readiness services shall be measured through program evaluation that uses valid and reliable outcome, customer satisfaction, cost, and process measures that are linked to specific and measurable performance goals… to inform decisions regarding sustainment, modification or termination of family readiness services” (p. 27). |
| Consideration 3. There are important authorities and approval processes for collecting primary evaluation data from service members. Several regulatory authorities and approval processes define the types of information that can be collected from military audiences (i.e., active duty service members, reservists and Guardsmen, women, and family members). Any entity collecting data needs to recognize and mitigate potential human subjects concerns (e.g., over-surveying and undue command influence). | Resource 3(a). To ensure the activity is committed to protecting the rights and welfare of all participants, refer to the ethical principles established by the Belmont Report and the legal requirements established by Title 32 Code of Federal Regulations (CFR) Part 219.102 and DoD Instruction 3216.02. |
| Consideration 4. Not all work is publicly accessible, and there are parameters on what can be communicated, to whom, and in what sequence. Some evaluation work is considered privileged or sensitive in nature and may not be available through publicly released reports or papers. There may be strict guidance on who receives what information, as well as how and when they receive it. When possible, agreements for review and release of results should be made upfront so that, at a minimum, valuable lessons learned can be shared with other military organizations. | Resource 4(a). The DoD asks contributors to use the document classification system outlined in DoD Instruction 5230.24, Distribution Statements on Technical Documents, dated August 23, 2012, to indicate how broadly documents should be distributed based on defined criteria. |
| Consideration 5. It is critical to understand, and be able to operate within, the military environment when executing evaluation studies. The military, and each branch of service, has a culture of its own with established norms, language, and traditions. Trying to execute an evaluation study as one would in a civilian or academic setting will not only make the evaluation team appear culturally incompetent but will also be an exercise in futility. Leveraging chains of command, understanding the “mission first” mentality, and knowing and using military terms relevant to evaluation (e.g., MOPs—measures of performance; MOEs—measures of effectiveness; AARs—After-Action Reviews) are vital to evaluation success. | Resource 5(a). The Substance Abuse and Mental Health Services Administration published |