Michael A Stoto1,2, Christopher Nelson3, Rachael Piltch-Loeb4,5, Landry Ndriko Mayigane6, Frederik Copper6, Stella Chungong6.
Abstract
BACKGROUND: After Action Reviews (AARs) provide a means to observe how well preparedness systems perform in real-world conditions and can help to identify - and address - gaps in national and global public health emergency preparedness (PHEP) systems. WHO has recently published guidance for voluntary AARs. This analysis builds on that guidance by reviewing evidence on the effectiveness of AARs as tools for system improvement and by summarizing key lessons about ensuring that AARs result in meaningful learning from experience.
Keywords: After action reports (AARs); After action reviews (AARs); Critical incident reviews; Public health emergencies; Public health preparedness; Systems improvement
Year: 2019 PMID: 31601233 PMCID: PMC6785939 DOI: 10.1186/s12992-019-0500-z
Source DB: PubMed Journal: Global Health ISSN: 1744-8603 Impact factor: 4.185
Response capabilities [32]
Detection and assessment
• Surveillance & epidemiological monitoring
• Incident recognition
• Risk characterization
• Laboratory analysis
• Epidemiological investigation
• Environmental monitoring
Policy development, adaptation, and implementation
• For infection control and treatment guidance
• For population-based disease control
• Communicating between national and subnational authorities and enforcing laws and regulations
Health services
• Preventive services
• Medical surge
• Management of medical countermeasures, supplies & equipment
• Medical services for health care workers & emergency responders
Coordination and communication
• Crisis management
• Communication with healthcare providers
• Communication with emergency management, public safety, and other sectors
• Communication with other public health agencies at the global, European, national, and subnational levels
Emergency risk communication
• Address communication inequalities
• Generate dynamic listening and manage rumors
• Communicate risk in an accurate, transparent and timely manner
• Generate and maintain trust
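The capability framework above lends itself to a simple machine-readable form, so that findings from an AAR can be tagged with the capability group they stress. The following is a hypothetical sketch (the structure and function names are illustrative, not part of the paper):

```python
# Hypothetical sketch: the response-capability framework above as a
# nested mapping from capability group to its specific capabilities.
RESPONSE_CAPABILITIES = {
    "Detection and assessment": [
        "Surveillance & epidemiological monitoring",
        "Incident recognition",
        "Risk characterization",
        "Laboratory analysis",
        "Epidemiological investigation",
        "Environmental monitoring",
    ],
    "Policy development, adaptation, and implementation": [
        "Infection control and treatment guidance",
        "Population-based disease control",
        "Communicating between national and subnational authorities "
        "and enforcing laws and regulations",
    ],
    "Health services": [
        "Preventive services",
        "Medical surge",
        "Management of medical countermeasures, supplies & equipment",
        "Medical services for health care workers & emergency responders",
    ],
    "Coordination and communication": [
        "Crisis management",
        "Communication with healthcare providers",
        "Communication with emergency management, public safety, and other sectors",
        "Communication with other public health agencies",
    ],
    "Emergency risk communication": [
        "Address communication inequalities",
        "Generate dynamic listening and manage rumors",
        "Communicate risk in an accurate, transparent and timely manner",
        "Generate and maintain trust",
    ],
}


def capability_group(capability: str) -> str:
    """Return the capability group a specific capability belongs to."""
    for group, capabilities in RESPONSE_CAPABILITIES.items():
        if capability in capabilities:
            return group
    raise KeyError(capability)
```

An AAR finding such as "samples were not sent for testing until Monday" could then be tagged under "Detection and assessment", making it easier to aggregate gaps across incidents.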
Ebola virus in Dallas and New York City
Although global public health systems had been slow to respond to the first cases in West Africa earlier in the year, by September Ebola stories were prominent in the U.S. media and professional publications. In addition, the Centers for Disease Control and Prevention (CDC) and state and local health departments throughout the country alerted hospitals, which in turn distributed this information to front-line providers. On Thursday, September 25, a Liberian resident (Mr. D) visiting relatives in Dallas, Texas, developed symptoms consistent with Ebola and sought care at the Texas Health Presbyterian Hospital emergency department (ED). Despite telling one of the nurses that he was from Liberia, he was sent home. On Sunday, September 28, Mr. D returned to the same hospital by ambulance with more severe symptoms. This time he was considered a potential Ebola case and was “isolated in the ED.” Samples were not sent for testing to CDC and the Texas Department of State Health Services until Monday, and positive results were received on Tuesday, September 30, at which point a public health response was initiated. During this 4-day period, two nurses were infected with Ebola. Mr. D died on September X; the nurses survived.
On Wednesday, October 15, Dr. S, a physician who had been treating Ebola patients in Guinea with Médecins Sans Frontières (MSF), returned home to New York City and in the following days travelled throughout the city using public transportation. On Thursday, October 23, following MSF protocols, he took his own temperature and reported a low-grade fever. A few hours later he was taken by a special ambulance to an isolation ward that had been prepared at Bellevue Hospital Center. Two of Dr. S’s friends were quarantined, and by that evening the Mayor, the New York City health commissioner, and others held a press conference outlining the public health response. Dr. S was treated and survived, and there were no additional cases.
It is clearly inappropriate to directly compare the two cases – an uninsured traveler from Liberia and a physician trained by MSF – and the first case is always more difficult. One can, however, examine each system’s response. Although problems with the electronic health record (EHR) may have contributed to the failure to diagnose Mr. D’s case the first time he came to the hospital in Dallas [
Root Cause Analysis steps and example
1. Define the story arc by summarizing the context and pivotal nodes (events, decisions, time points) when events could have unfolded differently and could have led to a substantially different outcome.
2. Identify the public health system’s major organizational goals or objectives in responding to the incident, including which PHEP Capabilities and IHR (2005) core capacities were stressed.
3. Identify the major response challenges that impeded, or at least had the potential to impede, achievement of the public health system’s goals.
4. Define the immediate causes of the challenges and the factors that contributed to them, whether modifiable (within the jurisdiction’s influence) or not modifiable (outside the jurisdiction’s influence); note pre-event decisions and factors beyond the system’s control.
5. Identify factors that, if not addressed, are likely to limit the public health system in future incidents.
With these steps in mind, RCA can help those conducting the AAR to bring the deepest level of analysis into their review.
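The five RCA steps above can be sketched as a minimal data model for recording a review. This is an illustrative sketch only; the class and field names are hypothetical and not drawn from the paper or the WHO guidance:

```python
from dataclasses import dataclass
from typing import List


@dataclass
class ContributingFactor:
    description: str
    modifiable: bool  # within the jurisdiction's influence? (step 4)


@dataclass
class Challenge:
    description: str                              # step 3: major response challenge
    immediate_causes: List[str]                   # step 4: immediate causes
    contributing_factors: List[ContributingFactor]


@dataclass
class RootCauseAnalysis:
    story_arc: str                    # step 1: context and pivotal nodes
    goals: List[str]                  # step 2: goals / capacities stressed
    challenges: List[Challenge]       # steps 3-4
    future_limiting_factors: List[str]  # step 5

    def modifiable_factors(self) -> List[str]:
        """Factors the jurisdiction could act on before the next incident."""
        return [
            factor.description
            for challenge in self.challenges
            for factor in challenge.contributing_factors
            if factor.modifiable
        ]
```

For example, a review loosely inspired by the Dallas case might record "delayed case recognition" as a challenge, with the EHR workflow as a modifiable contributing factor and "first imported case" as a non-modifiable one; `modifiable_factors()` then surfaces only the items the jurisdiction can address.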
Ensuring Rigor in Case Study and Qualitative Data Collection and Analysis [45, 46]
• Prolonged engagement with the subject of inquiry. Health policy and systems research tends to draw on lengthy and perhaps repeated interviews with respondents and/or days and weeks of engagement at a case study site.
• Use of theory. Theory is essential to guide sample selection, data collection, analysis, and interpretive analysis.
• Case selection. Purposive selection allows earlier theory and initial assumptions to be tested and permits an examination of “average” or unusual experience.
• Sampling. It is essential to consider possible factors that might influence the behavior of the people in the sample and ensure that the initial sample draws extensively across people, places, and time. Researchers need to gather views from a wide range of perspectives and respondents and not allow one viewpoint to dominate.
• Multiple methods. For each case study site, best practice calls for carrying out two sets of formal interviews with all sampled staff, patients, facility supervisors, and area managers and conducting observations and informal discussions.
• Triangulation. Patterns of convergence and divergence may emerge by comparing results with theory in terms of sources of evidence (e.g., across interviewees and between interview and other data), various researchers’ strategies, and methodological approaches.
• Negative case analysis. It is advisable to search for evidence that contradicts explanations and theory and then refine the analysis accordingly.
• Peer debriefing and support. Other researchers should be involved in a review of findings and reports.
• Respondent validation. Respondents should review all findings and reports.
• Clear report of methods of data collection and analysis (audit trail). A full record of activities provides others with a complete account of how methods evolved.
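The ten rigor criteria above can also serve as a simple completeness check on an AAR's methods section. The sketch below condenses them to short labels and uses hypothetical function names; it is an illustrative aid, not part of the cited frameworks:

```python
# The ten rigor criteria above, condensed to short labels.
RIGOR_CRITERIA = frozenset({
    "prolonged engagement",
    "use of theory",
    "case selection",
    "sampling",
    "multiple methods",
    "triangulation",
    "negative case analysis",
    "peer debriefing",
    "respondent validation",
    "audit trail",
})


def missing_criteria(addressed: set) -> list:
    """Return, sorted, the rigor criteria an AAR has not yet addressed."""
    return sorted(RIGOR_CRITERIA - set(addressed))
```

A team drafting an AAR protocol could pass in the criteria their design already covers and review whatever remains before data collection begins.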