Chelsea S Lutz, Mimi P Huynh, Monica Schroeder, Sophia Anyatonwu, F Scott Dahlgren, Gregory Danyluk, Danielle Fernandez, Sharon K Greene, Nodar Kipshidze, Leann Liu, Osaro Mgbere, Lisa A McHugh, Jennifer F Myers, Alan Siniscalchi, Amy D Sullivan, Nicole West, Michael A Johansson, Matthew Biggerstaff.
Abstract
BACKGROUND: Infectious disease forecasting aims to predict characteristics of both seasonal epidemics and future pandemics. Accurate and timely infectious disease forecasts could aid public health responses by informing key preparation and mitigation efforts. MAIN BODY: For forecasts to be fully integrated into public health decision-making, federal, state, and local officials must understand how forecasts were made, how to interpret forecasts, and how well the forecasts have performed in the past. Since the 2013-14 influenza season, the Influenza Division at the Centers for Disease Control and Prevention (CDC) has hosted collaborative challenges to forecast the timing, intensity, and short-term trajectory of influenza-like illness in the United States. Additional efforts to advance forecasting science have included influenza initiatives focused on state-level and hospitalization forecasts, as well as other infectious diseases. Using CDC influenza forecasting challenges as an example, this paper provides an overview of infectious disease forecasting; applications of forecasting to public health; and current work to develop best practices for forecast methodology, applications, and communication.
Keywords: Decision making; Disease outbreaks; Emergency preparedness; Forecast; Infectious disease; Influenza; Pandemic
Year: 2019 PMID: 31823751 PMCID: PMC6902553 DOI: 10.1186/s12889-019-7966-8
Source DB: PubMed Journal: BMC Public Health ISSN: 1471-2458 Impact factor: 3.295
Summary of Completed and Planned EPI Forecasting Challenge Designs as of August 2019
| Challenge Name | Health Outcome of Interest | Year(s) | Target(s) |
|---|---|---|---|
| Predict the Influenza Season Challenge | ILI in the United States at the national/regional level | 2013–14 | Season onset, peak week, peak intensity, season duration |
| FluSight 2014–15 | ILI in the United States at the national/regional level | 2014–15 | Season onset, peak week, peak intensity, weekly ILI percent 1–4 weeks ahead |
| Dengue Forecasting Project | Dengue cases in Iquitos, Peru and San Juan, Puerto Rico | 2015 | Timing of peak incidence, maximum weekly incidence, total number of cases in a transmission season |
| FluSight 2015–16 | ILI in the United States at the national/regional level | 2015–16 | Season onset, peak week, peak intensity, weekly ILI percent 1–4 weeks ahead |
| FluSight 2016–17 | ILI in the United States at the national/regional level | 2016–17 | Season onset, peak week, peak intensity, weekly ILI percent 1–4 weeks ahead |
| FluSight 2017–18 | ILI in the United States at the national/regional level | 2017–18 | Season onset, peak week, peak intensity, weekly ILI percent 1–4 weeks ahead |
| State FluSight 2017–18 | ILI in the United States at the state/territory level | 2017–18 | Peak week, peak intensity, weekly ILI percent 1–4 weeks ahead |
| Influenza Hospitalizations 2017–18 | Influenza hospitalizations in the United States | 2017–18 | Peak week, peak weekly hospitalization rate, weekly hospitalization rates 1–4 weeks ahead |
| FluSight 2018–19 | ILI in the United States at the national/regional level | 2018–19 | Season onset, peak week, peak intensity, weekly ILI percent 1–4 weeks ahead |
| State FluSight 2018–19 | ILI in the United States at the state/territory level | 2018–19 | Peak week, peak intensity, weekly ILI percent 1–4 weeks ahead |
| Influenza Hospitalizations 2018–19 | Influenza hospitalizations in the United States | 2018–19 | Peak week, peak weekly hospitalization rate, weekly hospitalization rates 1–4 weeks ahead |
| Aedes Forecasting Challenge | Presence of Aedes aegypti and Aedes albopictus in US counties | 2019 | Monthly presence of Ae. aegypti and Ae. albopictus |
| FluSight 2019–20 | ILI in the United States at the national/regional level | 2019–20 (future) | Season onset, peak week, peak intensity, weekly ILI percent 1–4 weeks ahead |
| State FluSight 2019–20 | ILI in the United States at the state/territory level | 2019–20 (future) | Peak week, peak intensity, weekly ILI percent 1–4 weeks ahead |
| Influenza Hospitalizations 2019–20 | Influenza hospitalizations in the United States | 2019–20 (future) | Peak week, peak weekly hospitalization rate, weekly hospitalization rates 1–4 weeks ahead |
Major modeling approaches used to generate influenza outbreak forecasts*
| Approach | Description | Strengths | Limitations |
|---|---|---|---|
| Agent-based models | These are computational systems in which persons are treated as individual agents that can interact with other agents and their environment based on specific rules. | These models have been used to address questions relating to the impact of control measures and changes in individual behavior during an outbreak. They allow for interactions between individuals and between individuals and their environments, and can therefore enable the forecasting of influenza dynamics under different intervention and resource allocation scenarios. | One difficulty in applying these models is the assumptions under which they operate, compounded by our limitations in understanding human behavior and contact networks. They are also computationally challenging and often require supercomputers. |
| Compartmental models | These models divide the population into compartments based on disease states and define rates at which individuals move between compartments. Examples include susceptible–infectious–recovered (SIR) and susceptible–exposed–infectious–recovered (SEIR) models. | Compartmental models are attractive due to their simplicity and well-studied behavior. These models are typically extended by defining multiple compartments to introduce subpopulations, or used in combination with other approaches, such as particle filtering, for influenza forecasting. | The usual fully mixed, homogeneous population assumption fails to capture the differences in contact patterns for different age groups and environments. |
| Ensemble models | Ensemble modeling is the process of running two or more models and synthesizing the results into a single forecast with the intent of improving the accuracy. The individual models may be nearly identical to each other or may differ greatly. | Ensemble models typically predict future observations better than a single model. Individual models in the ensemble can be weighted using recent or historical performance, or using a more complex algorithm. | The choice of which forecasts to include and how to weight the individual forecasts in the final ensemble may vary and is not standardized for infectious disease forecasting. |
| Metapopulation models | In between agent-based and compartmental models, populations are represented in structured and separated discrete patches and subpopulations interact through movement. Epidemic dynamics can be described within patches using clearly defined disease states such as in compartmental models. | The detailed mobility networks used in some of these models can enable reliable description of the diffusion pattern of an ongoing epidemic. These models have also been used to evaluate the effectiveness of various measures for controlling influenza epidemics. | As with agent-based models, empirically measuring or making assumptions about interactions and movement is challenging. |
| Method of analogs | The method of analogs is a nonparametric forecasting approach. Forecasting is based on matching current influenza patterns to patterns of historical outbreaks. | The onset of seasonal influenza epidemics varies from year to year in most countries in the Northern hemisphere. As the method of analogs is nonparametric, it does not require explicit assumptions about underlying distributions or seasonality. | These forecasts rely on historical data, which are often limited or unavailable, and finding similar patterns among historical outbreaks can be difficult. |
| Time series models | These models typically use the Box-Jenkins approach and assume that future values can be predicted based on past observations. | These models can capture lagged relationships that usually exist in periodically collected data. In addition, temporal dependence can be represented in models that are capable of capturing trend and periodic changes. | Influenza activity is not consistent from season to season, which could impose limitations on these methods. |
*Adapted from Nsoesie et al., 2014 [19]
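To make the compartmental approach in the table above concrete, here is a minimal discrete-time SIR sketch in Python. The weekly time step, parameter values, and function name are illustrative assumptions for exposition, not taken from the paper or from any FluSight model:

```python
# Minimal discrete-time SIR sketch (illustrative parameters only).
# States are fractions of the population: susceptible, infectious, recovered.
def sir_simulate(s0, i0, r0, beta, gamma, weeks):
    s, i, r = s0, i0, r0
    trajectory = [(s, i, r)]
    for _ in range(weeks):
        new_infections = beta * s * i   # transmission out of S into I
        new_recoveries = gamma * i      # recovery out of I into R
        s -= new_infections
        i += new_infections - new_recoveries
        r += new_recoveries
        trajectory.append((s, i, r))
    return trajectory

# Toy run: basic reproduction number beta/gamma = 3.
traj = sir_simulate(s0=0.99, i0=0.01, r0=0.0, beta=1.5, gamma=0.5, weeks=20)
peak_week = max(range(len(traj)), key=lambda t: traj[t][1])
```

In a forecasting setting, a model like this would be fit to surveillance data (for example with particle filtering, as the table notes) rather than run with fixed parameters, and extensions add compartments for age groups or exposure states.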
Fig. 1The use of trade names is for identification only and does not imply endorsement by the Centers for Disease Control and Prevention and/or the Council for State and Territorial Epidemiologists
Fig. 2The Morbidity and Mortality Weekly Report (MMWR) week is the week of the epidemiologic year to which the National Notifiable Diseases Surveillance System (NNDSS) disease report is assigned by the reporting local or state health department for the purposes of disease incidence reporting and publishing [36]. Values range from 1 to 53, although most years consist of 52 weeks. The weeks shown in the figure above are for example only, as MMWR weeks and the corresponding calendar dates may shift from year to year
Summary of results from the FluSight influenza forecast challenges*
| | 2013–14 season | 2014–15 season | 2015–16 season | 2016–17 season | 2017–18 season |
|---|---|---|---|---|---|
| Number of participating teams | 9 | 5 | 11 | 21 | 22 |
| Number of submitted forecasts† | 13 | 7 | 14 | 28 | 29 |
| Season onset top skill | N/A** | 0.41 | 0.18 | 0.78 | 0.69 |
| Peak week top skill | N/A | 0.49 | 0.20 | 0.49 | 0.50 |
| Peak intensity top skill | N/A | 0.17 | 0.66 | 0.36 | 0.26 |
| 1-week ahead top skill | N/A | 0.43 | 0.89 | 0.60 | 0.54 |
| 2-weeks ahead top skill | N/A | 0.36 | 0.76 | 0.46 | 0.37 |
| 3-weeks ahead top skill | N/A | 0.37 | 0.66 | 0.41 | 0.29 |
| 4-weeks ahead top skill | N/A | 0.35 | 0.58 | 0.38 | 0.26 |
| Overall top performing team | Columbia University | Delphi group, Carnegie Mellon University | Delphi group, Carnegie Mellon University | Delphi group, Carnegie Mellon University | Delphi group, Carnegie Mellon University |
*Skill scores for the 2016–17 and 2017–18 challenges have not been published. Results from the 2018–19 challenge are not complete as of August 2019
†The number of submitted forecasts does not include the unweighted average ensemble or historical average forecasts
**The logarithmic scoring rule used to determine forecast skill scores was not introduced until the second year of the challenge (2014–15). Skill scores for the challenge pilot (2013–14) are therefore not available
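The logarithmic scoring rule referenced in the footnotes above can be sketched as follows. A forecast assigns probabilities to outcome bins (for example, candidate peak weeks); the score is the log of the probability assigned to the bin that was eventually observed, and skill is the exponential of the average log score. The probability floor used here to avoid log(0) is an illustrative assumption (FluSight applies its own truncation rules), as are the toy forecasts:

```python
import math

def log_score(forecast_probs, observed_bin):
    """Log of the probability the forecast assigned to the observed
    outcome; probabilities below a small floor are truncated."""
    p = max(forecast_probs.get(observed_bin, 0.0), 1e-10)
    return math.log(p)

def skill(log_scores):
    """Skill: exp of the average log score, i.e. the geometric-mean
    probability assigned to the observed outcomes."""
    return math.exp(sum(log_scores) / len(log_scores))

# Toy example: two peak-week forecasts scored against observed weeks.
f1 = {"week 5": 0.6, "week 6": 0.3, "week 7": 0.1}
f2 = {"week 5": 0.2, "week 6": 0.5, "week 7": 0.3}
scores = [log_score(f1, "week 5"), log_score(f2, "week 6")]
print(round(skill(scores), 2))  # geometric mean of 0.6 and 0.5 -> 0.55
```

This makes the "top skill" entries in the table interpretable: a skill of 0.49 for peak week means the forecast assigned, on average (geometrically), about a 49% probability to the peak week that actually occurred.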
Fig. 3Predictions for national ILI percentage published for Week 52 through Week 3 (1-, 2-, 3-, and 4-weeks ahead, respectively) and associated 80% prediction interval
Glossary of terms commonly used in forecasting
| Forecasting term | Forecasting term definition |
|---|---|
| Ensemble model | A model that combines two or more individual models into a single forecast. |
| Epidemic Prediction Initiative | A CDC initiative launched in 2014 that aims at improving the science and usability of epidemic forecasts by facilitating open forecasting projects with specific public health objectives. |
| FluSight Challenge | A multi-participant competition that began during the 2013–14 influenza season (then called the “Predict the Influenza Season Challenge”) to forecast the timing, intensity, and short-term trajectory of the influenza season. |
| Forecast | A quantitative, probabilistic statement about an unobserved event, outcome, or trend and its surrounding uncertainty, conditional on previously observed data. |
| Forecast accuracy | A measurement of how well the forecast matched the outcome once it has been observed. There are a number of ways forecast accuracy can be measured, but CDC uses the logarithmic score. For more information regarding logarithmic score, please see the definition below. |
| Forecast calibration | An indicator of reliability in assigning probabilities. For FluSight forecasts, calibration is evaluated by assessing how often forecasts were correct. |
| Forecast confidence | A characterization of the uncertainty in a forecast. The Epidemic Prediction Initiative requires that forecast confidence be expressed as a probability (e.g., a 0.2 probability or 20% chance that the peak week of the influenza season will be on week 2). |
| Hindcast | Forecast of past conditions, also known as “pastcast.” For example, due to delays in reporting and data accrual, the FluSight forecast for ILI outpatient visits “one week ahead” is actually a forecast for the previous calendar week. |
| ILI | Influenza-like illness, fever and either a cough or sore throat. |
| ILINet | US Outpatient Influenza-like Illness Surveillance Network; a surveillance system that accrues weekly data on the number of patients with ILI and the total number of patients seen in healthcare settings, reported by outpatient healthcare providers in the United States. |
| Logarithmic score | The logarithm of the probability assigned to the observed outcome averaged across various forecasts (e.g., weeks, targets, and geographic regions). Used to measure the accuracy of a forecast. |
| Nowcast | Forecast of current conditions. For example, due to delays in reporting and data accrual, the FluSight forecast for ILI outpatient visits “two weeks ahead” is actually a forecast for the current calendar week. |
| Onset | The start of sustained disease activity. As a seasonal target for FluSight forecasts, it is defined as the first week when the percentage of visits for ILI reported through ILINet reaches or exceeds the baseline value for three consecutive weeks. No onset is a possible outcome. |
| Peak intensity | The maximum weekly or monthly value that disease activity reaches. As a seasonal target for FluSight forecasts, it is defined as the highest numeric value that the weighted ILINet percentage reaches during a season. |
| Peak week | The week that disease activity reaches its maximum. As a seasonal target for FluSight forecasts, it is defined as the week during the influenza season when the weighted ILINet percentage is the highest. More than one peak week is a possible outcome. |
| Reliability | A measure of how well the forecasted probability of an event occurring matches the observed outcome. Reliability answers the question whether a forecast that assigns a probability of 0.2 observes the forecasted event 20% of the time. This is also known as forecast calibration. |
| Retrospective forecast | A forecast of a past event (e.g., past influenza or dengue seasons) using data only from time periods prior to the event. |
| Skill | The average confidence (or probability) that was assigned to the observed outcome. |
| Seasonal target | Forecasts for the overall influenza season characteristics. These forecasts currently include the onset week, peak week, and peak intensity. |
| Short-term target | Forecasts for the near-term trajectory of the influenza season. These forecasts currently include forecasts for influenza activity one, two, three, and four weeks ahead from the date of data publication. |
| Target | The outcome that a forecast is predicting. |
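The ensemble-model entry in the glossary above can be sketched as a weighted average of probabilistic forecasts over the same outcome bins. The weights and toy forecasts below are hypothetical; in practice (as noted in the modeling-approaches table) weights may be derived from recent or historical performance:

```python
def weighted_ensemble(forecasts, weights):
    """Combine probabilistic forecasts defined over the same outcome
    bins into one forecast via a weighted average (weights sum to 1)."""
    bins = forecasts[0].keys()
    return {b: sum(w * f[b] for f, w in zip(forecasts, weights))
            for b in bins}

# Toy peak-week forecasts from two hypothetical component models.
model_a = {"week 5": 0.7, "week 6": 0.2, "week 7": 0.1}
model_b = {"week 5": 0.3, "week 6": 0.4, "week 7": 0.3}
ensemble = weighted_ensemble([model_a, model_b], weights=[0.5, 0.5])
# The combined probabilities still sum to 1.
```

Because the combination is a convex mixture of valid probability distributions, the ensemble is itself a valid forecast and can be scored with the same logarithmic rule as its components.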