
The United States COVID-19 Forecast Hub dataset.

Estee Y Cramer1, Yuxin Huang1, Yijin Wang1, Evan L Ray1, Matthew Cornell1, Johannes Bracher2,3, Andrea Brennen4, Alvaro J Castro Rivadeneira1, Aaron Gerding1, Katie House1, Dasuni Jayawardena1, Abdul Hannan Kanji1, Ayush Khandelwal1, Khoa Le1, Vidhi Mody1, Vrushti Mody1, Jarad Niemi5, Ariane Stark1, Apurv Shah1, Nutcha Wattanachit1, Martha W Zorn1, Nicholas G Reich6.

Abstract

Academic researchers, government agencies, industry groups, and individuals have produced forecasts at an unprecedented scale during the COVID-19 pandemic. To leverage these forecasts, the United States Centers for Disease Control and Prevention (CDC) partnered with an academic research lab at the University of Massachusetts Amherst to create the US COVID-19 Forecast Hub. Launched in April 2020, the Forecast Hub is a dataset with point and probabilistic forecasts of incident cases, incident hospitalizations, incident deaths, and cumulative deaths due to COVID-19 at the county, state, and national levels in the United States. Included forecasts represent a variety of modeling approaches, data sources, and assumptions regarding the spread of COVID-19. The goal of this dataset is to establish a standardized and comparable set of short-term forecasts from modeling teams. These data can be used to develop ensemble models, communicate forecasts to the public, create visualizations, compare models, and inform policies regarding COVID-19 mitigation. These open-source data are available via download from GitHub, through an online API, and through R packages.
© 2022. The Author(s).

Year: 2022    PMID: 35915104    PMCID: PMC9342845    DOI: 10.1038/s41597-022-01517-w

Source DB: PubMed    Journal: Sci Data    ISSN: 2052-4463    Impact factor: 8.501


Introduction

To understand how the COVID-19 pandemic would progress in the United States, dozens of academic research groups, government agencies, industry groups, and individuals produced probabilistic forecasts for COVID-19 outcomes starting in March 2020[1]. We collected forecasts from over 90 modeling teams in a data repository, thus making forecasts easily accessible for COVID-19 response efforts and forecast evaluation. The data repository is called the US COVID-19 Forecast Hub (hereafter, Forecast Hub) and was created through a partnership between the United States Centers for Disease Control and Prevention (CDC) and an academic research lab at the University of Massachusetts Amherst. The Forecast Hub was launched in early April 2020 and contains real-time forecasts of reported COVID-19 cases, hospitalizations, and deaths. As of May 3rd, 2022, the Forecast Hub had collected over 92 million individual point or quantile predictions contained within over 6,600 submitted forecast files from 110 unique models. The forecasts submitted each week reflected a variety of forecasting approaches, data sources, and underlying assumptions. There were no restrictions in place regarding the underlying information or code used to generate real-time forecasts. Each week, the latest forecasts were combined into an ensemble forecast (Fig. 1), and all recent forecast data were updated on an official COVID-19 Forecasting page hosted by the US CDC (https://www.cdc.gov/coronavirus/2019-ncov/science/forecasting/mathematical-modeling.html). The ensemble models were also used in the weekly reports that are posted on the Forecast Hub website, https://covid19forecasthub.org/doc/reports/.
Fig. 1

Time series of weekly incident deaths at the national level and forecasts from the COVID-19 Forecast Hub ensemble model for selected weeks in 2020 and 2021. Ensemble forecasts (blue) with 50%, 80% and 95% prediction intervals shown in shaded regions and the ground-truth data (black) for incident cases (A), incident hospitalizations (B), incident deaths (C) and cumulative deaths (D). The truth data come from JHU CSSE (panels A, C, D) and HealthData.gov (panel B).

Forecasts are quantitative predictions about data that will be observed at a future time. Forecasts differ from scenario-based projections, which examine feasible outcomes conditional on a variety of future assumptions. Because forecasts are unconditional estimates of data that will be observed in the future, they can be evaluated against eventual observed data. An important feature of the Forecast Hub is that submitted forecasts are time-stamped so the exact time at which a forecast was made public can be verified. In this way, the Forecast Hub serves as a public, independent registration system for these forecast model outputs. Data from the Forecast Hub have served as the basis for research articles for forecast evaluation[2] and forecast combination[3-5]. These studies can be used to determine how well models have performed at various points during the pandemic, which can, in turn, guide best practices for utilizing forecasts in practice and inform future forecasting efforts[2]. Teams submitted predictions in a structured format to facilitate data validation, storage, and analysis. Teams also submitted a metadata file and license for their model’s data. Forecast data, ground truth data from the Johns Hopkins University Center for Systems Science and Engineering (JHU CSSE)[6], New York Times (NYTimes)[7], USA Facts[8], and HealthData.gov[9] and model metadata were stored in the public Forecast Hub GitHub repository[10].
The forecasts were automatically synchronized with an online database called Zoltar via calls to a Representational State Transfer (REST) application programming interface (API)[11] every six hours (Fig. 2). Raw forecast data may be downloaded directly from GitHub or Zoltar via the covidHubUtils R package[12], the zoltr R package[13], or the zoltpy Python library[14].
Fig. 2

Schematic of the data storage and related infrastructure surrounding the COVID-19 Forecast Hub. (A) Forecasts are submitted to the COVID-19 Forecast Hub GitHub repository and undergo data format validation before being accepted into the system. (B) A continuous integration service ensures that the GitHub repository and PostgreSQL database stay in sync with mirrored versions of the data. (C) Truth data for visualization, evaluation, and ensemble building are retrieved once per week using both the covidHubUtils and the covidData R packages. Truth data are stored in both repositories. (D) Once per week, an ensemble forecast submission is made using the covidEnsembles R package. It is submitted to the GitHub repository and undergoes the same validation as other submissions. (E) Using the covidHubUtils R package, forecast and truth data may be extracted from either the GitHub or PostgreSQL database in a standard format for tasks such as scoring or plotting.

This dataset of real-time forecasts created during the COVID-19 pandemic can provide insights into the shortcomings and successes of predictions and improve forecasting efforts in years to come. Although these data are restricted to forecasts for COVID-19 in the United States, the structure of this dataset has been used to create datasets of COVID-19 forecasts in the EU and the UK, and longer-term scenario projections in the US[15-18]. The general structure of this data collection could be applied to additional diseases or forecasting outcomes in the future[11]. This large collaborative effort has provided data on short-term forecasts for over two years of forecasting efforts. Nearly all data were collected in real time and therefore are not subject to retrospective biases. The data are also openly available to the public, thus fostering a transparent, open science approach to support public health efforts.

Results

Data acquisition

In April 2020, the Reich Lab at the University of Massachusetts Amherst, in partnership with the US CDC, began collecting probabilistic forecasts of key COVID-19 outcomes in the United States (Table 1). The effort began by collecting forecasts of deaths and hospitalizations at the weekly and daily scales for the 50 US states and 5 additional jurisdictions (Washington DC, Puerto Rico, the US Virgin Islands, Guam, and the Northern Mariana Islands) as well as the aggregated US national level. In July 2020, daily-resolution forecasts for COVID-19 deaths were discontinued, and the effort expanded to include forecasts of weekly incident cases at the county, state, and national levels. Forecasts may include a point prediction and/or quantiles of a predictive distribution.
Table 1

Forecast characteristics for all four outcomes.

Outcome                   | Scale  | Locations               | Horizons stored | Quantiles | Earliest forecast date | First date of standardized truth data | Date of first ensemble forecast
Incident cases            | Weekly | County, State, National | 1-8 weeks       | 7         | 2020-07-05             | 2020-03-15                            | 2020-07-18
Incident hospitalizations | Daily  | State, National         | 1-130 days      | 23        | 2020-03-27             | 2020-11-16                            | 2020-12-05
Incident deaths           | Daily  | State, National         | 1-130 days      | 23        | 2020-03-15             | 2020-03-15                            | NA
Incident deaths           | Weekly | State, National         | 1-20 weeks      | 23        | 2020-03-15             | 2020-03-15                            | 2020-06-20
Cumulative deaths         | Daily  | State, National         | 1-130 days      | 23        | 2020-03-15             | 2020-03-15                            | NA
Cumulative deaths         | Weekly | State, National         | 1-20 weeks      | 23        | 2020-03-15             | 2020-03-15                            | 2020-04-13

The table shows the temporal scale, spatial scale of locations, horizons stored, number of quantiles, and dates of the earliest forecast, earliest standardized truth data, and earliest ensemble build.

Any team was eligible to submit data to the Forecast Hub provided they used the correct formatting. Upon initial submission of forecast data, teams were required to upload a metadata file that briefly described the methods used to create the forecasts and specified a license under which their forecast data were released. Individual model outputs are available under different licenses as specified in the GitHub data repository. No model code was stored in the Forecast Hub. During the first month of operation, members of the Forecast Hub team downloaded forecasts made available by teams publicly online, transformed these forecasts into the correct format (see Forecast format section), and pushed them into the Forecast Hub repository. Starting in May 2020, all teams were required to format and submit their own forecasts.

Repository structure

The dataset containing forecasts is stored in two locations, and all data can be accessed through either source. The first is the COVID-19 Forecast Hub GitHub repository, https://github.com/reichlab/covid19-forecast-hub, and the second is an online database, Zoltar, which can be accessed via a REST API[11]. Details about data access and format are documented in the subsequent sections. When accessing data through the Zoltar forecast repository REST API, subsets of submitted forecasts can be queried directly from a PostgreSQL database. This eliminates the need to access individual CSV files and facilitates access to versions of forecasts in cases when they were updated.

Forecast outcomes

The Forecast Hub dataset stores forecasts for four different outcomes: incident cases, incident hospitalizations, incident deaths, and cumulative deaths (Table 1). Incident case forecasts were first introduced several months after the Forecast Hub started and have several key differences from the other predicted outcomes. They are the only outcome for which the Forecast Hub accepts county-level forecasts in addition to state- and national-level forecasts. Since there are over 3,000 counties in the US, this required some compromises on the scale of data collected for these forecasts in other ways. Specifically, case forecasts may only be submitted for up to 8 weeks into the future, instead of up to 20 weeks for deaths, and are required to have fewer quantiles (seven) than the other outcomes, which can have up to twenty-three. This gives a coarser representation of the forecast (see the section on Forecast format below).

Forecast target dates

Weekly targets follow the standard of epidemiological weeks (EW) used by the CDC, which defines a week as starting on Sunday and ending on the following Saturday[19]. Forecasts of cumulative deaths target the number of cumulative deaths reported by the Saturday ending a given week. Forecasts of weekly incident cases or deaths target the difference between reported cumulative cases or deaths on consecutive Saturdays. As an example of a forecast and the corresponding observation, forecasts submitted between Tuesday, October 6, 2020 (day 3 of EW41) and Monday, October 12, 2020 (day 2 of EW42) contained a “1 week ahead” forecast of incident deaths that corresponded to the change in cumulative reported deaths observed in EW42 (i.e., the difference between the cumulative reported deaths on Saturday, October 17, 2020, and Saturday, October 10, 2020), and a “2 week ahead” forecast that corresponded to the change in cumulative reported deaths in EW43. In this paper, we refer to the “forecast week” of a submitted forecast as the week corresponding to a “0-week ahead” horizon. In the example above, the forecast week would be EW41. Daily incident hospitalization horizons refer to the number of reported hospitalizations a specified number of days after the forecast was generated.
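The epiweek arithmetic described above can be sketched in a few lines of Python; this helper is illustrative only (its name and interface are not part of the Hub's tooling).

```python
from datetime import date, timedelta

def target_end_date(forecast_date: date, horizon: int) -> date:
    """Return the Saturday ending the epiweek targeted by an 'N wk ahead' forecast.

    Epiweeks run Sunday through Saturday. Forecasts submitted on a Sunday or
    Monday treat the current epiweek as '1 week ahead'; forecasts submitted
    Tuesday through Saturday target the following epiweek first.
    """
    # Python's weekday(): Monday=0 ... Saturday=5, Sunday=6.
    days_to_saturday = (5 - forecast_date.weekday()) % 7
    first_saturday = forecast_date + timedelta(days=days_to_saturday)
    if forecast_date.weekday() not in (6, 0):  # Tue-Sat: roll to the next epiweek
        first_saturday += timedelta(days=7)
    return first_saturday + timedelta(weeks=horizon - 1)
```

With the example from the text, `target_end_date(date(2020, 10, 6), 1)` gives the Saturday ending EW42 (October 17, 2020), and a horizon of 2 gives the Saturday ending EW43.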

Summary of forecast data collected

In the initial weeks of submission, fewer than 10 models provided forecasts. As the pandemic spread, the number of teams submitting forecasts increased; as of May 3rd, 2022, 93 primary models, 9 secondary models, and 17 models with the designation “other” had been submitted to the Forecast Hub. Across all weeks, a median of 30 primary models (range: 14 to 39) contributed incident case forecasts (Fig. 3a), a median of 11 primary models (range: 1 to 16) contributed incident hospitalization forecasts (Fig. 3b), a median of 37 primary models (range: 1 to 49) contributed incident death forecasts (Fig. 3c), and a median of 35 primary models (range: 3 to 46) contributed cumulative death forecasts each week (Fig. 3d). At that date, the dataset contained 6,633 forecast files with 92,426,015 point or quantile predictions for unique combinations of targets and locations.
Fig. 3

Number of primary forecasts submitted for each outcome per week from April 27th, 2020 through May 3rd, 2022. In the initial weeks of submission, fewer than 10 models provided forecasts. Over time, the number of teams submitting forecasts for each forecasted outcome increased into early 2021 and then saw a small decline through the end of 2021, with some renewed interest in 2022.


Ensemble and baseline forecasts

Alongside the models submitted by individual teams, there are also baseline and ensemble models generated by the Forecast Hub and CDC. The COVIDhub-baseline model was created by the Forecast Hub in May 2020 as a benchmarking model. Its point forecast is the most recent observed value as of the forecast creation date, with a probability distribution around that point based on weekly differences in previous observations[2]. The baseline model initially produced forecasts for case and death outcomes; hospitalization baseline forecasts were added in September 2021. The COVIDhub-ensemble model combines forecasts submitted to the Forecast Hub. The ensemble produces forecasts of incident cases at a horizon of 1 week ahead, forecasts of incident hospitalizations at horizons up to 14 days ahead, and forecasts of incident and cumulative deaths at horizons up to 4 weeks ahead. Initially, the ensemble produced forecasts of incident cases at horizons of 1 to 4 weeks and incident hospitalizations at 1 to 28 days. However, in September 2021, due to the unreliability of incident case and hospitalization forecasts at horizons greater than 1 week (for cases) and 14 days (for hospitalizations), horizons past those respective thresholds were excluded from the COVIDhub-ensemble model, although they were still included in the COVIDhub-4_week_ensemble[20]. Other work details the methods used for determining the appropriate combination approach[3,4]. Starting in February 2021, GitHub tags were created to document the exact version of the repository used each week to create the COVIDhub-ensemble forecast. This creates an auditable trail in the repository so that the correct version of the forecasts used can be recovered even in cases when some forecasts were subsequently updated. The Forecast Hub also collaborates with the CDC on the production of three additional ensemble forecasts each week.
These are the COVIDhub-4_week_ensemble, COVIDhub-trained_ensemble, and COVIDhub_CDC-ensemble. The COVIDhub-4_week_ensemble produces forecasts of incident cases, incident deaths, and cumulative deaths at horizons of 1 through 4 weeks ahead, and forecasts of incident hospitalizations at horizons of 1 through 28 days ahead; it uses the equally weighted median of all component forecasts at each location, forecast horizon, and quantile level. The COVIDhub-trained_ensemble uses the same targets as the COVIDhub-4_week_ensemble but computes a weighted median of the ten component forecasts with the best performance, as measured by their weighted interval score (WIS), in the 12 weeks prior to the forecast date. The COVIDhub_CDC-ensemble pulls forecasts of cases and hospitalizations from the COVIDhub-4_week_ensemble and forecasts of deaths from the COVIDhub-trained_ensemble. The set of horizons that are included is updated regularly using rules developed by the CDC based on recent forecast performance. Several other models are also combinations of some or all models submitted to the Forecast Hub. As of May 3rd, 2022, these models are FDANIHASU-Sweight, JHUAPL-SLPHospEns, and KITmetricslab-select_ensemble. These models are flagged in the metadata using the Boolean metadata field “ensemble_of_hub_models”.
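The equally weighted, quantile-wise median combination described above can be sketched as follows; this is a minimal illustration of the idea, not the Hub's implementation (which lives in the covidEnsembles R package).

```python
from statistics import median

def median_ensemble(component_quantiles):
    """Equally weighted, quantile-wise median of component model forecasts.

    component_quantiles: one list of predictive quantiles per model, all at
    the same probability levels, for a single location/target/horizon.
    Returns the ensemble's quantiles at those same levels.
    """
    return [median(values) for values in zip(*component_quantiles)]
```

Because the median is taken independently at each quantile level, the ensemble is robust to individual component forecasts with extreme values.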

Use scenarios

R package covidHubUtils

We have developed the covidHubUtils R package, available at https://github.com/reichlab/covidHubUtils, to facilitate bulk retrieval of forecasts for analysis and evaluation. Examples of how to use the covidHubUtils package and its functions can be found at https://reichlab.io/covidHubUtils/. The package supports loading forecasts from a local clone of the GitHub repository or by querying data from Zoltar. It also supports common actions for working with the data, such as loading specific subsets of forecasts, plotting forecasts, scoring forecasts, and retrieving ground truth data, and provides many other utility functions to simplify working with the data.

Visualization of forecasts in the COVID-19 Forecast Hub

In addition to being accessible through an R package, forecasts can also be viewed through our public website, https://viz.covid19forecasthub.org/. Through this tool, viewers can select the outcome, location, prediction interval, issue date of the truth data, and the models of interest. This tool can be used to see forecasts for the upcoming weeks, qualitatively evaluate model performance in past weeks, or visualize past performance based on the data available at the time of forecasting (Fig. 4).
Fig. 4

Visualization tool updated weekly by the US COVID-19 Forecast Hub displays model forecasts and truth data at selected forecast dates, locations, forecast outcomes and PI levels. US national level incident death forecasts from 39 models are shown with point values and a 50% PI. These forecasts are for 1 through 4 week ahead horizons. Data used for forecasting were generated on July 24th, 2021. The visualization tool is available at: https://viz.covid19forecasthub.org.


Communicating results from the COVID-19 Forecast Hub

Communication of probabilistic forecasts to the public is challenging[21,22], and the best practices regarding the communication of outbreaks are still developing[23]. Starting in April 2020, the CDC published weekly summaries of these forecasts on their public website[24], and these forecasts were occasionally used in public briefings by the CDC Director[25]. Additional examples of the communication of Forecast Hub data can be viewed through weekly reports generated by the Forecast Hub team for dissemination to the general public, including state and local departments of health(https://covid19forecasthub.org/doc/reports/). On December 22nd, 2021, the CDC ceased communication of case forecasts due to low reliability of these forecasts (https://www.cdc.gov/coronavirus/2019-ncov/science/forecasting/forecasts-cases.html).

Discussion

We present here the US COVID-19 Forecast Hub, a data repository that stores structured forecasts of COVID-19 cases, hospitalizations, and deaths in the United States. The Forecast Hub is an important asset for visualizing, evaluating, and generating aggregate forecasts. It also demonstrates the highly collaborative nature of COVID-19 modeling efforts. This open-source data repository is beneficial for researchers, modelers, and casual viewers interested in forecasts of COVID-19. The website was viewed over half a million times in the first two years of the pandemic. The US COVID-19 Forecast Hub is a unique, large-scale, collaborative infectious disease modeling effort. The Forecast Hub emerged from years of collaborative modeling efforts that started as government-sponsored forecasting “challenges”. These collaborations are distinct from the modeling efforts of individual teams, as the Forecast Hub has created open collaborative systems that facilitate model collection, curation, comparison, and combination, often in direct collaboration with governmental public health agencies[26-28]. The Forecast Hub built on these past efforts by developing a new quantile-based data format as well as automated data submission and validation procedures. Additionally, the scale of the collaborative effort for the US COVID-19 Forecast Hub has exceeded prior COVID-19 forecasting efforts by an order of magnitude in terms of the number of participating teams and forecasts collected. Finally, the infrastructure developed for the US COVID-19 Forecast Hub has been adapted for use by a number of other modeling hubs, including the US COVID-19 Scenario Modeling Hub[17], the European COVID-19 Forecast Hub[15], the German/Polish COVID-19 Forecasting Hub[16], the German COVID-19 Hospitalization Nowcasting Hub[29], and the 2022 US CDC Influenza Hospitalization Forecasting challenge[30].
The Forecast Hub has played a critical role in collecting forecasts in a single format from over 100 different prediction models and making these data available to a wide variety of stakeholders during the COVID-19 pandemic. While some of these teams register their forecasts in other publicly available locations, many teams do not. Thus the Forecast Hub is the only location where many teams’ forecasts are available. In addition to curating data from other models, the Forecast Hub has also played a central role in synthesizing the outputs of models together. The Forecast Hub has generated an ensemble forecast, which has been used in official communications by the CDC, every week since April 2020. The ensemble model for incident deaths, a median aggregate of all other eligible models, was consistently the most accurate model when aggregated across forecast targets, weeks, and locations, even though it was rarely the single most accurate forecast for any single prediction[2]. The US COVID-19 Forecast Hub has built a specific set of open-source tools that have facilitated the development of operational stand-alone and ensemble forecasts for the pandemic. However, the structure of the tools is quite general and could be adapted for use in other real-time prediction efforts. Additionally, the Forecast Hub infrastructure and data described represent best practices for collecting, aggregating, and disseminating forecasts[31]. The US COVID-19 Forecast Hub has developed and operationalized one standardized forecast format, time-stamped submissions, open access, and a collection of tools to facilitate working with the data. The data in this hub will be useful in the future for continuing analysis and comparisons of forecasting methods. The data can also be used as an exploratory dataset for creating and testing novel models and methods for model analysis (e.g., new ways to create an ensemble or post hoc forecast calibration methods). 
Because the data serve as an open repository of the state of the art in infectious disease forecasting, they will also be helpful as a retrospective reference point for comparison when new forecasting models are developed. Model coordination efforts occur in many fields, including climate science[32], ecology[33], and space weather[34], among others, to inform policy decisions by curating many models and synthesizing their outputs and uncertainties. Such efforts ensure that individual model outputs may be easily compared to and assimilated with one another, and thus play a role in making scientific research more rigorous and transparent. As the use of advanced computational models becomes more commonplace in a wide range of scientific fields, model coordination projects and model output standardization efforts will play an increasingly important role in ensuring that policy makers can be provided with a unified set of model outputs.

Methods

Forecast assumptions

Forecasters used a variety of assumptions to build models and generate predictions. Forecasting approaches include statistical or machine learning models, mechanistic models incorporating disease transmission dynamics, and combinations of multiple approaches[2]. Teams have also included varying assumptions regarding future changes in policies and social distancing measures, the transmissibility of COVID-19, vaccination rates, and the spread of new virus variants throughout the United States.

Weekly submissions

A forecast submission consists of a single comma-separated value (CSV) file submitted via pull request to the GitHub repository. Forecast submissions are validated for technical accuracy and formatting (see below) using automated checks implemented by continuous integration servers before being merged. To be included in the weekly ensemble model, teams were required to submit their forecast on Sunday or prior to a deadline on Monday. The majority of teams contributing to the dataset submitted forecasts to the Forecast Hub repository on Sunday or Monday, although some teams submitted at other times depending on their model production schedule.

Exclusion criteria

No forecasts were excluded from the dataset due to the forecast values or the background experience of the forecasters. Forecast files were only rejected if they did not meet the automatic formatting criteria implemented through automatic GitHub checks[35]. These included checks to ensure that, among other criteria:
- A forecast file is submitted no more than two days after it has been created (to ensure forecasts submitted were truly prospective). The creation date is based on the date in the filename created by the submitting team.
- The forecast dates in the content of the file are in the format YYYY-MM-DD and must match the creation date.
- Quantile forecasts do not contain any quantiles at probability levels other than the required levels (see Forecast format section below).
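As an illustration, the timeliness and quantile-level checks might look like the sketch below. The function names are hypothetical (the actual validations run in the Hub's continuous-integration tooling), and the specific probability levels reproduce the Hub's documented 23-level set for death and hospitalization targets and 7-level subset for case targets.

```python
from datetime import date, timedelta

# 23 levels for death/hospitalization targets; 7-level subset for case targets.
LEVELS_23 = sorted({0.01, 0.025, 0.975, 0.99} | {round(0.05 * i, 2) for i in range(1, 20)})
LEVELS_7 = [0.025, 0.1, 0.25, 0.5, 0.75, 0.9, 0.975]

def check_timeliness(creation_date: date, submission_date: date) -> bool:
    """A file may arrive at most two days after the creation date in its filename."""
    return submission_date - creation_date <= timedelta(days=2)

def check_quantiles(levels, target: str) -> bool:
    """Quantile rows must use only the probability levels required for the target."""
    allowed = LEVELS_7 if "inc case" in target else LEVELS_23
    return set(levels) <= {round(l, 3) for l in allowed}
```

A file failing either check would be rejected at the pull-request stage rather than merged into the repository.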

Updates to files

To ensure that forecasting is done in real-time, all forecasts are required to be submitted to the Forecast Hub within 2 days of the forecast date, which is listed in a column within each forecast file. Although occasional late submissions were accepted through January 2021, the policy was updated to not accept late forecasts due to missed deadlines, updated modeling methods, or other reasons. Exceptions to this policy were made if there was a bug that affected the forecasts in the original submission or if a new team joined. If there was a bug, teams were required to submit a comment with their updated submission affirming that there was a bug and that the forecast was only produced using data that were available at the time of the original submission. In the case of updates to forecast data, both the old and updated versions of the forecasts can be accessed either through the GitHub commit history or through time-stamped queries of the forecasts in the Zoltar database. Note that an updated forecast can include “retracting” a particular set of predictions in the case when an initial forecast was not able to be updated. When new teams join the Forecast Hub, they can submit late forecasts if they can provide publicly available evidence that the forecasts were made in real-time (e.g., GitHub commit history).

Ground truth data

Data from the JHU CSSE dataset[36] are used as the ground truth data for cases and deaths. Data from the HealthData.gov system for state-level hospitalizations are used for the hospitalization outcome. JHU CSSE obtained counts of cases and deaths by collecting and aggregating reports from state and local health departments. HealthData.gov contains reports of hospitalizations assembled by the U.S. Department of Health and Human Services. Teams were encouraged to use these sources to build models. Although hospitalization forecasts were collected starting in March 2020, hospitalization data from HealthData.gov were only available later, and we started encouraging teams to target these data in November 2020. Some teams used alternate data sources, including the NYTimes, USAFacts, US Census data, and other signals[2]. Versions of truth data from JHU CSSE, USAFacts, and the NYTimes are stored in the GitHub repository. Previous reports of ground truth data for past time points were occasionally updated as new records became available, definitions of reportable cases, deaths, or hospitalizations changed, or errors in data collection were identified and corrected. These revisions to the data are sometimes quite substantial[35,36], and for purposes such as retrospective ensemble construction, it is necessary to use the data that would have been available in real-time. The historically versioned data can be accessed either through GitHub commit records, data versions released on HealthData.gov, or third-party tools such as the covidcast API provided by the Delphi group at Carnegie Mellon University or the covidData R package[37].
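The versioned truth data described above support "as of" queries: retrieving the value that would have been visible on a given date, before later revisions. A minimal sketch of the idea, with a hypothetical function and data layout rather than the covidData or covidcast interfaces:

```python
def value_as_of(revisions, as_of_date):
    """Return the most recently issued value available on as_of_date.

    revisions: list of (issue_date, value) pairs for a single observation,
    with ISO-format date strings so lexicographic order matches date order.
    Returns None if no revision had been issued by as_of_date.
    """
    available = [(issued, value) for issued, value in revisions if issued <= as_of_date]
    return max(available)[1] if available else None
```

For retrospective ensemble construction, applying such a query at each historical forecast date reproduces the data a model would actually have seen.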

Model designation

Each model stored in the repository must have a classification of “primary”, “secondary”, or “other”, and each team may have only one “primary” model. Teams submitting multiple models with similar forecasting approaches can use the designations “secondary” or “other” for the additional models. Models designated “primary” are included in evaluations, the weekly ensemble, and the visualization. The “secondary” label is designed for models that have a substantive methodological difference from a team’s “primary” model; models designated “secondary” are included only in the ensemble and the visualization. The “other” label is designed for models that are small variations on a team’s “primary” model; models designated “other” are not included in evaluations, the ensemble build, or the visualization.
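The inclusion rules above can be summarized in a small lookup table. This is a sketch of the stated policy, not code from the Hub itself:

```python
# Which Hub activities each model designation feeds into, per the rules above.
USAGE = {
    "primary":   {"evaluations", "ensemble", "visualization"},
    "secondary": {"ensemble", "visualization"},
    "other":     set(),
}

def included_in(designation: str, activity: str) -> bool:
    """Return True if a model with this designation is used for the activity."""
    if designation not in USAGE:
        raise ValueError(f"unknown designation: {designation!r}")
    return activity in USAGE[designation]
```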

GitHub repository data structure

Forecasts in the GitHub repository are available in subfolders organized by model. Folders are named with a team name and model name, and each folder includes a metadata file and forecast files. Forecast CSV files are named using the format “[forecast_date]-[team_abbr]-[model_abbr].csv”. In these files, each row contains data for a single outcome, location, horizon, and point or quantile prediction, as described above. The metadata file for each team, named using the format “metadata-[team_abbr]-[model_abbr].txt”, contains relevant information about the team and the model that the team is using to generate forecasts.
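A filename in this scheme can be split back into its components mechanically. The sketch below assumes the forecast date is written as YYYY-MM-DD and that team and model abbreviations contain no hyphens (consistent with the metadata rules described later); the example filename is illustrative:

```python
import re

# Assumed pattern: YYYY-MM-DD-[team_abbr]-[model_abbr].csv, where the
# team and model abbreviations contain no hyphens.
FORECAST_FILE = re.compile(
    r"^(?P<forecast_date>\d{4}-\d{2}-\d{2})"
    r"-(?P<team_abbr>[^-]+)-(?P<model_abbr>[^-]+)\.csv$"
)

def parse_forecast_filename(name: str) -> dict:
    """Split a forecast CSV filename into its date, team, and model parts."""
    m = FORECAST_FILE.match(name)
    if m is None:
        raise ValueError(f"not a valid forecast filename: {name}")
    return m.groupdict()

parts = parse_forecast_filename("2021-01-04-UMass-MechBayes.csv")
# → {'forecast_date': '2021-01-04', 'team_abbr': 'UMass', 'model_abbr': 'MechBayes'}
```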

Forecast format

Forecasts were required to be submitted as point predictions and/or quantile predictions. Point predictions represent single “best” predictions with no uncertainty, typically the mean or median of the model’s predictive distribution. Quantile predictions are an efficient format for storing predictive distributions over a wide range of outcomes. Quantile representations of predictive distributions lend themselves to natural computations of, for example, the pinball loss or the weighted interval score, both proper scoring rules that can be used to evaluate forecasts[38]. However, they do not capture the structure of the tails of the predictive distribution beyond the reported quantiles, and they do not preserve information about correlation structures between different outcomes. The forecast data in this dataset are stored in seven columns:

forecast_date - the date the forecast was made, in the format YYYY-MM-DD.

target - a character string giving the outcome and the number of days/weeks ahead being forecast (the horizon). Targets must be one of the following:
- “N wk ahead cum death”, where N is a number between 1 and 20
- “N wk ahead inc death”, where N is a number between 1 and 20
- “N wk ahead inc case”, where N is a number between 1 and 8
- “N day ahead inc hosp”, where N is a number between 0 and 130

target_end_date - a character string giving the date of the forecast target, in the format YYYY-MM-DD. For “N day ahead” targets, target_end_date is N days after forecast_date. For “N wk ahead” targets, target_end_date is the Saturday at the end of the specified epidemic week, as described above.

location - a character string containing a Federal Information Processing Standards (FIPS) code identifying a U.S. state, county, territory, or district, or “US” for national forecasts.
The values for the FIPS codes are available in a CSV file in the repository and, for convenience, as a data object in the covidHubUtils R package.

type - a character value of “point” or “quantile”, indicating whether the row corresponds to a point forecast or a quantile forecast.

quantile - the probability level for a quantile forecast. For death and hospitalization forecasts, teams can submit quantiles at 23 probability levels: 0.01, 0.025, 0.05, 0.10, 0.15, …, 0.95, 0.975, and 0.99. For case forecasts, teams can submit up to 7 quantiles, at levels 0.025, 0.100, 0.250, 0.500, 0.750, 0.900, and 0.975. If the forecast type is “point”, the value in the quantile column is “NA”.

value - a non-negative number giving the point or quantile prediction for the row. For a point prediction, the value is simply the model’s point prediction for the target and location associated with that row. For a quantile prediction, the model predicts that the eventual observation will be less than or equal to this value with the probability given by the quantile level.
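To make the quantile representation concrete, the sketch below uses a hypothetical set of submitted quantiles for one target and location. It reads a central prediction interval off the quantile levels and scores one predicted quantile against an eventual observation using the pinball loss mentioned above; all numbers are made up:

```python
def central_interval(quantile_preds: dict, level: float = 0.95) -> tuple:
    """Read a central prediction interval off submitted quantiles; e.g.,
    the 95% interval spans the 0.025 and 0.975 quantiles."""
    alpha = (1 - level) / 2
    return quantile_preds[round(alpha, 3)], quantile_preds[round(1 - alpha, 3)]

def pinball_loss(y: float, q: float, tau: float) -> float:
    """Pinball (quantile) loss of predicted quantile q at level tau, given
    the eventual observation y. Averaged over a forecast's quantile levels,
    this is closely related to the weighted interval score."""
    return tau * (y - q) if y >= q else (1 - tau) * (q - y)

# Hypothetical quantile predictions for one target and location.
preds = {0.025: 120.0, 0.25: 180.0, 0.5: 210.0, 0.75: 240.0, 0.975: 320.0}

lower, upper = central_interval(preds)        # → (120.0, 320.0)
score = pinball_loss(250.0, preds[0.5], 0.5)  # observation lands above the median
```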

Metadata format

Each team documents their model information in a metadata file, which is required along with the first forecast submission. Each team is asked to record their model’s design and assumptions, the model contributors, the team’s website, information regarding the team’s data sources, and a brief model description. Teams may update their metadata file periodically to keep track of minor changes to a model. A standard metadata file is a YAML file with the following required fields, in a specific order:

team_name - the name of the team (fewer than 50 characters).

model_name - the name of the model (fewer than 50 characters).

model_abbr - an abbreviated, unique identifier for the model, fewer than 30 alphanumeric characters. The model abbreviation must be in the format “[team_abbr]-[model_abbr]”, where [team_abbr] and [model_abbr] are each text strings of fewer than 15 alphanumeric characters that include no hyphens or whitespace.

model_contributors - a list of all individuals involved in the forecasting effort, with affiliations and email addresses. At least one contributor needs to have a valid email address. The syntax of this field should be: name1 (affiliation1) <email1>, name2 (affiliation2) <email2>.

website_url* - a URL to a website that has additional data about the model. We encourage teams to submit the most user-friendly version of the model, e.g., a dashboard or similar display of the model’s forecasts. If there is an additional data repository where forecasts and other model code are stored, it can be included in the methods section. If only a more technical site (e.g., a GitHub repository) exists, that link should be included here.

license - one of the license types accepted by the Forecast Hub. We encourage teams to submit under “cc-by-4.0” to allow the broadest possible use, including private vaccine production (which would be excluded by the “cc-by-nc-4.0” license).
If the value is “LICENSE.txt”, a LICENSE.txt file providing the license must exist within the model folder.

team_model_designation - upon initial submission, one of “primary”, “secondary”, or “other”.

methods - a brief description of the forecasting methodology (fewer than 200 characters).

ensemble_of_hub_models - a Boolean value (“true” or “false”) indicating whether the model combines multiple hub models into an ensemble.

*In earlier versions of the metadata files, this field was named model_output.

Teams are also encouraged to add model information in the optional fields described below:

institution_affil - university or company names, if relevant.

team_funding - like an acknowledgement in a manuscript, teams can acknowledge funding here.

repo_url - a GitHub repository URL or similar.

twitter_handles - one or more Twitter handles (without the @), separated by commas.

data_inputs - a description of the data sources used to inform the model and the truth data targeted by the model’s forecasts. Common data sources are NYTimes, JHU CSSE, COVIDTracking, Google mobility, HHS hospitalizations, etc. An example description could be: “case forecasts use NYTimes data and target JHU CSSE truth data; hospitalization forecasts use and target HHS hospitalization data”.

citation - a URL (DOI link preferred) to an extended description of the model, e.g., a blog post, website, preprint, or peer-reviewed manuscript.

methods_long - an extended description of the methods used in the model. If the model is modified, this field can be used to record the date and a description of the change.
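The model_abbr naming rule lends itself to a mechanical check. The sketch below encodes the stated constraints (one hyphen joining two alphanumeric parts, each under 15 characters, under 30 characters total); it is an illustration, not the Hub's actual validation code, and the example abbreviations are illustrative:

```python
import re

# Each part: alphanumeric only, fewer than 15 characters, no hyphen/whitespace.
PART = re.compile(r"^[A-Za-z0-9]{1,14}$")

def valid_model_abbr(model_abbr: str) -> bool:
    """Check the '[team_abbr]-[model_abbr]' naming rule from the metadata format."""
    if len(model_abbr) >= 30 or model_abbr.count("-") != 1:
        return False
    team, model = model_abbr.split("-")
    return bool(PART.match(team)) and bool(PART.match(model))
```

For example, "UMass-MechBayes" passes, while "UMass MechBayes" (no hyphen) and "too-many-hyphens" (two hyphens) fail.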

Technical Validations

Two similar but distinct validation processes were used to validate data on the GitHub repository and on Zoltar.

Validations during data submission

Validations were set up using GitHub Actions to manage continuous integration and automated data checking[35]. Teams submitted their metadata files and forecasts through pull requests on GitHub. Each time a new pull request was submitted, a validation script ran on all new or updated files in the pull request to test their validity, with separate checks for metadata file changes and forecast data file changes. The metadata file for each team was required to be in valid YAML format, and a set of specific checks had to pass before a new metadata file could be merged into the repository. These checks ensured that all metadata files follow the rules outlined in the Metadata Format section, that proposed team and model names do not conflict with existing names, that a valid license for data reuse is specified, and that a valid model designation is present. Additionally, each team must keep its files in a folder named consistently with its model_abbr and must have only one primary model. New or changed forecast data files for each team were required to pass a series of checks for data formatting and validity. These checks also ensured that the forecast data files did not meet any of the exclusion criteria (see the Methods section for the specific rules). Each forecast file is subject to the validation rules documented at: https://github.com/reichlab/covid19-forecast-hub/wiki/Forecast-Checks.
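One natural check on forecast data of this kind, sketched below under the assumption that it resembles the documented Forecast-Checks rules, is that predicted values must not decrease as the quantile level increases for a given target and location (i.e., quantiles must not cross):

```python
def quantiles_non_decreasing(rows: list) -> bool:
    """Illustrative validation check: within a single target and location,
    predicted values must be non-decreasing in the quantile level.
    `rows` is a list of (quantile_level, value) pairs."""
    values = [v for _, v in sorted(rows)]
    return all(a <= b for a, b in zip(values, values[1:]))
```

A file where the 0.5 quantile exceeds the 0.975 quantile, for instance, would fail this check before ever reaching the repository.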

Validations on Zoltar

When a new forecast file is uploaded to Zoltar, unit tests are run on the file to ensure that forecast elements contain a valid structure. (For a detailed specification of the structure of forecast elements, see https://docs.zoltardata.com/validation/.) If a forecast file does not pass all unit tests, the upload will fail and the forecast file will not be added to the database; only when all tests pass will the new forecast be added to Zoltar. The validations in place on GitHub ensure that only valid forecasts will be uploaded to Zoltar.

Truth data

Raw truth data from multiple sources, including JHU CSSE, NYTimes, USAFacts, and HealthData.gov, were downloaded and reformatted using scripts in the R packages covidHubUtils (https://github.com/reichlab/covidHubUtils) and covidData (https://github.com/reichlab/covidData). This data-generating process is automated with GitHub Actions every week, and the results (called “truth data”) are uploaded directly to the Forecast Hub repository and Zoltar. Specifically, the raw case and death truth data are aggregated to a weekly level, and all three outcomes (cases, deaths, and hospitalizations) are reformatted for use within the Forecast Hub.
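The weekly aggregation step can be illustrated with a minimal sketch: daily incident counts are summed into epidemic weeks ending on Saturday, matching the target_end_date convention for weekly targets. This is a simplified stand-in for the covidData pipeline, with made-up counts:

```python
from datetime import date, timedelta

def week_ending_saturday(d: date) -> date:
    """Map a date to the Saturday that closes its epidemic week
    (epidemic weeks run Sunday through Saturday)."""
    # Python weekday(): Monday=0 ... Saturday=5, Sunday=6.
    return d + timedelta(days=(5 - d.weekday()) % 7)

def weekly_totals(daily_counts: dict) -> dict:
    """Aggregate daily incident counts into weekly totals, keyed by the
    Saturday ending each epidemic week."""
    weekly = {}
    for d, count in daily_counts.items():
        key = week_ending_saturday(d)
        weekly[key] = weekly.get(key, 0) + count
    return weekly
```

For example, counts reported on Sunday 2021-01-03 through Saturday 2021-01-09 all roll up into the week ending 2021-01-09.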
References

1.  "A 30% chance of rain tomorrow": how does the public understand probabilistic weather forecasts?

Authors:  Gerd Gigerenzer; Ralph Hertwig; Eva van den Broek; Barbara Fasolo; Konstantinos V Katsikopoulos
Journal:  Risk Anal       Date:  2005-06       Impact factor: 4.000

2.  Collaborative Hubs: Making the Most of Predictive Epidemic Modeling.

Authors:  Nicholas G Reich; Justin Lessler; Sebastian Funk; et al.
Journal:  Am J Public Health       Date:  2022-04-14       Impact factor: 11.561

3.  Covid-19 pandemic and the unprecedented mobilisation of scholarly efforts prompted by a health crisis: Scientometric comparisons across SARS, MERS and 2019-nCoV literature.

Authors:  Milad Haghani; Michiel C J Bliemer
Journal:  Scientometrics       Date:  2020-09-21       Impact factor: 3.238

4.  Modeling of Future COVID-19 Cases, Hospitalizations, and Deaths, by Vaccination Rates and Nonpharmaceutical Intervention Scenarios - United States, April-September 2021.

Authors:  Rebecca K Borchering; Cécile Viboud; Emily Howerton; et al.
Journal:  MMWR Morb Mortal Wkly Rep       Date:  2021-05-14       Impact factor: 35.301

5.  Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States.

Authors:  Estee Y Cramer; Evan L Ray; Velma K Lopez; et al.
Journal:  Proc Natl Acad Sci U S A       Date:  2022-04-08       Impact factor: 12.779

6.  The Zoltar forecast archive, a tool to standardize and store interdisciplinary prediction research.

Authors:  Nicholas G Reich; Matthew Cornell; Evan L Ray; Katie House; Khoa Le
Journal:  Sci Data       Date:  2021-02-11       Impact factor: 6.444

7.  Evaluating epidemic forecasts in an interval format.

Authors:  Johannes Bracher; Evan L Ray; Tilmann Gneiting; Nicholas G Reich
Journal:  PLoS Comput Biol       Date:  2021-02-12       Impact factor: 4.779

8.  An open repository of real-time COVID-19 indicators.

Authors:  Alex Reinhart; Logan Brooks; Maria Jahja; et al.
Journal:  Proc Natl Acad Sci U S A       Date:  2021-12-21       Impact factor: 12.779

9.  Combining probabilistic forecasts of COVID-19 mortality in the United States.

Authors:  James W Taylor; Kathryn S Taylor
Journal:  Eur J Oper Res       Date:  2021-06-28       Impact factor: 6.363

10.  Accuracy of real-time multi-model ensemble forecasts for seasonal influenza in the U.S.

Authors:  Nicholas G Reich; Craig J McGowan; Teresa K Yamana; et al.
Journal:  PLoS Comput Biol       Date:  2019-11-22       Impact factor: 4.475
