Mark J Panaggio1, Kaitlin Rainwater-Lovett2, Paul J Nicholas2, Mike Fang2, Hyunseung Bang2, Jeffrey Freeman2, Elisha Peterson2, Samuel Imbriale3.
Abstract
During the COVID-19 pandemic, concerns about hospital capacity in the United States led to a demand for models that forecast COVID-19 hospital admissions. These short-term forecasts were needed to support planning efforts by providing decision-makers with insight into future demands for health care capacity and resources. We present a SARIMA time-series model called Gecko developed for this purpose. We evaluate its historical performance using metrics such as mean absolute error, predictive interval coverage, and weighted interval scores, and compare it to alternative hospital admission forecasting models. We find that Gecko outperformed baseline approaches and was among the most accurate models for forecasting hospital admissions at the state and national levels from January to May 2021. This work suggests that simple statistical methods can provide a viable alternative to traditional epidemic models for short-term forecasting.
Keywords: COVID-19; Coronavirus disease; Forecasting; Hospitalization; SARIMA; SARS-CoV-2; Time-series model
Year: 2022 PMID: 35636313 PMCID: PMC9124631 DOI: 10.1016/j.epidem.2022.100580
Source DB: PubMed Journal: Epidemics ISSN: 1878-0067 Impact factor: 5.324
Fig. 1. Confirmed COVID-19 hospital admissions (U.S.) and weekly forecasts from the Gecko model. Only forecasts for the next 7 days are shown for clarity. The black curve represents the observed totals and the colored curves represent each forecast, with shaded bands representing the 50% (dark) and 95% (light) predictive intervals.
Parameters and hyperparameters for SARIMA model. The hyperparameters (rows 1–7) were selected using a grid search to optimize the fit to historical data as measured by the Akaike Information Criterion (Akaike, 1998) and were shared across all states to avoid over-fitting. The parameters (rows 8–11) were fitted using maximum likelihood estimation and were refitted to the available historical data for each state each time forecasts were produced.
| Parameter | Description | Fitting method | Search space | Selected value |
|---|---|---|---|---|
| p | autoregressive order | grid search | | 1 |
| d | difference order | grid search | | 1 |
| q | moving average order | grid search | | 0 |
| P | seasonal autoregressive order | grid search | | 1 |
| D | seasonal difference order | grid search | | 1 |
| Q | seasonal moving average order | grid search | | 0 |
| s | seasonal period | fixed | 7 | 7 |
| φ | autoregressive parameter | MLE | | varies |
| Φ | seasonal autoregressive parameter | MLE | | varies |
| σ²ₚ | variance of process noise | MLE | | varies |
| σ²ₘ | variance of measurement noise | MLE | | varies |
Fig. 2. Comparison of forecasted and observed confirmed COVID-19 hospital admissions by state according to the Gecko model. The left panel shows the raw number of admissions and the right panel shows the change in admissions over a 7-day horizon.
Fig. 3. MAE (top), MAPE (middle), and coverage (bottom) for 7-day hospital admission forecasts from the Gecko model by state. States are sorted by MAPE.
Average MAE for confirmed COVID-19 hospital admission forecasts. The ten models listed are those that consistently provided forecasts for inclusion in the COVIDhub-ensemble (bottom row). Displayed values indicate the average MAE over all weeks where forecasts were available; the rank is listed in parentheses.
| Model | National, 7-day | National, 14-day | National, 21-day | State, 7-day | State, 14-day | State, 21-day |
|---|---|---|---|---|---|---|
| COVID19Sim-Simulator | 2817.3 (10) | 3108.4 (10) | 3771.3 (10) | 73.8 (10) | 76.7 (10) | 84.0 (10) |
| CU-nochange | 1074.3 (6) | 1712.7 (6) | 2702.1 (9) | 28.0 (4) | 40.5 (3) | 61.8 (8) |
| GT-DeepCOVID | 1069.9 (5) | 1745.6 (8) | 2536.3 (7) | 27.8 (3) | 41.2 (6) | 60.1 (7) |
| JHUAPL-BUCKY | 996.0 (4) | | | 45.4 (7) | 50.5 (8) | 58.0 (6) |
| JHUAPL-GECKO | | 1175.7 (2) | 1950.5 (4) | | 37.6 (2) | 53.9 (5) |
| Karlen-pypm | 986.1 (3) | 1661.6 (5) | 2547.9 (8) | 25.4 (2) | | 51.4 (3) |
| LANL-GrowthRate | 2222.3 (9) | 2149.7 (9) | 1979.7 (5) | 52.2 (8) | 48.3 (7) | |
| MOBS-GLEAM_COVID | 877.7 (2) | 1253.4 (3) | 1794.8 (3) | 32.3 (6) | 40.9 (5) | 52.3 (4) |
| UCLA-SuEIR | 1392.6 (8) | 1343.0 (4) | 1508.7 (2) | 66.0 (9) | 63.8 (9) | 62.1 (9) |
| USC-SI_kJ | 1107.1 (7) | 1717.4 (7) | 2103.1 (6) | 28.5 (5) | 40.6 (4) | 49.3 (2) |
| COVIDhub-ensemble | 825.3 | 1213.6 | 1755.8 | 22.6 | 30.8 | 41.6 |
Average WIS for hospitalization forecasts between January 11, 2021 and May 31, 2021. The ten models listed are those that consistently provided forecasts for inclusion in the COVIDhub-ensemble (bottom row). Displayed values indicate the average WIS over all weeks where forecasts were available; the rank is listed in parentheses. WIS values are calculated using the median and the 50% and 95% predictive intervals.
| Model | National, 7-day | National, 14-day | National, 21-day | State, 7-day | State, 14-day | State, 21-day |
|---|---|---|---|---|---|---|
| COVID19Sim-Simulator | 2342.7 (10) | 2606.8 (10) | 3200.5 (10) | 63.7 (10) | 64.9 (10) | 70.8 (10) |
| CU-nochange | 847.3 (6) | 1149.2 (7) | 1776.5 (9) | 20.3 (4) | 27.9 (5) | 41.9 (8) |
| GT-DeepCOVID | 627.3 (4) | 1094.1 (6) | 1651.9 (7) | 17.8 (3) | 27.0 (4) | 41.7 (7) |
| JHUAPL-BUCKY | 690.3 (5) | | | 29.7 (7) | 33.0 (7) | 37.6 (5) |
| JHUAPL-GECKO | | 742.7 (2) | 1184.1 (4) | 16.8 (2) | 25.0 (2) | 34.1 (4) |
| Karlen-pypm | 561.4 (2) | 938.5 (4) | 1549.0 (6) | | | |
| LANL-GrowthRate | 1373.7 (9) | 1214.4 (8) | 1139.1 (3) | 34.0 (8) | 32.0 (6) | 31.7 (2) |
| MOBS-GLEAM_COVID | 606.8 (3) | 768.8 (3) | 1023.1 (2) | 20.7 (5) | 25.6 (3) | 32.5 (3) |
| UCLA-SuEIR | 1064.8 (8) | 1004.9 (5) | 1196.8 (5) | 59.4 (9) | 55.7 (9) | 53.3 (9) |
| USC-SI_kJ | 887.3 (7) | 1402.3 (9) | 1656.4 (8) | 24.4 (6) | 34.4 (8) | 41.0 (6) |
| COVIDhub-ensemble | 460.5 | 733.9 | 1065.2 | 13.6 | 18.6 | 24.9 |
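The WIS values above combine the median with the 50% and 95% predictive intervals. A minimal sketch of the standard weighted interval score (as defined by Bracher et al. for the COVID-19 Forecast Hub) for that interval set, with hypothetical forecast numbers:

```python
def interval_score(lower, upper, alpha, y):
    """Interval score for a central (1 - alpha) predictive interval."""
    score = upper - lower  # sharpness
    if y < lower:          # penalize under-coverage below
        score += (2.0 / alpha) * (lower - y)
    elif y > upper:        # penalize under-coverage above
        score += (2.0 / alpha) * (y - upper)
    return score

def wis(median, intervals, y):
    """Weighted interval score from a median and {alpha: (lower, upper)}.

    For this paper's setup, intervals = {0.50: ..., 0.05: ...},
    i.e. the 50% and 95% predictive intervals.
    """
    k = len(intervals)
    total = 0.5 * abs(y - median)
    for alpha, (lower, upper) in intervals.items():
        total += (alpha / 2.0) * interval_score(lower, upper, alpha, y)
    return total / (k + 0.5)

# Hypothetical forecast: median 100, 50% PI (90, 110), 95% PI (70, 130),
# observed value 104.
score = wis(100.0, {0.50: (90.0, 110.0), 0.05: (70.0, 130.0)}, 104.0)
```

Lower scores are better; the WIS reduces to the absolute error when no intervals are supplied, which is why it is often read as a generalization of MAE to probabilistic forecasts.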
Fig. 4. Percentage of forecasts that outperform a lagged baseline model according to WIS. Horizons of 5, 6, 12, 13, 19, and 20 days correspond to weekends, which generally have noticeably lower hospital admission totals than the Monday forecast date, leading to worse baseline performance for those horizons.
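The weekday effect described in the caption can be illustrated with a small experiment on synthetic data. The snippet below compares a persistence baseline (carry the forecast-date value forward) against a seasonal-naive alternative (use the same weekday one week earlier) at a "weekend" horizon; both models and the data are illustrative assumptions, not the paper's Gecko model or its baseline implementation.

```python
import numpy as np

# Synthetic admissions with a weekly dip (illustrative only).
rng = np.random.default_rng(1)
t = np.arange(140)
y = 50 + 5 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 1, t.size)

horizon = 5  # a weekend horizon relative to a Monday forecast date

idx = range(7, t.size - horizon)
# Persistence baseline: carry the forecast-date (Monday) value forward.
base_err = np.array([abs(y[i + horizon] - y[i]) for i in idx])
# Seasonal-naive model: use the same weekday one week earlier.
seas_err = np.array([abs(y[i + horizon] - y[i + horizon - 7]) for i in idx])

# Fraction of forecast dates where the weekday-aware model wins.
pct_better = 100 * np.mean(seas_err < base_err)
```

Because the baseline projects a Monday value onto a weekend day, it systematically misses the weekly dip, so any model that respects the seasonal period tends to beat it at these horizons, consistent with the pattern in Fig. 4.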