| Literature DB >> 35845843 |
Paul Chong (1), Byung-Jun Yoon (2,3), Debbie Lai (4,5), Michael Carlson (4,6), Jarone Lee (7), Shuhan He (8).
Abstract
Covid Act Now (CAN) developed an epidemiological model that takes various non-pharmaceutical interventions (NPIs) into account and predicts viral spread and subsequent health outcomes. In this study, the projections of the model developed by CAN were back-tested against real-world data, and it was found that the model consistently overestimated hospitalizations and deaths by 25%-100% and 70%-170%, respectively, due in part to an underestimation of the efficacy of NPIs. Other COVID models were also back-tested against historical data, and it was found that all models generally captured the potential magnitude and directionality of the pandemic in the short term. There are limitations to epidemiological models, but understanding these limitations enables these models to be utilized as tools for data-driven decision-making in viral outbreaks. Further, it can be valuable to have multiple, independently developed models to mitigate the inaccuracies of, or to correct for the incorrect assumptions made by, a particular model.
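The keywords identify the CAN model as an SEIR-type model. As background for readers unfamiliar with that model class, a minimal discrete-time SEIR sketch is shown below; the compartment structure is standard, but the parameter values (beta, sigma, gamma, population size) are illustrative assumptions and do not reproduce the actual CAN model, which additionally accounts for NPIs.

```python
# Minimal discrete-time SEIR sketch (illustrative only; the real CAN model's
# structure, parameters, and NPI adjustments are not reproduced here).
def seir_step(s, e, i, r, beta, sigma, gamma, n):
    """Advance the S, E, I, R compartments by one time step (one day)."""
    new_exposed = beta * s * i / n   # susceptible -> exposed (transmission)
    new_infectious = sigma * e       # exposed -> infectious (incubation ends)
    new_recovered = gamma * i        # infectious -> recovered (or removed)
    return (s - new_exposed,
            e + new_exposed - new_infectious,
            i + new_infectious - new_recovered,
            r + new_recovered)

def simulate(days, n=1_000_000, i0=100, beta=0.3, sigma=0.2, gamma=0.1):
    """Run the sketch forward and return the list of daily compartment states."""
    s, e, i, r = float(n - i0), 0.0, float(i0), 0.0
    trajectory = [(s, e, i, r)]
    for _ in range(days):
        s, e, i, r = seir_step(s, e, i, r, beta, sigma, gamma, n)
        trajectory.append((s, e, i, r))
    return trajectory
```

Because the compartments only exchange population with one another, the total population is conserved at every step, which is a useful sanity check when back-testing a model of this kind.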
Keywords: COVID-19; COVID-19 SEIR; COVID-19 epidemiological model; COVID-19 model; COVID-19 non-pharmaceutical interventions; COVID-19 vaccination; SEIR model; data science; epidemiological model
Year: 2022 PMID: 35845843 PMCID: PMC9278499 DOI: 10.1016/j.patter.2022.100492
Source DB: PubMed Journal: Patterns (N Y) ISSN: 2666-3899
Performance of successive CAN models
| Model | RMSE for hospitalizations | RMSE for deaths | 2-week RMSE for hospitalizations | 2-week RMSE for deaths |
|---|---|---|---|---|
| 3.19 | 44.65 | 72.55 | 27.98 | 25.22 |
| 3.31 | 85.93 | 51.22 | 54.90 | 24.54 |
| 4.09 | 73.07 | 51.74 | 82.91 | 47.47 |
| 4.14 | 10.30 | 17.61 | 17.15 | 24.35 |
Successive iterations of the CAN epidemiological model were evaluated to compare performance both across consecutive model versions and over 2-week prediction windows. The analysis shows improving performance in newer models, evidenced by the generally decreasing RMSE (root-mean-square error) of hospitalization and death predictions across successive iterations. RMSE compares predicted values with known values, with smaller RMSE values indicating closer agreement between predictions and observations. RMSE was calculated with model iteration predictions in 3-week intervals starting from March 5th, March 25th, April 14th, and May 4th, respectively.
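The RMSE metric used throughout these tables can be computed directly from paired prediction and observation series; a short sketch (the numbers in the usage comment are illustrative, not the study's data):

```python
import math

def rmse(predicted, observed):
    """Root-mean-square error between paired predictions and observations."""
    if len(predicted) != len(observed):
        raise ValueError("series must be the same length")
    return math.sqrt(
        sum((p - o) ** 2 for p, o in zip(predicted, observed)) / len(predicted)
    )

# Illustrative usage with made-up daily hospitalization counts:
# rmse([100, 120, 140], [110, 118, 131])
```

Squaring the errors before averaging penalizes large misses more heavily than small ones, which is why a model that occasionally overshoots badly can show a much larger RMSE than one with small but consistent errors.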
Comparison of model predictions of California COVID-19 hospitalizations
| CAN iteration | RMSE of predictions | IHME iteration | RMSE of predictions |
|---|---|---|---|
| | 1,528.02 | | 1,764.53 |
| | 1,518.35 | | 2,241.23 |
| | 1,326.82 | | 2,667.21 |
| | 1,145.39 | | 2,633.40 |
| | 4,299.12 | | 2,791.76 |
The CAN and IHME models were the only models to predict COVID-19 hospitalizations, and their respective performances are displayed above for comparison purposes. The generally superior performance of the CAN model can be seen in its lower RMSEs (root-mean-square errors) compared with those of IHME. RMSE compares predicted values with known values, with smaller RMSE values indicating closer agreement between predictions and observations. Predictions of successive iterations of each model were evaluated against historical data from March 4th to July 19th, 2020.
Figure 1. Performance of models’ California COVID-19 death predictions
Pictured are the performances of predictions by epidemiological models evaluated against historical data for COVID-19 deaths in the state of California from June 16th to July 13th, 2020. Dates in parentheses to the right of models’ abbreviations in the figure legend correspond to the date of the model’s predictions in the year 2020.
Figure 2. RMSEs of models’ California COVID-19 death predictions
Pictured are the RMSEs (root mean square error) of epidemiological models’ predictions of COVID-19 deaths from May 19th to July 19th, 2020. RMSE compares a predicted value and a known value, with smaller RMSE values indicating closeness of predicted and observed values. The x axis aggregates the five back-testing periods of evaluation of the models in addition to the overall average RMSE of each model for COVID-19 deaths.