
Real-time COVID-19 forecasting: challenges and opportunities of model performance and translation.

Kristen Nixon1, Sonia Jindal1, Felix Parker1, Maximilian Marshall1, Nicholas G Reich2, Kimia Ghobadi1, Elizabeth C Lee3, Shaun Truelove3, Lauren Gardner4.   

Abstract

Year:  2022        PMID: 36150779      PMCID: PMC9499327          DOI: 10.1016/S2589-7500(22)00167-4

Source DB:  PubMed          Journal:  Lancet Digit Health        ISSN: 2589-7500


The COVID-19 pandemic brought mathematical modelling into the spotlight, as scientists rushed to use data to understand transmission patterns and disease severity, and to anticipate future epidemic outcomes. However, the use of COVID-19 modelling has been criticised, in part because of a few particularly erroneous projections at the start of the pandemic. More than 2 years into the pandemic, models continue to face serious obstacles as tools for informing outbreak response. Population-level health outcomes are difficult to predict accurately, especially cases and hospitalisations, as discussed in the International Institute of Forecasters blog. This Comment, drawn from our experiences with real-time prospective COVID-19 modelling, details these obstacles. We aim to highlight areas where further research and investment can improve the use of models for informing outbreak responses in the USA, with a summary of recommendations in the panel.

Panel: Summary of recommendations

Invest in infrastructure for data collection
- Prioritise collection of timely high temporal and spatial resolution data
- Standardise reporting of data across jurisdictions (eg, US states)
- Pursue high-quality data that captures risk-reduction behaviours
- Expand genomic surveillance

Prioritise translational work
- Adopt Pollett and colleagues' EPIFORGE guidelines to improve model transparency
- Document and share experiences on translational work and lessons learned
- Control public messaging around research
- Consider more interpretable targets and ways to express uncertainty
- Adopt incentive structures in academia to reward translational work

Build an information-sharing ecosystem that is better suited to the needs of outbreaks
- Strike a balance between speed and quality of publications
- Implement safeguards to prevent misuse of research by the public
- Create an organised, centralised home for epidemic research

Data quality is one of the most important drivers of model performance.
If data are inconsistent or do not reflect reality, models have no reliable ground truth from which to learn or be evaluated. Unfortunately, the public health infrastructure in the USA was not equipped to provide timely, high-quality data on COVID-19 health outcomes, and several disparate efforts were required to fill this need. However, inherent flaws remain in the COVID-19 data reporting system. For example, decision making on how to collect and share COVID-19 data fell to individual US states. Each US state has its own reporting idiosyncrasies (eg, defining what counts as a COVID-19 case or death, whether this definition includes probable cases or deaths, and how to define a probable case or death), limiting comparative analyses across locations. Additionally, artificial spikes or drops in the reported numbers of COVID-19 cases and deaths, which can result from backlogged testing results released from resource-constrained laboratories or batch death certificate reviews conducted by states, occur frequently and irregularly, and affect both the training and evaluation of models that rely on the data. Other COVID-19 data, such as vaccinations, testing, hospitalisations, and genomic surveillance, have their own quality issues, largely because of an inadequate data reporting infrastructure, absence of universal data standards, and sampling bias. In addition to data on health outcomes, many modellers have relied on human behavioural data for COVID-19 forecasting and scenario analysis, with the aim of predicting transmission patterns more accurately, in particular at points when dynamics are rapidly changing. However, it is difficult to collect real-time behavioural data because human behaviour is inherently hard to track.
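The backlog artefacts described above can be sketched concretely. The helper below is illustrative only (the function name, window, and threshold are assumptions, not part of this Comment): it flags days whose reported count far exceeds the local rolling median, the typical signature of a batch release of backlogged results.

```python
import statistics

def flag_reporting_anomalies(daily_counts, window=7, threshold=5.0):
    """Flag days whose reported count deviates sharply from the
    surrounding rolling median -- a crude proxy for the backlog dumps
    and batch certificate reviews that distort reported time series."""
    flags = []
    for i, count in enumerate(daily_counts):
        lo = max(0, i - window)
        neighbours = daily_counts[lo:i] + daily_counts[i + 1:i + 1 + window]
        med = statistics.median(neighbours) if neighbours else 0
        # A day is suspect if it exceeds `threshold` times the local median.
        flags.append(med > 0 and count > threshold * med)
    return flags

# Example: a backlog release on day 5 of otherwise stable reporting.
counts = [100, 105, 98, 110, 102, 900, 101, 97, 103, 99]
print([i for i, f in enumerate(flag_reporting_anomalies(counts)) if f])  # [5]
```

A real pipeline would need to handle drops (negative corrections) and scheduled weekend dips as well; the point is that such artefacts must be detected before models are trained or evaluated on the series.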
Some COVID-19 risk-reduction behaviours were captured through surveys administered on Facebook, which represents a substantial step forward in collecting open and timely behavioural data; however, these data still have sampling and self-reporting bias, and data collection ended on June 25, 2022. New variants have also played a considerable role in surges in the number of COVID-19 cases and deaths worldwide. To this end, increased genomic surveillance has the potential to inform and improve predictions. As of Dec 31, 2021, only 5% of cases in the USA had been sequenced, compared with more than 50% in other countries, including the UK, Iceland, and Australia. To give modellers the best chance of success, we need to invest in a data system that provides open, timely, and standardised data at a high spatial and temporal resolution.

Because of the uncertainty and fear surrounding this unprecedented outbreak, modelling results were sensationalised by the media and skewed to serve predetermined political purposes. Given that the misunderstanding of scientific findings can have serious consequences, modellers have a responsibility to facilitate appropriate interpretation of their work. Modellers must be explicit in stating how assumptions and limitations should shape interpretation, and conduct transparent reporting as outlined in Pollett and colleagues' EPIFORGE guidelines. Additionally, modellers should be trained to communicate directly with the media to better explain the science and to help manage the corresponding public health messaging. Models can also guide public health policy. To inform decision makers, the best approach is often direct collaboration with modellers. These mutually beneficial relationships allow modellers to better understand the needs of decision makers and help all stakeholders to better understand the details and limitations of epidemic and pandemic modelling.
In addition, documenting the process of sharing models with decision makers is crucial to advance knowledge of best practices for science translation. One aspect of modelling that could be redesigned for easier interpretation and use by various stakeholders and the public is the selection of prediction targets. These targets have predominantly been the numbers of incident cases and deaths, despite poor forecast performance for these data during crucial moments for decision making, as discussed in the Forecasters blog. Simpler and more interpretable targets that still convey useful information should be considered as alternatives. One example is a categorical target that predicts if any indicators (eg, cases, deaths, or hospitalisations) in a future period will be in a state of rapid growth, moderate growth, no change, moderate decline, or rapid decline. Predicting a broader range of targets, especially if some targets allow increased forecast accuracy and reliability, could enhance public trust in modelling and better meet the needs of stakeholders. Another crucial aspect of model translation is communicating the range of plausible outcomes instead of point predictions only. Modellers should clearly communicate uncertainty and translate statistical concepts into formats that are interpretable by stakeholders and the public. For example, the 50% and 95% CIs shown on the COVID-19 Forecast Hub often include both upward and downward trends. Without additional explanation, these confidence intervals can be difficult to interpret. One alternative might be for modellers to provide the percentage chance that the trend will be increasing, flat, or decreasing. Clearer communication of uncertainty can build trust in modelling and prevent misuse of models. An important barrier to successful translation of models is the current state of research dissemination. 
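A minimal sketch of what such a categorical target and probabilistic trend summary might look like, assuming a relative-change definition of the five categories named above (the thresholds, function names, and sample-based ensemble representation are illustrative assumptions, not part of this Comment):

```python
from collections import Counter

# Hypothetical thresholds on relative change; the text names the five
# categories but does not define their boundaries.
STRONG, WEAK = 0.25, 0.05

def trend_category(change, strong=STRONG, weak=WEAK):
    """Map a relative change in an indicator to one of five trend labels."""
    if change >= strong:
        return "rapid growth"
    if change >= weak:
        return "moderate growth"
    if change <= -strong:
        return "rapid decline"
    if change <= -weak:
        return "moderate decline"
    return "no change"

def trend_probabilities(current, sampled_futures):
    """Summarise an ensemble of sampled future values as the probability of
    each trend category -- eg, 'a 25% chance cases are in rapid growth'."""
    labels = Counter(trend_category((f - current) / current)
                     for f in sampled_futures)
    n = len(sampled_futures)
    return {cat: labels.get(cat, 0) / n
            for cat in ("rapid growth", "moderate growth", "no change",
                        "moderate decline", "rapid decline")}

# Four sampled trajectories of weekly cases from a current count of 1000:
print(trend_probabilities(1000, [1400, 1100, 1010, 700]))
# {'rapid growth': 0.25, 'moderate growth': 0.25, 'no change': 0.25,
#  'moderate decline': 0.0, 'rapid decline': 0.25}
```

The design choice is that a handful of category probabilities is easier for a non-specialist to act on than a 95% interval spanning both upward and downward trends, while still conveying the underlying uncertainty.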
Within 10 months of the first confirmed case, 125 000 COVID-19 scientific articles were shared, with 30 000 of them on preprint servers. Preprints excel at quickly sharing new research but do not have the quality assurance traditionally provided by peer review. Additionally, there is evidence that preprints can be misused in harmful ways to spread extremist ideologies and misleading medical information. Some of these harms might be mitigated by more transparent reporting on the limitations and proper interpretations of models. However, even within the scientific community, the sheer volume of information obstructs efficient synthesis of the literature to establish best practices. Efforts to address some of these problems exist, such as recruiting researchers to conduct rapid and publicly available reviews of papers. Nevertheless, these disparate efforts (including informal reviews on social media) still leave information scattered and difficult to synthesise. We need to strike a balance between publishing speed and quality, implement safeguards to prevent research from being misused, and develop a more organised, centralised way to vet and disseminate timely information.

Although COVID-19 forecasting and public health responses have been heavily dependent on partnerships with academic research teams, university-based modellers face considerable barriers when choosing to engage in crucial, but time-consuming, translational work—eg, building and maintaining models, and communicating modelling results. Extant incentive structures do not recognise these efforts, and instead reward traditional forms of academic achievement (eg, peer-reviewed publications and secured grant funding). The value of this type of translational work needs to be recognised and elevated to continue the academic community's engagement in real-time outbreak mitigation and maximise its impact.
Establishing prestigious awards for outstanding work of this kind and encouraging journals to focus on effective messaging during times of crisis could encourage more publications to focus on these essential efforts, and more universities to recognise and reward academics accordingly.

For more on the Translating data in a pandemic Series see www.thelancet.com/series/translating-data-in-a-pandemic

ECL received payment for expert testimony from Cohen Ziffer Frenchman & McKenna for a report related to COVID-19 epidemiology. KN, SJ, and LG submitted a model to the COVID-19 Forecast Hub. NGR is a coauthor of the EPIFORGE 2020 model reporting guidelines, is a codirector of the Forecast Hub, has submitted individual models to the Forecast Hub, and has served in an advisory role for the US Scenario Modeling Hub. ST is a cofounder and member of the leadership team for the US Scenario Modeling Hub and has submitted individual models to both the Scenario and Forecast Hubs. ECL has submitted models to both the Forecast and Scenario Hubs.

KN, SJ, MM, and LG were funded by National Science Foundation (NSF) Rapid Response Research grants (2108526 and 2028604) and the Centers for Disease Control and Prevention (CDC) SHEPheRD Project (200-2016-91781). NGR has been supported by the CDC (1U01IP001122) and the National Institute of General Medical Sciences (NIGMS; R35GM119582). The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIGMS or the National Institutes of Health. KG and FP were funded by the Society for Medical Decision Making Covid Modeling Accelerator and the CDC (U01CK000589). ST has been supported by the NSF (2127976) and the CDC SHEPheRD Project (200-2016-91781). The funders of the study had no role in the conceptualisation or writing of this Comment.
References (9 in total)

1.  The evolving role of preprints in the dissemination of COVID-19 research and their impact on the science communication landscape.

Authors:  Nicholas Fraser; Liam Brierley; Gautam Dey; Jessica K Polka; Máté Pálfy; Federico Nanni; Jonathon Alexis Coates
Journal:  PLoS Biol       Date:  2021-04-02       Impact factor: 8.029

2.  A need for open public data standards and sharing in light of COVID-19.

Authors:  Lauren Gardner; Jeremy Ratcliff; Ensheng Dong; Aaron Katz
Journal:  Lancet Infect Dis       Date:  2020-08-10       Impact factor: 25.071

3.  Evaluation of individual and ensemble probabilistic forecasts of COVID-19 mortality in the United States.

Authors:  Estee Y Cramer; Evan L Ray; Velma K Lopez; Johannes Bracher; Andrea Brennen; Alvaro J Castro Rivadeneira; Aaron Gerding; Tilmann Gneiting; Katie H House; Yuxin Huang; Dasuni Jayawardena; Abdul H Kanji; Ayush Khandelwal; Khoa Le; Anja Mühlemann; Jarad Niemi; Apurv Shah; Ariane Stark; Yijin Wang; Nutcha Wattanachit; Martha W Zorn; Youyang Gu; Sansiddh Jain; Nayana Bannur; Ayush Deva; Mihir Kulkarni; Srujana Merugu; Alpan Raval; Siddhant Shingi; Avtansh Tiwari; Jerome White; Neil F Abernethy; Spencer Woody; Maytal Dahan; Spencer Fox; Kelly Gaither; Michael Lachmann; Lauren Ancel Meyers; James G Scott; Mauricio Tec; Ajitesh Srivastava; Glover E George; Jeffrey C Cegan; Ian D Dettwiller; William P England; Matthew W Farthing; Robert H Hunter; Brandon Lafferty; Igor Linkov; Michael L Mayo; Matthew D Parno; Michael A Rowland; Benjamin D Trump; Yanli Zhang-James; Samuel Chen; Stephen V Faraone; Jonathan Hess; Christopher P Morley; Asif Salekin; Dongliang Wang; Sabrina M Corsetti; Thomas M Baer; Marisa C Eisenberg; Karl Falb; Yitao Huang; Emily T Martin; Ella McCauley; Robert L Myers; Tom Schwarz; Daniel Sheldon; Graham Casey Gibson; Rose Yu; Liyao Gao; Yian Ma; Dongxia Wu; Xifeng Yan; Xiaoyong Jin; Yu-Xiang Wang; YangQuan Chen; Lihong Guo; Yanting Zhao; Quanquan Gu; Jinghui Chen; Lingxiao Wang; Pan Xu; Weitong Zhang; Difan Zou; Hannah Biegel; Joceline Lega; Steve McConnell; V P Nagraj; Stephanie L Guertin; Christopher Hulme-Lowe; Stephen D Turner; Yunfeng Shi; Xuegang Ban; Robert Walraven; Qi-Jun Hong; Stanley Kong; Axel van de Walle; James A Turtle; Michal Ben-Nun; Steven Riley; Pete Riley; Ugur Koyluoglu; David DesRoches; Pedro Forli; Bruce Hamory; Christina Kyriakides; Helen Leis; John Milliken; Michael Moloney; James Morgan; Ninad Nirgudkar; Gokce Ozcan; Noah Piwonka; Matt Ravi; Chris Schrader; Elizabeth Shakhnovich; Daniel Siegel; Ryan Spatz; Chris Stiefeling; Barrie Wilkinson; Alexander Wong; Sean Cavany; Guido España; Sean Moore; Rachel Oidtman; Alex 
Perkins; David Kraus; Andrea Kraus; Zhifeng Gao; Jiang Bian; Wei Cao; Juan Lavista Ferres; Chaozhuo Li; Tie-Yan Liu; Xing Xie; Shun Zhang; Shun Zheng; Alessandro Vespignani; Matteo Chinazzi; Jessica T Davis; Kunpeng Mu; Ana Pastore Y Piontti; Xinyue Xiong; Andrew Zheng; Jackie Baek; Vivek Farias; Andreea Georgescu; Retsef Levi; Deeksha Sinha; Joshua Wilde; Georgia Perakis; Mohammed Amine Bennouna; David Nze-Ndong; Divya Singhvi; Ioannis Spantidakis; Leann Thayaparan; Asterios Tsiourvas; Arnab Sarker; Ali Jadbabaie; Devavrat Shah; Nicolas Della Penna; Leo A Celi; Saketh Sundar; Russ Wolfinger; Dave Osthus; Lauren Castro; Geoffrey Fairchild; Isaac Michaud; Dean Karlen; Matt Kinsey; Luke C Mullany; Kaitlin Rainwater-Lovett; Lauren Shin; Katharine Tallaksen; Shelby Wilson; Elizabeth C Lee; Juan Dent; Kyra H Grantz; Alison L Hill; Joshua Kaminsky; Kathryn Kaminsky; Lindsay T Keegan; Stephen A Lauer; Joseph C Lemaitre; Justin Lessler; Hannah R Meredith; Javier Perez-Saez; Sam Shah; Claire P Smith; Shaun A Truelove; Josh Wills; Maximilian Marshall; Lauren Gardner; Kristen Nixon; John C Burant; Lily Wang; Lei Gao; Zhiling Gu; Myungjin Kim; Xinyi Li; Guannan Wang; Yueying Wang; Shan Yu; Robert C Reiner; Ryan Barber; Emmanuela Gakidou; Simon I Hay; Steve Lim; Chris Murray; David Pigott; Heidi L Gurung; Prasith Baccam; Steven A Stage; Bradley T Suchoski; B Aditya Prakash; Bijaya Adhikari; Jiaming Cui; Alexander Rodríguez; Anika Tabassum; Jiajia Xie; Pinar Keskinocak; John Asplund; Arden Baxter; Buse Eylul Oruc; Nicoleta Serban; Sercan O Arik; Mike Dusenberry; Arkady Epshteyn; Elli Kanal; Long T Le; Chun-Liang Li; Tomas Pfister; Dario Sava; Rajarishi Sinha; Thomas Tsai; Nate Yoder; Jinsung Yoon; Leyou Zhang; Sam Abbott; Nikos I Bosse; Sebastian Funk; Joel Hellewell; Sophie R Meakin; Katharine Sherratt; Mingyuan Zhou; Rahi Kalantari; Teresa K Yamana; Sen Pei; Jeffrey Shaman; Michael L Li; Dimitris Bertsimas; Omar Skali Lami; Saksham Soni; Hamza Tazi Bouardi; Turgay Ayer; 
Madeline Adee; Jagpreet Chhatwal; Ozden O Dalgic; Mary A Ladd; Benjamin P Linas; Peter Mueller; Jade Xiao; Yuanjia Wang; Qinxia Wang; Shanghong Xie; Donglin Zeng; Alden Green; Jacob Bien; Logan Brooks; Addison J Hu; Maria Jahja; Daniel McDonald; Balasubramanian Narasimhan; Collin Politsch; Samyak Rajanala; Aaron Rumack; Noah Simon; Ryan J Tibshirani; Rob Tibshirani; Valerie Ventura; Larry Wasserman; Eamon B O'Dea; John M Drake; Robert Pagano; Quoc T Tran; Lam Si Tung Ho; Huong Huynh; Jo W Walker; Rachel B Slayton; Michael A Johansson; Matthew Biggerstaff; Nicholas G Reich
Journal:  Proc Natl Acad Sci U S A       Date:  2022-04-08       Impact factor: 12.779

4.  The Use and Misuse of Mathematical Modeling for Infectious Disease Policymaking: Lessons for the COVID-19 Pandemic.

Authors:  Lyndon P James; Joshua A Salomon; Caroline O Buckee; Nicolas A Menzies
Journal:  Med Decis Making       Date:  2021-02-03       Impact factor: 2.583

5. (Review) Coordinated Strategy for a Model-Based Decision Support Tool for Coronavirus Disease, Utah, USA.

Authors:  Hannah R Meredith; Emerson Arehart; Kyra H Grantz; Alexander Beams; Theresa Sheets; Richard Nelson; Yue Zhang; Russell G Vinik; Darryl Barfuss; Jacob C Pettit; Keegan McCaffrey; Angela C Dunn; Michael Good; Shannon Frattaroli; Matthew H Samore; Justin Lessler; Elizabeth C Lee; Lindsay T Keegan
Journal:  Emerg Infect Dis       Date:  2021-05       Impact factor: 6.883

6.  The US COVID-19 Trends and Impact Survey: Continuous real-time measurement of COVID-19 symptoms, risks, protective behaviors, testing, and vaccination.

Authors:  Joshua A Salomon; Alex Reinhart; Alyssa Bilinski; Eu Jing Chua; Wichada La Motte-Kerr; Minttu M Rönn; Marissa B Reitsma; Katherine A Morris; Sarah LaRocca; Tamer H Farag; Frauke Kreuter; Roni Rosenfeld; Ryan J Tibshirani
Journal:  Proc Natl Acad Sci U S A       Date:  2021-12-21       Impact factor: 12.779

7.  Recommended reporting items for epidemic forecasting and prediction research: The EPIFORGE 2020 guidelines.

Authors:  Simon Pollett; Michael A Johansson; Nicholas G Reich; David Brett-Major; Sara Y Del Valle; Srinivasan Venkatramanan; Rachel Lowe; Travis Porco; Irina Maljkovic Berry; Alina Deshpande; Moritz U G Kraemer; David L Blazes; Wirichada Pan-Ngum; Alessandro Vespigiani; Suzanne E Mate; Sheetal P Silal; Sasikiran Kandula; Rachel Sippy; Talia M Quandelacy; Jeffrey J Morgan; Jacob Ball; Lindsay C Morton; Benjamin M Althouse; Julie Pavlin; Wilbert van Panhuis; Steven Riley; Matthew Biggerstaff; Cecile Viboud; Oliver Brady; Caitlin Rivers
Journal:  PLoS Med       Date:  2021-10-19       Impact factor: 11.069

8.  Global landscape of SARS-CoV-2 genomic surveillance and data sharing.

Authors:  Zhiyuan Chen; Andrew S Azman; Xinhua Chen; Junyi Zou; Yuyang Tian; Ruijia Sun; Xiangyanyu Xu; Yani Wu; Wanying Lu; Shijia Ge; Zeyao Zhao; Juan Yang; Daniel T Leung; Daryl B Domman; Hongjie Yu
Journal:  Nat Genet       Date:  2022-03-28       Impact factor: 38.330

9.  Open science saves lives: lessons from the COVID-19 pandemic.

Authors:  Lonni Besançon; Nathan Peiffer-Smadja; Corentin Segalas; Haiting Jiang; Paola Masuzzo; Cooper Smout; Eric Billy; Maxime Deforet; Clémence Leyrat
Journal:  BMC Med Res Methodol       Date:  2021-06-05       Impact factor: 4.615

