Simon Pollett1, Michael Johansson2, Matthew Biggerstaff3, Lindsay C Morton4, Sara L Bazaco5, David M Brett-Major6, Anna M Stewart-Ibarra7, Julie A Pavlin8, Suzanne Mate9, Rachel Sippy10, Laurie J Hartman11, Nicholas G Reich12, Irina Maljkovic Berry13, Jean-Paul Chretien14, Benjamin M Althouse15, Diane Myer16, Cecile Viboud17, Caitlin Rivers18. 1. Viral Diseases Branch, Walter Reed Army Institute of Research, MD, USA. Electronic address: simon.d.pollett.ctr@mail.mil. 2. Division of Vector-Borne Diseases, Centers for Disease Control and Prevention, San Juan, Puerto Rico, USA. 3. Influenza Division, Centers for Disease Control and Prevention, GA, USA. 4. Global Emerging Infections Surveillance, Armed Forces Health Surveillance Division, Silver Spring, MD, USA; Cherokee Nation Strategic Programs, Tulsa, OK, USA; Milken Institute School of Public Health, The George Washington University, Washington, DC, USA. 5. Global Emerging Infections Surveillance, Armed Forces Health Surveillance Division, Silver Spring, MD, USA; General Dynamics Information Technology, Falls Church, VA, USA. 6. College of Public Health, University of Nebraska Medical Center, Omaha, NE, USA. 7. Institute for Global Health and Translational Science, State University of New York Upstate Medical University, Syracuse, NY, USA; InterAmerican Institute for Global Change Research (IAI), Montevideo, Department of Montevideo, Uruguay. 8. National Academies of Sciences, Engineering, and Medicine, DC, USA. 9. Emerging Infectious Diseases Branch, Walter Reed Army Institute of Research, MD, USA. 10. Institute for Global Health and Translational Science, State University of New York Upstate Medical University, Syracuse, NY, USA. 11. Global Emerging Infections Surveillance, Armed Forces Health Surveillance Division, Silver Spring, MD, USA; Cherokee Nation Strategic Programs, Tulsa, OK, USA. 12. University of Massachusetts at Amherst, MA, USA. 13. Viral Diseases Branch, Walter Reed Army Institute of Research, MD, USA. 14. Department of Defense, MD, USA. 15. University of Washington, WA, USA; Institute for Disease Modeling, Bellevue, WA, USA; New Mexico State University, Las Cruces, NM, USA. 16. Johns Hopkins Center for Health Security, MD, USA. 17. Fogarty International Center, National Institutes of Health, MD, USA. 18. Johns Hopkins Center for Health Security, MD, USA. Electronic address: crivers6@jhu.edu.
Abstract
INTRODUCTION: High-quality epidemic forecasting and prediction are critical to support responses to local, regional, and global infectious disease threats. Other fields of biomedical research use consensus reporting guidelines to ensure standardization and quality of research practice among researchers, and to provide a framework for end-users to interpret the validity of study results. The purpose of this study was to determine whether guidelines exist specifically for epidemic forecasting and prediction publications. METHODS: We undertook a formal systematic review to identify and evaluate any published infectious disease epidemic forecasting and prediction reporting guidelines. This review leveraged a team of 18 investigators from U.S. Government and academic sectors. RESULTS: A literature database search through May 26, 2019, identified 1467 publications (MEDLINE n = 584, EMBASE n = 883), and a grey-literature review identified a further 407 publications, yielding a total of 1777 unique publications. A paired-reviewer system screened in 25 potentially eligible publications, of which two were ultimately deemed eligible. A qualitative review of these two published reporting guidelines indicated that neither was specific to epidemic forecasting and prediction, although both described reporting items that may be relevant to epidemic forecasting and prediction studies. CONCLUSIONS: This systematic review confirms that no specific guidelines have been published to standardize the reporting of epidemic forecasting and prediction studies. These findings underscore the need to develop such reporting guidelines to improve the transparency, quality, and implementation of epidemic forecasting and prediction research in operational public health.
Epidemic forecasting and prediction constitute a critical biomedical research enterprise with major public and global health relevance (Rivers et al., 2019; Polonsky et al., 2019). Forecasting and prediction of epidemiological phenomena have offered critical insights into recent outbreaks, including those caused by Ebola virus, Zika virus, chikungunya virus, and pandemic influenza viruses (Worden et al., 2019; Perkins et al., 2016; Del Valle et al., 2018; Kobres et al., 2019; Keegan et al., 2017; Nsoesie et al., 2014). During recent outbreaks of Ebola, for instance, modeling research has predicted short- and longer-term case count trajectories (Worden et al., 2019), estimated the impact of violence on outbreak growth and control (Wannier et al., 2019), predicted the effectiveness of non-pharmaceutical and vaccine countermeasures (Merler et al., 2016), and quantified the risk of international spread (Gomes et al., 2014).

While the terms ‘forecasting’ and ‘prediction’ are often conflated and heterogeneously defined, forecasting research typically offers quantitative statements about an event, outcome, or trend that has not yet been observed, conditional on data that have been observed. In the context of infectious disease epidemics, this often refers to short- to mid-term projections of disease incidence and related targets, such as the timing of peak incidence. Such forecasts can predict epidemic growth, spatial spread, peak and total case burden, mortality, and morbidity in ways relevant to resource management (Rivers et al., 2019; Kobres et al., 2019). The term ‘prediction’ is used more broadly and loosely in epidemiological research, and may refer to models that examine the mechanistic drivers of epidemiological characteristics, such as human mobility, population immunity, contact patterns, public health interventions, and climatic factors (Perkins et al., 2016; Ewing et al., 2017), as well as studies that estimate epidemiological characteristics with inherent forecasting value, such as R0 (Kobres et al., 2019). Forecasting research often uses data from these and other covariates. Epidemic forecasting and prediction are not limited to pandemics; these approaches also enhance routine preparedness for seasonal communicable diseases such as non-pandemic influenza and dengue viruses (Reich et al., 2019; Spreco et al., 2018; Debellut et al., 2018; Lauer et al., 2018; Lowe et al., 2018, 2017). In this manuscript, we refer to this collective body of research as “epidemic forecasting and prediction research”.

Many fields of biomedical research use consensus reporting guidelines to promote standardization and improve the quality of research practice. These reporting guidelines also provide a framework for end-users to interpret the validity of such research approaches and findings. The Enhancing the Quality and Transparency of Health Research (EQUATOR) network refers to biomedical research reporting guidelines as “simple, structured tools for health researchers to use while writing manuscripts” (Anon, 2019a). Rather than providing guidance on how to perform research, they enumerate what should be reported in publications to “ensure a manuscript can be, for example, understood by a reader or replicated by a researcher” (Anon, 2019b).
In the case of epidemic forecasting, such readers may include those in operational public health (such as government health officials), epidemic model developers who may seek to reproduce or leverage the modeling methods presented in other studies, or the mainstream media (reporting to the general public on an epidemic). The EQUATOR consortium further defines a reporting guideline as “a checklist, flow diagram, or structured text to guide authors in reporting a specific type of research, developed using explicit methodology” (Anon, 2019b). The latter emphasis on explicit methodology requires a structured, reproducible consensus process that is specified a priori (Moher et al., 2010). Prominent reporting guidelines include the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement (Anon, 2019c), the Consolidated Standards of Reporting Trials (CONSORT) statement (Anon, 2019d), the Standards for Reporting of Diagnostic Accuracy Studies (STARD) guidelines (Cohen et al., 2016), and the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) guidelines (Anon, 2019e). Such reporting guidelines have been shown to improve the quality and clarity of academic publications; for example, reporting of clinical trials improved after the introduction of the CONSORT guidelines (Hopewell et al., 2012).

To our knowledge, no current reporting guideline exists for epidemic forecasting and prediction research. Development of a comprehensive reporting guideline for this field may ultimately improve: (i) the consistency of reporting, (ii) the reproducibility of results, (iii) the quality of practice, and (iv) the transparency of research. Underscoring the need for such standardization, a recent evaluation of Zika epidemic forecasting and prediction studies found substantial heterogeneity in the reporting of study methods, uncertainties (assessed through reporting of uncertainty intervals), data, and other critical information (e.g., clear and accurate display of model output) needed to fully understand and replicate the work (Kobres et al., 2019).

An essential first step in developing epidemic forecasting reporting guidelines is a systematic review, in keeping with best practice for health research reporting guideline development (Moher et al., 2010; Anon, 2019c). We therefore undertook a systematic review to (i) identify all published epidemic forecasting or prediction reporting guidelines, and (ii) qualitatively evaluate the strengths, limitations, and suitability of any guidelines found for the field of epidemic forecasting and prediction research.
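To make the notion of a short-term epidemic forecast concrete, the sketch below fits a simple log-linear growth model to hypothetical weekly case counts and projects four weeks ahead with an approximate uncertainty interval. This is purely illustrative and not drawn from any study cited above; the case counts are invented, and operational forecasts typically use far richer mechanistic or ensemble models.

```python
# Minimal sketch of a short-term incidence forecast: fit log-linear growth
# to recent observed case counts, then project forward with a parametric
# uncertainty interval. Illustrative only.
import numpy as np

observed = np.array([12, 18, 25, 40, 61, 90])  # hypothetical weekly case counts
weeks = np.arange(len(observed))

# Fit log(cases) = a + b * week by ordinary least squares.
b, a = np.polyfit(weeks, np.log(observed), 1)
resid_sd = np.std(np.log(observed) - (a + b * weeks))

horizon = np.arange(len(observed), len(observed) + 4)  # 4-week-ahead targets
point = np.exp(a + b * horizon)                        # point forecasts
lower = np.exp(a + b * horizon - 1.96 * resid_sd)      # approximate 95% interval
upper = np.exp(a + b * horizon + 1.96 * resid_sd)

for w, p, lo, hi in zip(horizon, point, lower, upper):
    print(f"week {w}: ~{p:.0f} cases (95% interval {lo:.0f}-{hi:.0f})")
```

Reporting guidelines would govern how such a forecast is documented (data sources, uncertainty, validation), not how it is built.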
Materials and methods
We followed the PRISMA statement in conducting this systematic review (Table S1) (Anon, 2019c). An a priori systematic review protocol was developed and agreed upon by the entire review team (n = 18), comprising members from U.S. Government and academia, before the review commenced. Our protocol was not registered in the International Prospective Register of Systematic Reviews (PROSPERO), but is publicly available at https://github.com/cmrivers?tab=repositories (Anon, 2019g).
Search strategy
MEDLINE and EMBASE electronic databases were searched through May 26, 2019, using the following search ontology: “[epidemic OR outbreak OR influenza OR Ebola OR Zika OR SARS OR Chikungunya OR MERS OR pathogen OR pandemic OR virus OR viruses] AND [forecasting OR prediction OR modeling OR modelling] AND guidelines”. This search was not restricted to the title or abstract. The pathogen-specific terms (e.g., “Zika”, “virus”) were included to capture recent major outbreaks and epidemics, but, joined by Boolean [OR] operators, they did not restrict the search to only these pathogens or pathogen categories.

To identify the relevant grey literature: (i) leading experts in the field of epidemic modeling and forecasting were contacted through an epidemic model implementation working group, and (ii) the EQUATOR website was reviewed for any existing epidemic forecasting guidelines (Anon, 2019a).
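As an illustration only, a comparable Boolean query can be assembled and run against MEDLINE via PubMed’s E-utilities using Biopython. This is not the authors’ code; the review used the MEDLINE and EMBASE database interfaces directly, and the contact address and retmax value below are placeholders.

```python
# Hedged sketch: running a comparable Boolean query against MEDLINE via
# PubMed's E-utilities (Biopython). Illustrative, not the review's tooling.
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI asks for a contact address

pathogen_terms = ["epidemic", "outbreak", "influenza", "Ebola", "Zika", "SARS",
                  "Chikungunya", "MERS", "pathogen", "pandemic", "virus", "viruses"]
method_terms = ["forecasting", "prediction", "modeling", "modelling"]

query = (f"({' OR '.join(pathogen_terms)}) "
         f"AND ({' OR '.join(method_terms)}) AND guidelines")

# Restrict by publication date to mirror the review's search cut-off.
handle = Entrez.esearch(db="pubmed", term=query, retmax=100,
                        datetype="pdat", mindate="1900/01/01",
                        maxdate="2019/05/26")
result = Entrez.read(handle)
handle.close()

print(f"{result['Count']} records matched; first PMIDs: {result['IdList'][:5]}")
```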
Eligibility criteria
Our inclusion criterion consisted of any publication which proposes a set of reporting guidelines for epidemic forecasting or prediction research. Our exclusion criteria were as follows:

- Non-communicable disease modeling reporting guidelines
- Publications which proposed how to perform epidemic modeling studies (rather than how to report them)
- Modeling reporting guidelines which were not specific to epidemic forecasting and prediction
- Narrative review articles
- Perspective pieces
- Editorials
- Duplicate studies
- Descriptive or analytic epidemiological publications
- Clinical management or diagnostic guidelines
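For illustration, these exclusion criteria could be encoded as reason codes to document why each screened article was excluded; the codes and the example article identifier below are hypothetical, not part of the review protocol.

```python
# Hypothetical encoding of the exclusion criteria above as reason codes,
# as might be used to document screening decisions.
EXCLUSION_REASONS = {
    "E1": "Non-communicable disease modeling reporting guideline",
    "E2": "Guidance on performing (not reporting) epidemic modeling",
    "E3": "Modeling reporting guideline not specific to epidemic forecasting/prediction",
    "E4": "Narrative review article",
    "E5": "Perspective piece",
    "E6": "Editorial",
    "E7": "Duplicate study",
    "E8": "Descriptive or analytic epidemiological publication",
    "E9": "Clinical management or diagnostic guideline",
}

def log_exclusion(article_id: str, code: str) -> str:
    """Format a single exclusion record for a screening log."""
    return f"{article_id}\texcluded\t{code}: {EXCLUSION_REASONS[code]}"

print(log_exclusion("PMID:0000000", "E4"))  # hypothetical article identifier
```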
Study screening and eligibility determination
Literature review results were divided and assigned to 10 unique reviewer pairs across 18 investigators from U.S. Government and academia. Both reviewers in each pair independently screened titles and abstracts for potential eligibility using citation manager software. If their screened-in shortlists differed, the two reviewers sought consensus, and a third-party adjudicator decided when a pair was unable to reach consensus on screening in an article. For articles that advanced to the second-round review (i.e., the screened-in studies), the reviewer pairs repeated the screening and consensus process with the full text of each article to determine eligibility. Reasons for excluding each article were documented. A third independent reviewer adjudicated when reviewers were unable to reach consensus on the final eligibility of any study.
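The escalation logic described above (independent agreement, then consensus discussion, then third-party adjudication) can be summarized as follows. This sketch is a hypothetical illustration of the decision rule, not the tooling the review team actually used.

```python
# Illustrative sketch of the paired-reviewer decision rule:
# agreement -> accept; disagreement -> consensus discussion; unresolved -> adjudicator.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ScreeningResult:
    reviewer_a: bool                    # True = screen in
    reviewer_b: bool
    consensus: Optional[bool] = None    # outcome of discussion, if held
    adjudicator: Optional[bool] = None  # third-party call, if needed

def final_decision(r: ScreeningResult) -> bool:
    if r.reviewer_a == r.reviewer_b:
        return r.reviewer_a             # independent agreement
    if r.consensus is not None:
        return r.consensus              # resolved by discussion
    if r.adjudicator is not None:
        return r.adjudicator            # escalated to third reviewer
    raise ValueError("Disagreement not yet resolved")

# Example: reviewers disagreed, discussion resolved to screen in.
print(final_decision(ScreeningResult(True, False, consensus=True)))  # True
```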
Data collection process and data items
Eligible articles were qualitatively described by the reviewer pair in conjunction with a third reviewer.
Results
Our literature database search identified 1467 publications (MEDLINE n = 584, EMBASE n = 883). A search of the EQUATOR website and discussions with experts identified a further 405 and 2 publications, respectively (Fig. 1). After deduplication of these 1874 records, 1777 unique publications remained, of which 25 were screened in through the first-round review of title and abstract for further consideration (Fig. 1). Two of those publications were ultimately deemed eligible through the paired-reviewer consensus process and full-text review (Eddy et al., 2012; Field et al., 2014). Further qualitative review found that both eligible publications (Eddy et al., 2012; Field et al., 2014) were limited in their guideline specificity and applicability to epidemic forecasting and prediction reporting.
Fig. 1.
PRISMA flow chart.
The first eligible publication, by Eddy et al. (2012), presents a set of recommendations for transparency in medical decision-making models. The major rationale offered by the authors is that “trust and confidence are critical to the success of health care models” and can be achieved through transparency and validation. The authors derived the recommendations through an iterative, structured process in which individuals (model users and model developers) voted on draft recommendations, which were then made available for further comment. However, these recommendations stop short of formal guidelines: they were designed to apply to the broad category of medical decision-making models, were not specifically tailored to epidemic forecasting, and do not encompass all of the uses relevant to epidemics (such as guidance on reporting prospective forecasts and their method of validation, or documenting the source of epidemic case count data). Further, the recommendations cover both model reporting and the conduct of model validation.

Nevertheless, several aspects of these recommendations were found to be potentially relevant to epidemic forecasting and prediction research. The authors call for modeling results to be transparent, with sufficient non-technical documentation to be accessible to any interested reader. Items suggested for the non-technical section included: the purpose of the model, the model data sources, study funding sources, a graphical representation of the model components, model inputs, model outputs, effects of uncertainty, the potential applications of the model, and the limitations of its intended applications (Eddy et al., 2012).

Eddy et al. also call for extended technical model documentation to allow full replication by others with sufficient modeling expertise. Yet they highlight the challenges of providing such technical documentation in full, including intellectual property concerns, dynamic changes to model components over time, and the need for appropriate expertise to interpret technical documentation (particularly model code) (Eddy et al., 2012). The authors suggest work-around solutions such as running code and providing model output to others upon request, and/or making full technical model documentation available upon private request rather than placing complete code in the public domain (Eddy et al., 2012).

The second set of candidate guidelines, by Field et al. (2014), is an extension of the STROBE guidelines for molecular epidemiology that seeks to “improve the reporting of studies and, in turn, to assist interpretation of the data and increase understanding of what was actually done by researchers”. These guidelines, named ‘STROBE for Molecular Epidemiology’ (STROME), are not explicitly aimed at epidemic forecasting and prediction research. Rather, they apply to a wide range of molecular epidemiology studies, from descriptive clonal typing to advanced phylodynamics (Field et al., 2014). Nevertheless, these recommendations also offered several reporting items of potential relevance to epidemic forecasting.
These relevant items include (i) explicit description of case definitions, (ii) documentation of sampling methods, (iii) documentation of the study time frame, (iv) description of data sources and related laboratory diagnostic methods, (v) description of missing data, (vi) documentation of relevant ethics approvals, (vii) evaluation of the consistency of findings between different lines of evidence, (viii) description of the study objectives, and (ix) acknowledgement of case ascertainment bias and non-independence, if present. The development of the STROME guidelines also followed a structured, iterative process: consecutive versions of the guidelines were circulated to reach consensus on content, incorporating a range of stakeholders and complementary expertise from multiple countries, fields, and sectors (Field et al., 2014).

In addition to the review of these two eligible articles, the screening process of our systematic review noted several publications that sought to standardize good practices for the conduct of biomedical modeling (more broadly than forecasting and prediction). In 2011, the International Society for Pharmacoeconomics and Outcomes Research and the Society for Medical Decision Making (ISPOR-SMDM) Modeling Good Research Practices Task Force generated a set of “optimal practices that all models should strive toward” (Caro et al., 2012). This consortium derived a series of modeling good-practice recommendations covering model conceptualization, event simulation, dynamic transmission modeling, model parameter estimation, and uncertainty analysis (Caro et al., 2012). These were not eligible in our review because they pertain to modeling practice rather than model reporting; however, they may be of general interest to the epidemic modeling community and we cite them here (Caro et al., 2012; Briggs et al., 2012; Pitman et al., 2012). Similar modeling practice guidelines for health technology assessment and disaster response modeling were also noted (Brandeau et al., 2009; Dahabreh et al., 2008).
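To illustrate how the reporting items distilled from these two eligible guidelines might be operationalized, the sketch below encodes them as a simple machine-checkable checklist. The item names are our own paraphrases of the items listed above, not an official schema from either publication.

```python
# Hypothetical sketch: the reporting items gleaned from Eddy et al. (2012)
# and Field et al. (2014), captured as a checklist that can be audited.
REPORTING_ITEMS = {
    "eddy_2012": [
        "model purpose", "data sources", "funding sources",
        "graphical model representation", "model inputs", "model outputs",
        "effects of uncertainty", "intended applications and limitations",
    ],
    "field_2014_strome": [
        "case definitions", "sampling methods", "study time frame",
        "data sources and laboratory methods", "missing data",
        "ethics approvals", "consistency across lines of evidence",
        "study objectives", "ascertainment bias and non-independence",
    ],
}

def audit(manuscript_sections: set) -> list:
    """Return reporting items not yet addressed in a manuscript draft."""
    required = {item for items in REPORTING_ITEMS.values() for item in items}
    return sorted(required - manuscript_sections)

# Example: a draft that so far covers only three items.
print(audit({"model purpose", "data sources", "case definitions"}))
```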
Conclusions
While our systematic review identified two eligible manuscripts that highlighted important features of modeling studies by consensus (Eddy et al., 2012; Field et al., 2014), neither publication ultimately described reporting guidelines specific to epidemic forecasting and prediction. This systematic review therefore confirms that no published recommendations exist to standardize the reporting of epidemic forecasting and prediction studies. This is in contrast to multiple other biomedical research fields, which have clear standards for study reporting, many of them endorsed and enforced by biomedical journals (Anon, 2019a, h).

One potential limitation of this systematic review is that, while it included two databases (MEDLINE and EMBASE), it did not include others such as Web of Science or SciELO. Including SciELO in particular might have mitigated a related limitation, namely that our searches used English search terms only; however, we did not restrict the MEDLINE and EMBASE searches by language, and MEDLINE indexes the English-translated abstracts that often accompany non-English articles (Anon, 2020). A final limitation is that we did not explicitly search for reporting guidelines from non-medical forecasting fields. Such guidelines may exist in weather forecasting, for example, and infectious disease forecasters have often looked to that field for lessons on how best to implement their research (Viboud and Vespignani, 2019).

To redress the lack of appropriate reporting standards for epidemic forecasting and prediction (Kobres et al., 2019), we have now launched the Epidemic Forecasting and Reporting Guidelines (EPIFORGE) initiative (Anon, 2019f). The EPIFORGE initiative aims to develop guidance on how to report epidemic forecasting and prediction studies, not on how to perform such studies (Anon, 2019f). Broadly, EPIFORGE aims to improve the consistency of epidemic forecasting reporting, and thereby forecasting reproducibility, quality, and transparency. The systematic review presented here was an important step in the EPIFORGE process, as it confirms the lack of a suitable existing guideline to meet the needs of this field (Moher et al., 2010). The results of this effort are expected in Spring 2020, and the EPIFORGE reporting checklist development process has so far examined items within the domains of reproducibility, transparency, validity, interpretability, funding, and sponsorship.

Further, this systematic review has provided valuable reference materials for the EPIFORGE guideline development process (Anon, 2019f). While not specific to epidemic forecasting, the model reporting recommendations by Eddy et al. and Field et al. have prompted consideration of case definitions, laboratory methods, code sharing, study time frames, missing data, model applications, funding sources, model structure, and bias in the evolving EPIFORGE guidelines (Eddy et al., 2012; Field et al., 2014). The guideline development principles used by Eddy et al. and Field et al. have also been adopted into the EPIFORGE methods, which use a structured, iterative consensus process (i.e., a three-round Delphi process) across a range of model developers and model users from multiple sectors and countries. These methods follow best-practice guidance for health research reporting guideline development (Anon, 2019b; Moher et al., 2010).
Such an approach is critical to maximizing the quality, acceptability, and eventual implementation of the final EPIFORGE recommendations, and to improving the quality, transparency, and reproducibility of forecasting and prediction practice (Eddy et al., 2012; Field et al., 2014).
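As a toy illustration of the three-round Delphi process mentioned above, the sketch below applies a simple hypothetical rule: a candidate checklist item is retained once panel agreement reaches a pre-set threshold, and is otherwise recirculated for up to three rounds. The threshold and round structure are assumptions for illustration, not the actual EPIFORGE protocol.

```python
# Hypothetical Delphi-style consensus rule for a single candidate item.
CONSENSUS_THRESHOLD = 0.7  # assumed agreement cut-off, for illustration only
MAX_ROUNDS = 3

def delphi(item: str, round_votes: list[list[bool]]) -> str:
    """Evaluate panel votes round by round until consensus or rounds run out."""
    for rnd, votes in enumerate(round_votes[:MAX_ROUNDS], start=1):
        agreement = sum(votes) / len(votes)
        if agreement >= CONSENSUS_THRESHOLD:
            return f"{item}: retained in round {rnd} ({agreement:.0%} agreement)"
    return f"{item}: no consensus after {min(len(round_votes), MAX_ROUNDS)} rounds"

# Example: 10 panelists; consensus reached in round 2.
print(delphi("report uncertainty intervals",
             [[True] * 6 + [False] * 4, [True] * 8 + [False] * 2]))
```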