| Literature DB >> 31422772 |
Saskia den Boon1, Mark Jit2,3,4, Marc Brisson5, Graham Medley2, Philippe Beutels6, Richard White2,7,8, Stefan Flasche7, T Déirdre Hollingsworth9, Tini Garske10, Virginia E Pitzer11, Martine Hoogendoorn12, Oliver Geffen10, Andrew Clark13, Jane Kim14, Raymond Hutubessy15.
Abstract
BACKGROUND: Despite the increasing popularity of multi-model comparison studies and their ability to inform policy recommendations, clear guidance on how to conduct multi-model comparisons is not available. Herein, we present guidelines to provide a structured approach to comparisons of multiple models of interventions against infectious diseases. The primary target audience for these guidelines are researchers carrying out model comparison studies and policy-makers using model comparison studies to inform policy decisions.
Keywords: Cost-effectiveness; Decision-making; Harmonisation; Impact modelling; Infectious diseases; Interventions; Mathematical modelling; Model comparisons; Policy
Year: 2019 PMID: 31422772 PMCID: PMC6699075 DOI: 10.1186/s12916-019-1403-9
Source DB: PubMed Journal: BMC Med ISSN: 1741-7015 Impact factor: 8.775
Principles and good practice statements for multi-model comparisons
| Principle | Good practice |
|---|---|
| 1. | • The policy question should be refined, operationalised and converted into a research question through an iterative process • Process and timelines should be defined in agreement with the policy question |
| 2. | • All models that can (be adapted to) answer the research question should be systematically identified, preferably through a combination of a systematic literature review and open call • Models should be selected using pre-specified inclusion and exclusion criteria, and models identified as potentially suitable but not included should be reported alongside their reason for non-participation • Models used and changes made as part of the comparison process should be well documented • If an internal or external validation was used to limit the model selection, it should be reported |
| 3. | • Developing a pre-specified protocol may be useful; if so, it could be published with the comparison results • Modellers should consider fitting models to a common setting or settings • Harmonisation of parameters governing the setting, disease, population and interventions should be considered, whilst avoiding changes to fundamental model structures that could lead to model convergence |
| 4. | • Multiple scenarios should be explored to understand the drivers of the model results • Sensitivity analysis and what-if analyses (examining extreme scenarios) should be carried out |
| 5. | • The results for the individual models should be presented, along with within-model uncertainty ranges • Summary measures that combine outcomes of models should only be used if all outcomes support the same policy; it should be clearly communicated whether summary ranges include within-model uncertainty or only between-model uncertainty (i.e. the range of point estimates across the models) |
| 6. | • Key results and their interpretation with respect to the policy questions should be discussed • Key strengths and limitations of the model comparison process and results should be addressed • Key recommendations for next steps should be reported |
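The distinction drawn in principle 5 between within-model uncertainty and between-model uncertainty can be illustrated with a minimal sketch. All model names and numbers below are hypothetical, not drawn from the paper; the sketch only shows how the two kinds of summary range differ.

```python
# Illustrative sketch (hypothetical values): contrasting within-model
# uncertainty intervals with the between-model range of point estimates
# for a shared outcome, e.g. deaths averted per 100,000.
point_estimates = {"Model A": 120.0, "Model B": 95.0, "Model C": 140.0}

# Hypothetical 95% uncertainty intervals reported by each model.
within_model_ci = {
    "Model A": (100.0, 145.0),
    "Model B": (80.0, 110.0),
    "Model C": (115.0, 170.0),
}

# Between-model uncertainty: the range of point estimates across the models.
between_model_range = (
    min(point_estimates.values()),
    max(point_estimates.values()),
)

# A summary range that also includes within-model uncertainty: the envelope
# of the individual models' uncertainty intervals.
combined_envelope = (
    min(lo for lo, _ in within_model_ci.values()),
    max(hi for _, hi in within_model_ci.values()),
)

print("Between-model range of point estimates:", between_model_range)
print("Envelope including within-model uncertainty:", combined_envelope)
```

The two ranges differ (95–140 vs 80–170 here), which is why the guidelines ask authors to state explicitly which kind of uncertainty a reported summary range reflects.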
Fig. 1Flow diagram for model identification and inclusion
Fig. 2The multi-model harmonisation and comparison process