Ahmet Erdemir, Lealem Mulugeta, Joy P Ku, Andrew Drach, Marc Horner, Tina M Morrison, Grace C Y Peng, Rajanikanth Vadigepalli, William W Lytton, Jerry G Myers.
Abstract
The complexities of modern biomedicine are rapidly increasing, and modeling and simulation have therefore become increasingly important as a strategy to understand and predict the trajectory of pathophysiology, disease genesis, and disease spread in support of clinical and policy decisions. In such cases, inappropriate or ill-placed trust in modeling and simulation outcomes may lead to adverse consequences, underscoring the need to formalize how modeling and simulation practices are executed and communicated. Although verification and validation are generally accepted as significant components of a model's credibility, they cannot be assumed to equate to holistic credible practice, which also includes activities that affect the comprehension and in-depth examination inherent in the development and reuse of models. For the past several years, the Committee on Credible Practice of Modeling and Simulation in Healthcare, an interdisciplinary group seeded from a U.S. interagency initiative, has worked to codify best practices. Here, we provide Ten Rules for credible practice of modeling and simulation in healthcare, developed from a comparative analysis by the Committee's multidisciplinary membership followed by a large stakeholder community survey. These rules establish a unified conceptual framework for modeling and simulation design, implementation, evaluation, dissemination, and usage across the modeling and simulation life cycle. While the biomedical science and clinical care domains have somewhat different requirements and expectations for credible practice, our study converged on rules that are useful across a broad swath of model types. In brief, the rules are: (1) Define context clearly. (2) Use contextually appropriate data. (3) Evaluate within context. (4) List limitations explicitly. (5) Use version control. (6) Document appropriately. (7) Disseminate broadly. (8) Get independent reviews. (9) Test competing implementations. (10) Conform to standards.
Although some of these are common-sense guidelines, we have found that many are often missed or misconstrued, even by seasoned practitioners. Computational models are already widely used in basic science to generate new biomedical knowledge. As they penetrate clinical care and healthcare policy, contributing to personalized and precision medicine, clinical safety will require established guidelines for the credible practice of modeling and simulation in healthcare.
Keywords: Computational modeling; Computer modeling; Credibility; Healthcare; Reliability; Reproducibility; Simulation; Validation; Verification
Year: 2020 PMID: 32993675 PMCID: PMC7526418 DOI: 10.1186/s12967-020-02540-4
Source DB: PubMed Journal: J Transl Med ISSN: 1479-5876 Impact factor: 5.531
Fig. 1 The research community events leading to the formation of the Committee on Credible Practice of Modeling and Simulation in Healthcare. The mission of the Interagency Modeling and Analysis Group and the Multiscale Modeling Consortium [9] is to share novel methodologies for crossing spatial and temporal scales in biomedical, biological, and behavioral systems by promoting model reproducibility and reuse [26]. To achieve this goal, the end user must first be convinced to use each model by evaluating the transparent credible-practice rules for modeling and simulation carried out by each modeler.
The initial 26 proposed rules of good practice surveyed within the Committee:

- Use version control
- Use credible solvers
- Explicitly list your limitations
- Report appropriately
- Document your code
- Provide examples of use
- Practice what you preach
- Develop with the end user in mind
- Attempt validation within context
- Follow discipline-specific guidelines
- Attempt verification within context
- Attempt uncertainty (error) estimation
- Make sure your results are reproducible
- Define your evaluation metrics in advance
- Conform to discipline-specific standards
- Be a discipline-independent/specific example
- Learn from discipline-independent examples
- Use appropriate data (input, validation, verification)
- Define the context the model is intended to be used for
- Perform an appropriate level of sensitivity analysis within the context of use
- Use consistent terminology or define your terminology
- Get it reviewed by independent users/developers/members
- Provide user instructions whenever possible and applicable
- Use traceable data (data that can be traced back to its origin)
- Disseminate whenever possible (source code, test suite, data, etc.)
- Use competition of multiple implementations to check and balance each other
The Committee’s Ten Rules of credible practice of modeling and simulation in healthcare
| # | Rule | Description |
|---|---|---|
| 1. | Define context clearly | Develop and document the subject, purpose, and intended use(s) of the model or simulation |
| 2. | Use contextually appropriate data | Employ relevant and traceable information in the development or operation of a model or simulation |
| 3. | Evaluate within context | Perform verification, validation, uncertainty quantification, and sensitivity analysis of the model or simulation with respect to the reality of interest and intended use(s) of the model or simulation |
| 4. | List limitations explicitly | Provide restrictions, constraints, or qualifications for or on the use of the model or simulation for consideration by the users or customers of a model or simulation |
| 5. | Use version control | Implement a system to trace the time history of modeling and simulation activities, including delineation of each contributor's efforts |
| 6. | Document appropriately | Maintain up-to-date informative records of all modeling and simulation activities, including simulation code, model mark-up, scope and intended use of modeling and simulation activities, as well as users’ and developers’ guides |
| 7. | Disseminate broadly | Share all components of modeling and simulation activities, including simulation software, models, simulation scenarios and results |
| 8. | Get independent reviews | Have the modeling and simulation activity reviewed by nonpartisan third-party users and developers |
| 9. | Test competing implementations | Use contrasting modeling and simulation implementation strategies to check the conclusions of different strategies against each other |
| 10. | Conform to standards | Adopt and promote generally applicable and discipline-specific operating procedures, guidelines, and regulations accepted as best practices |
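Rule 3 ties verification, validation, uncertainty quantification, and sensitivity analysis to the stated context of use rather than treating them as absolute pass/fail checks. As a minimal sketch of that idea (the toy model, reference data, and the 5% tolerance below are hypothetical illustrations, not from the paper), a validation gate can accept a model only when its error stays within a bound defined by the context of use, alongside a one-at-a-time sensitivity check:

```python
# Hypothetical sketch of context-bound evaluation (Rule 3).
# Model, data, and tolerance are illustrative placeholders.

def simulate(dose, clearance=0.5):
    """Toy steady-state model: concentration = dose / clearance."""
    return dose / clearance

def validate(predictions, observations, rel_tol):
    """Accept the model only if every relative error is within the
    tolerance defined by the context of use."""
    errors = [abs(p - o) / abs(o) for p, o in zip(predictions, observations)]
    return max(errors) <= rel_tol, errors

def sensitivity(dose, clearance=0.5, delta=0.01):
    """One-at-a-time normalized sensitivity of output to clearance."""
    base = simulate(dose, clearance)
    perturbed = simulate(dose, clearance * (1 + delta))
    return (perturbed - base) / (base * delta)

doses = [10.0, 20.0, 40.0]
observed = [19.5, 41.0, 79.0]          # hypothetical reference measurements
predicted = [simulate(d) for d in doses]

# A 5% relative-error bound standing in for a context-of-use requirement.
ok, errs = validate(predicted, observed, rel_tol=0.05)
print("credible within context:", ok)
print("normalized sensitivity:", sensitivity(10.0))
```

In real practice the acceptance metric, tolerance, and perturbation scheme would come from the documented context of use (Rule 1) and discipline-specific standards (Rule 10); the point of the sketch is only that the gate is parameterized by context, not fixed.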
Fig. 2 Process for maintaining and evolving the Ten Rules for credible practice of modeling and simulation in healthcare at the time of the development of this manuscript. The Committee utilizes an iterative process to ensure the Ten Rules and their supporting materials remain relevant and useful. Government agencies have incorporated the Ten Rules into their funding solicitations to guide applicants on how to develop a credible practice plan [30–34]. Informal mechanisms (gray arrows), such as discussions with the funded investigators and program directors of these solicitations, provide invaluable feedback to incorporate into the Committee's guidelines. Within the Interagency Modeling and Analysis Group, funded investigators also submit semi-annual reports, which include updates on how their projects fulfill the Ten Rules (now available as an online form that can be continuously updated on the Interagency Modeling and Analysis Group wiki site [9]). Through this formal process (blue arrows), the Committee receives additional feedback for improving the Ten Rules and guidelines.
Fig. 3 Relation between Model and Simulation Domain of Use, Use Capacity, and Strength of Influence. A model and simulation developed for a specific Domain of Use will typically have the greatest Strength of Influence within a commensurate range of Use Capacity. It may, however, be able to provide inference data for other Use Capacity areas. For example, a modeling and simulation framework specifically intended for translational research (blue line) in pharmaceuticals is likely to have the highest Strength of Influence in therapeutics development (e.g., new drug development). Similarly, a highly vetted epidemiological modeling and simulation effort analyzing the long-term effect(s) of an FDA-approved vaccine on public health (red line) is likely to be most credible for informing healthcare policy and preventative therapeutics implementation. The Strength of Influence of these examples would likely differ should the Use Capacity involve applications related to regulatory approval, therapeutics development, or hypothesis testing.