
Explicit causal reasoning is needed to prevent prognostic models being victims of their own success.

Matthew Sperrin, David Jenkins, Glen P Martin, Niels Peek.

Abstract


Year:  2019        PMID: 31722385      PMCID: PMC6857504          DOI: 10.1093/jamia/ocz197

Source DB:  PubMed          Journal:  J Am Med Inform Assoc        ISSN: 1067-5027            Impact factor:   4.497


The recent perspective by Lenert et al provides an accessible and informative overview of the full life cycle of prognostic models, comprising development, deployment, maintenance, and surveillance. The perspective focuses particularly on the fundamental issue that deployment of a prognostic model into clinical practice will lead to changes in decision making or interventions, and hence, changes in clinical outcomes. This has received little attention in the prognostic modeling literature but is important because it changes predictor-outcome associations, meaning that the performance of the model degrades over time; therefore, prognostic models become “victims of their own success.” More seriously, a prediction from such a model is challenging to interpret, as it implicitly reflects both the risk factors and the interventions that similar patients received in the historical data used to develop the prognostic model. The authors rightly point out that “holistically modeling the outcome and intervention(s)” and “incorporat[ing] the intervention space” are required to overcome this concern. However, the proposed solution of directly modeling interventions, or their surrogates, is not sufficient. An explicit causal inference framework is required.
When the intended use of a prognostic model is to support decisions concerning intervention(s), the counterfactual causal framework provides a natural and powerful way to ensure that predictions issued by the prognostic model are useful, interpretable, and less vulnerable to degradation over time. The framework allows predictions to be used to answer “what if” questions; for an introduction, see Hernán and Robins. However, appropriate modeling of these counterfactual scenarios is far more challenging than pure prediction, particularly in the presence of time-dependent confounding. Here, standard regression modeling becomes inadequate and specialist techniques are required.
In the scenarios carefully articulated by Lenert et al, in which risk models are used to alert to a high-risk situation and thereby inform intervention, one should primarily be interested in the counterfactual “treatment-naïve” prediction: in other words, “what is the risk of outcome for this individual if we do not intervene?” Failure to explicitly model this treatment-naïve prediction will lead to high-risk patients being classified inappropriately as low risk, as their prediction is reflective of interventions made to lower the risk of similar patients in the past. This situation becomes more pronounced when a successful model is updated, as interventions made based on the predictions from the model are hoped to change the risk. Recently, we illustrated how to calculate treatment-naïve risk in the presence of “treatment drop-in,” a scenario in which patients begin taking treatments after the time a prediction is made but before the outcome. With treatment-naïve risk as a baseline, one can move to evaluating predictions under a range of different interventions; the counterfactual causal framework allows a model to be interrogated with a series of “what if” questions. Comparison of the outcome predictions or distributions under different scenarios can then naturally provide information to support intervention decisions.
Alongside this counterfactual framework, we agree with Lenert et al that “robust performance surveillance of models in clinical use” is required postdeployment as part of prognostic model maintenance and model surveillance. However, doing this through so-called static updating, in which previous iterations of a risk model are refined according to new datasets observed in batches, still requires timely identification of performance drift. This often leads to an identification-action latency period, in which noticing and acting on a deterioration in a model’s performance occurs much later in time than should be acceptable in clinical practice.
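As a toy illustration of the treatment drop-in problem described above, the sketch below simulates patients whose risk attracts treatment after the prediction time, then contrasts a naive prognostic model (fit to observed outcomes) with a treatment-naïve model obtained by inverse probability weighting, one way of operationalising the marginal-structural-model idea. All variable names, parameters, and the data-generating process are invented for illustration; this is not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

# A single baseline risk factor (standardised).
x = rng.normal(size=n)
X = x.reshape(-1, 1)

# Treatment drop-in: higher-risk patients are more likely to start
# a risk-lowering treatment after the prediction is made.
p_treat = 1 / (1 + np.exp(-(x - 1)))
treated = rng.random(n) < p_treat

# Outcome model: treatment lowers the log-odds of the outcome by 1.
logit = -2 + 1.5 * x - 1.0 * treated
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# 1) Naive prognostic model: fit to observed outcomes, so its predictions
#    implicitly absorb the treatments that high-risk patients received.
naive = LogisticRegression().fit(X, y)

# 2) Treatment-naive model via inverse probability weighting: fit on
#    untreated patients only, each weighted by 1 / P(untreated | x),
#    reconstructing the population in which nobody started treatment.
ps = LogisticRegression().fit(X, treated)
p_untreated = ps.predict_proba(X)[:, 0]
mask = ~treated
tn = LogisticRegression().fit(
    X[mask], y[mask], sample_weight=1 / p_untreated[mask])

x_hi = np.array([[2.0]])  # a high-risk patient
print(f"naive risk:           {naive.predict_proba(x_hi)[0, 1]:.2f}")
print(f"treatment-naive risk: {tn.predict_proba(x_hi)[0, 1]:.2f}")
```

The naive model understates this patient's untreated risk, because similar patients in the training data were often treated; the weighted model recovers the counterfactual "what if we do not intervene" risk.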
This latency is amplified by a lower frequency of updating but could be mitigated through continuous surveillance and maintenance of the prognostic models. So-called dynamic modeling is an emerging area of research that enables the continuous incorporation of surveillance and refinement directly into the modeling processes and could prevent prognostic models being “victims of their own success” if combined appropriately with counterfactual frameworks. While counterfactual prediction is only beginning to be applied in prognostic model development, it is a technique that will allow many of the issues eloquently described by Lenert and colleagues to be mitigated. Moreover, it provides predictions that are arguably closer to what a decision maker needs, and likely to be more robust over time.

CONFLICT OF INTEREST STATEMENT

None declared.
REFERENCES (4 in total)

1.  Prediction models in obstetrics: understanding the treatment paradox and potential solutions to the threat it poses.

Authors:  F Cheong-See; J Allotey; N Marlin; B W Mol; E Schuit; G Ter Riet; R D Riley; K G M Moons; K S Khan; S Thangaratinam
Journal:  BJOG       Date:  2016-01-25       Impact factor: 6.531

2.  Prognostic models will be victims of their own success, unless….

Authors:  Matthew C Lenert; Michael E Matheny; Colin G Walsh
Journal:  J Am Med Inform Assoc       Date:  2019-12-01       Impact factor: 4.497

3.  (Review) Dynamic models to predict health outcomes: current status and methodological challenges.

Authors:  David A Jenkins; Matthew Sperrin; Glen P Martin; Niels Peek
Journal:  Diagn Progn Res       Date:  2018-12-18

4.  Using marginal structural models to adjust for treatment drop-in when developing clinical prediction models.

Authors:  Matthew Sperrin; Glen P Martin; Alexander Pate; Tjeerd Van Staa; Niels Peek; Iain Buchan
Journal:  Stat Med       Date:  2018-08-02       Impact factor: 2.373

CITED BY (4 in total)

1.  (Review) Review of Clinical Research Informatics.

Authors:  Anthony Solomonides
Journal:  Yearb Med Inform       Date:  2020-08-21

2.  (Review) Prediction or causality? A scoping review of their conflation within current observational research.

Authors:  Chava L Ramspek; Ewout W Steyerberg; Richard D Riley; Frits R Rosendaal; Olaf M Dekkers; Friedo W Dekker; Merel van Diepen
Journal:  Eur J Epidemiol       Date:  2021-08-15       Impact factor: 8.082

3.  (Review) A Causal Framework for Making Individualized Treatment Decisions in Oncology.

Authors:  Pavlos Msaouel; Juhee Lee; Jose A Karam; Peter F Thall
Journal:  Cancers (Basel)       Date:  2022-08-14       Impact factor: 6.575

4.  Counterfactual prediction is not only for causal inference.

Authors:  Barbra A Dickerman; Miguel A Hernán
Journal:  Eur J Epidemiol       Date:  2020-07       Impact factor: 8.082

