BACKGROUND: Recent proposals suggest that risk-stratified analyses of clinical trials be performed routinely to better tailor treatment decisions to individual patients. Trial data can be stratified using externally developed risk models (eg, the Framingham risk score), but such models are not always available. We sought to determine whether internally developed risk models, fit directly to trial data, introduce bias compared with external models.
METHODS AND RESULTS: We simulated a large patient population with known risk factors and outcomes. Clinical trials were then simulated by repeatedly sampling from this population, assuming a specified relative treatment effect in the experimental arm that either did or did not vary with a subject's baseline risk. For each simulated trial, 2 internal risk models were developed: one on the control arm only (internal controls-only) and one on the whole trial population blinded to treatment assignment (internal whole-trial). Bias was estimated by comparing each internal model's treatment effect predictions with those from the external model. Under all treatment assumptions, internal models introduced only modest bias relative to external models. These biases were slightly smaller for internal whole-trial models than for internal controls-only models. Internal whole-trial models were also slightly less susceptible to bias from overfitting and less prone to falsely identifying variability in treatment effect across the risk spectrum.
CONCLUSIONS: Appropriately developed internal models produce relatively unbiased estimates of treatment effect across the spectrum of risk. When estimating treatment effect, internally developed risk models fit to both treatment arms should, in general, be preferred to models developed on the control arm alone.
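The simulation design described in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' actual code: it assumes a logistic outcome model with two risk factors and a constant odds-ratio treatment effect, and all variable names (`m_ctrl`, `m_whole`) and parameter values are illustrative. It fits the two internal risk models — one on controls only, one on the whole trial blinded to arm — and then tabulates the observed risk difference within quartiles of internally predicted risk.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# 1. Simulated "super-population" with known risk factors and true baseline risk.
N = 200_000
X = rng.normal(size=(N, 2))
true_lp = -2.0 + 0.8 * X[:, 0] + 0.5 * X[:, 1]      # true linear predictor
base_risk = 1 / (1 + np.exp(-true_lp))              # true baseline risk

# 2. Draw one trial: randomize 1:1, apply a constant relative effect (OR = 0.7).
n_trial = 10_000
idx = rng.choice(N, n_trial, replace=False)
x, lp = X[idx], true_lp[idx]
treat = rng.integers(0, 2, n_trial)
p = 1 / (1 + np.exp(-(lp + np.log(0.7) * treat)))
y = rng.random(n_trial) < p                          # observed binary outcome

# 3. The two internal risk models from the abstract.
m_ctrl = LogisticRegression().fit(x[treat == 0], y[treat == 0])  # controls only
m_whole = LogisticRegression().fit(x, y)                         # blinded to arm

risk_ctrl = m_ctrl.predict_proba(x)[:, 1]
risk_whole = m_whole.predict_proba(x)[:, 1]

# 4. Risk-stratified analysis: observed risk difference by quartile of
#    internally predicted risk (whole-trial model).
cuts = np.quantile(risk_whole, [0.25, 0.5, 0.75])
group = np.digitize(risk_whole, cuts)
for g in range(4):
    m = group == g
    rd = y[m & (treat == 1)].mean() - y[m & (treat == 0)].mean()
    print(f"risk quartile {g + 1}: observed risk difference = {rd:+.3f}")
```

In a full replication of the study, step 2 would be repeated many times (with and without risk-dependent treatment effects), and bias would be measured by comparing each internal model's stratum-specific effect estimates with those from an external model fit on an independent sample.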