Jonathan Karnon¹, Tazio Vanni. ¹ University of Adelaide, Adelaide, South Australia, Australia. jonathan.karnon@adelaide.edu.au
Abstract
BACKGROUND: The importance of assessing the accuracy of health economic decision models is widely recognized. Many applied decision models (implicitly) assume that the process of identifying relevant values for a model's input parameters is sufficient to ensure the model's accuracy. The selection of infeasible combinations of input parameter values is most likely in the context of probabilistic sensitivity analysis (PSA), in which parameter values are drawn from independently specified probability distributions for each model parameter. Model calibration involves identifying input parameter values that produce model outputs that best predict observed data.

METHODS: An empirical comparison of three key calibration issues is presented: the applied measure of goodness of fit (GOF); the search strategy for selecting sets of input parameter values; and the convergence criteria for determining acceptable GOF. The comparisons are presented in the context of probabilistic calibration, a widely applicable approach to calibration that can be easily integrated with PSA. The appendix provides a user's guide to probabilistic calibration, and readers are invited to download the Microsoft® Excel-based model reported in this article.

RESULTS: The calibrated models consistently provided higher mean estimates of the models' output parameter, illustrating the potential gain in accuracy derived from calibrating decision models. Model uncertainty was also reduced. The chi-squared GOF measure differentiated between the accuracy of different parameter sets to a far greater degree than the likelihood GOF measure. The guided search strategy produced higher mean estimates of the models' output parameter, as well as a narrower range of predicted output values, which may reflect greater precision in the identification of candidate parameter sets or more limited coverage of the parameter space. The broader convergence threshold resulted in lower mean estimates of the models' output and slightly wider ranges, which were closer to the outputs associated with the non-calibrated approach.

CONCLUSIONS: Probabilistic calibration provides a broadly applicable method that will improve the relevance of health economic decision models while simultaneously reducing model uncertainty. The analyses reported in this paper inform the more efficient and accurate application of calibration methods for health economic decision models.
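The calibration approach described above can be illustrated with a minimal sketch: parameter sets are drawn from independently specified PSA distributions (a random search), the model is run for each set, a chi-squared GOF statistic is computed against observed calibration targets, and only sets whose GOF falls below a convergence threshold are retained. The model, the priors, and the targets below are all hypothetical stand-ins, not the Excel model reported in the article.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical observed calibration targets, e.g. event counts in three strata
observed = np.array([120.0, 95.0, 60.0])

def run_model(p_progress, rel_risk):
    """Toy stand-in for a decision model: maps two input parameters
    to predictions of the three calibration targets."""
    base = 1000 * p_progress
    return np.array([base, base * rel_risk * 0.8, base * rel_risk * 0.5])

def chi_squared(predicted, observed):
    # Pearson chi-squared GOF: sum of (O - E)^2 / E
    return np.sum((observed - predicted) ** 2 / predicted)

# Probabilistic calibration via random search over the PSA distributions:
# independently sampled input sets are accepted only if their GOF is
# below the convergence threshold.
n_draws = 10_000
threshold = 7.81  # chi-squared 95% critical value with 3 degrees of freedom
accepted = []
for _ in range(n_draws):
    p = rng.beta(12, 88)          # assumed prior for a progression probability
    rr = rng.lognormal(0.0, 0.2)  # assumed prior for a relative risk
    gof = chi_squared(run_model(p, rr), observed)
    if gof < threshold:
        accepted.append((p, rr, gof))

print(f"accepted {len(accepted)} of {n_draws} candidate parameter sets")
```

The accepted sets then replace the unrestricted PSA samples when propagating uncertainty to the model's outputs; widening `threshold` admits more (and worse-fitting) sets, mirroring the broader convergence criterion compared in the article.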