
From Individual to Population Preferences: Comparison of Discrete Choice and Dirichlet Models for Treatment Benefit-Risk Tradeoffs.

Tommi Tervonen, Francesco Pignatti, Douwe Postmus.

Abstract

Introduction. The Dirichlet distribution has been proposed for representing preference heterogeneity, but there is limited evidence on its suitability for modeling population preferences on treatment benefits and risks. Methods. We conducted a simulation study to compare how the Dirichlet and standard discrete choice models (multinomial logit [MNL] and mixed logit [MXL]) differ in their convergence to stable estimates of population benefit-risk preferences. The source data consisted of individual-level tradeoffs from an existing 3-attribute patient preference study (N = 560). The Dirichlet population model was fit directly to the attribute weights in the source data. The MNL and MXL population models were fit to the outcomes of a simulated discrete choice experiment in the same sample of 560 patients. Convergence to the parameter values of the Dirichlet and MNL population models was assessed with sample sizes ranging from 20 to 500 (100 simulations per sample size). Model variability was also assessed with coefficient P values. Results. Population preference estimates of all models were very close to the sample mean, and the MNL and MXL models had good fit (McFadden's adjusted R2 = 0.12 and 0.13). The Dirichlet model converged reliably to within 0.05 distance of the population preference estimates with a sample size of 100, whereas the MNL model required a sample size of 240 to do so. The MNL model produced consistently significant coefficient estimates with sample sizes of 100 and higher. Conclusion. The Dirichlet model is likely to have smaller sample size requirements than standard discrete choice models in modeling population preferences for treatment benefit-risk tradeoffs and is a useful addition to the health preference analyst's toolbox.

Keywords:  decision analysis; health preference elicitation; patient choice modeling; pharmacoepidemiology

Year:  2019        PMID: 31496357      PMCID: PMC6843605          DOI: 10.1177/0272989X19873630

Source DB:  PubMed          Journal:  Med Decis Making        ISSN: 0272-989X            Impact factor:   2.583


Preference studies are increasingly being used to support health policy decision making with regulatory agencies recently expressing interest in preference-based benefit-risk assessment.[1-3] Discrete choice experiments (DCEs) are the most commonly used method for eliciting benefit-risk tradeoffs in the health domain.[4] In a DCE, benefit-risk tradeoffs are inferred from a series of choice questions in which participants are asked to choose between 2 or more hypothetical treatment profiles. The utility that a participant obtains from a treatment profile is assumed to be a random variable whose expected value is expressed as a function of the attribute levels that constitute that treatment profile. The regression coefficients of this function are the parameters of interest for the DCE and can be used to calculate attribute weights that express the marginal rate of substitution between 2 attributes. Because of the limited amount of information that is obtained with each discrete choice question, DCEs often require hundreds of answers for estimating the preference parameters (i.e., the benefit-risk tradeoffs, possibly conditional on a set of explanatory covariates) sufficiently accurately.[5] Moreover, the maximum number of attributes respondents can handle in DCEs is limited; the exact number is context dependent,[6] but most recently published health DCEs have used between 4 and 9 attributes.[7] Finally, although more complex statistical models allow distinguishing and characterizing preference heterogeneity,[8] DCEs rarely allow estimating individual-level utility functions with high precision.[9] Other preference elicitation and modeling methods have been developed to overcome these challenges. 
Instead of assuming that utility is a latent variable, multicriteria decision analysis is based on normative models of rational choice that state that a subject’s preference structure can be represented by means of a utility function if that subject’s choice behavior satisfies certain basic rationality axioms, such as completeness and transitivity. Several direct valuation methods have been developed to elicit the parameters of this function, which has an additive structure when the attributes under consideration are preferentially independent for the decision maker. The parameters of interest for the additive value model are the attribute weights and the marginal gain or loss in utility from increasing attribute values (i.e., the so-called partial value or utility functions). Swing weighting and other direct valuation methods can handle a larger number of attributes, and their questioning procedures are designed in such a way that they completely identify an individual’s utility function. However, when direct valuation methods are used, and the analyst wants to generalize from the sample to the population, a statistical model for the data-generating process needs to be specified. Various authors have proposed using the Dirichlet distribution for modeling the distribution of the attribute weights in the population.[10-12] The Dirichlet distribution is particularly compelling for this purpose, given its support is the simplex (i.e., the full feasible space of attribute weights when they are normalized to sum to unity). However, there is limited empirical evidence on the use of the Dirichlet distribution to model population preferences, including an understanding of the convergence of the parameter estimates with sample sizes commonly encountered in health preference studies. This article aims to fill this evidence gap by evaluating the use of the Dirichlet distribution for modeling population preferences. 
We compare the Dirichlet distribution to the multinomial logit model (MNL) commonly used for modeling preferences as captured with a DCE, by conducting computational experiments with data collected in a previous preference study. We also fit a mixed logit (MXL) model to the data and discuss the differences between the MXL and Dirichlet approaches.

Methods

We conducted a simulation study to compare how the Dirichlet and MNL models differ in their convergence to stable estimates of the population preferences. We based our computational experiments on an existing study[13] that used an online questionnaire with choice-based matching questions to elicit the preferences of 560 patients with multiple myeloma for hypothetical cancer treatments. The treatments were described in terms of the following 3 attributes: probability of being progression free for 1 year or longer (index: 1; levels: 50%, 60%, 70%, 80%, and 90%), risk of moderate but chronic toxicity (index: 2; levels: 45%, 55%, 65%, 75%, and 85%), and risk of severe toxicity (index: 3; levels: 20%, 35%, 50%, 65%, and 80%). The source data consisted of a set of N = 560 weight vectors (1 for each patient) that were derived from the patients' responses to the choice-based matching questions. Using these real data, instead of simulated data, may provide better evidence on the methods' convergence in a realistic setting. For the Dirichlet model, the utility that a random patient $i$ obtains from a hypothetical treatment $j$ with attribute values $x_{j1}, x_{j2}, x_{j3}$ was specified as

$$u_i(j) = \sum_{a=1}^{3} w_{ia} v_a(x_{ja}).$$

Here, the attribute weights $w_i = (w_{i1}, w_{i2}, w_{i3})$, which are nonnegative and normalized to sum to unity, are Dirichlet distributed with density $f(w_i \mid \alpha)$. To estimate $\alpha$, we fit a Dirichlet regression model directly to the attribute weights in the source data. We refer to this fitted distribution as $\mathrm{Dir}(\hat{\alpha})$. For comparison purposes, we also fit an MNL model and an MXL model to the outcomes of a simulated DCE in the same sample of 560 patients. To obtain discrete choice data sets for fitting the 2 models, we simulated a DCE with the following design. First, we generated an orthogonal design using the method[14] with 2 choice alternatives and 5 levels for each of the 3 attributes.
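The Dirichlet fitting step above can be sketched in code. The paper fit a Dirichlet regression in R; the standalone Python version below (function and variable names are illustrative, not from the paper) estimates the concentration vector by maximum likelihood over the per-respondent weight vectors:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import dirichlet

def fit_dirichlet(weights):
    """Maximum likelihood estimate of the Dirichlet concentration vector
    alpha, given an (N, K) array of per-respondent weight vectors."""
    def neg_loglik(log_alpha):
        # Optimize on the log scale so that alpha stays positive.
        alpha = np.exp(log_alpha)
        return -np.sum(dirichlet.logpdf(weights.T, alpha))
    res = minimize(neg_loglik, x0=np.zeros(weights.shape[1]),
                   method="Nelder-Mead")
    return np.exp(res.x)

# Synthetic check: draw 560 weight vectors from a known Dirichlet whose
# concentrations roughly match the magnitudes later reported in Table 1.
rng = np.random.default_rng(0)
true_alpha = np.array([3.0, 1.0, 1.8])
sample = rng.dirichlet(true_alpha, size=560)
alpha_hat = fit_dirichlet(sample)
print(alpha_hat)  # close to true_alpha for a sample of this size
```

With N = 560 complete weight vectors, this is a 3-parameter optimization, which is consistent with the paper's observation that the Dirichlet approach needs relatively little modeling once per-respondent weights are available.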
Then, we filtered out questions in which 1 of the choice alternatives was dominated (i.e., would have higher risk of both toxicities and lower probability of 1-year progression-free survival). This resulted in a set of 16 possible discrete choice questions. Next, we simulated the individual patient responses based on the behavioral assumption that, for any given question, patients choose the alternative that provides the greatest utility. The utility that patient $i$ obtained from alternative $j$ in choice question $k$ was generated according to the following equation:

$$U_{ijk} = \sum_{a=1}^{3} w_{ia} v_a(x_{jka}) + \varepsilon_{ijk}.$$

Here, $\varepsilon_{ijk}$ is a Gumbel distributed error term that was added to the utility values of the choice alternatives in each question to make the resulting choice behavior consistent with the assumptions underlying the MNL and MXL models.[15-17] The attribute weights $w_{ia}$ in this equation were directly taken from the source data. To determine a suitable scale value $\lambda$ for the Gumbel distributed error term, we conducted, for each candidate value of $\lambda$, a set of 1000 experiments in which each of the N = 560 patients was simulated to answer all 16 discrete choice questions. These results indicated that the expected Euclidean distance between the sample mean of the attribute weights (i.e., the arithmetic mean of the attribute weights of all 560 patients in the sample) and the normalized attribute weights calculated from the regression coefficients of the fitted MNL models was minimal at a particular value of $\lambda$ (see the results in the Supplementary Material). This scale value was therefore used for further simulations of the MNL and MXL models. To fit the MNL and MXL models to the outcomes of the simulated DCE in the sample of 560 patients, we used linear models with continuous level encoding to estimate only the coefficients that express marginal rates of substitution between the attributes:

$$U_{ijk} = \sum_{a=1}^{3} \beta_{ia} x_{jka} + \varepsilon_{ijk}.$$

Here, the vector of preference weights $\beta_i = (\beta_{i1}, \beta_{i2}, \beta_{i3})$ is either fixed across patients (MNL) or independently normally distributed (MXL), and $\varepsilon_{ijk}$ is Gumbel distributed with scale $\lambda$.
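The choice-simulation mechanism described above can be sketched as follows. This is an illustrative Python version (the paper's simulations were in R), with made-up partial values and an arbitrary Gumbel scale, since the paper's calibrated scale is reported only in its Supplementary Material:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_choice(weights, values_a, values_b, scale):
    """Return 0 if alternative A is chosen, 1 if B: each alternative's
    utility is the weighted sum of its partial values plus Gumbel noise."""
    u_a = weights @ values_a + rng.gumbel(scale=scale)
    u_b = weights @ values_b + rng.gumbel(scale=scale)
    return 0 if u_a >= u_b else 1

# One respondent with weights (PFS, moderate AE, severe AE); the partial
# values are illustrative: higher PFS is better, higher risks are worse.
w = np.array([0.54, 0.14, 0.32])
alt_a = np.array([0.9, -0.45, -0.20])  # high benefit, low risks
alt_b = np.array([0.5, -0.85, -0.80])  # dominated by A on every attribute
choices = [simulate_choice(w, alt_a, alt_b, scale=0.2) for _ in range(1000)]
print(sum(choices) / 1000)  # B is chosen only occasionally, via the noise
```

Because the noise difference of two independent Gumbel draws is logistic, simulating choices this way produces data whose choice probabilities match the logit form assumed by the MNL and MXL estimators.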
We refer to these fitted distributions (a deterministic, degenerate distribution in the case of MNL) as $\hat{\beta}_{\mathrm{MNL}}$ and $\hat{\beta}_{\mathrm{MXL}}$, respectively. The normalized attribute weights can be obtained from the preference weights by multiplying the latter with the attribute scale variation and then rescaling so that they sum to unity. To achieve comparability of the results of the different models, all comparisons were made at the level of the normalized attribute weights. To assess the goodness of fit of the 3 models, the mean normalized attribute weights from the fitted models were compared with the sample mean of the attribute weights in the source data. Standard errors and 95% confidence intervals (CIs) for the mean attribute weights of the MNL and MXL models were obtained using the delta method.[18] The 95% CIs for the sample mean and the mean attribute weights of the Dirichlet model were obtained through bootstrapping. For the Dirichlet and MXL models, the fitted distribution of the attribute weights was also visually compared with the actual distribution of the attribute weights in the source data. To assess the convergence of the MNL and Dirichlet models to the previously fitted population models $\hat{\beta}_{\mathrm{MNL}}$ and $\mathrm{Dir}(\hat{\alpha})$, we conducted a series of computational experiments with a varying number of respondents. The DCE data sets for the MNL models were constructed by simulating 6 discrete choice questions from the set of 16 possible questions for each participant. The questions were resampled for each simulated respondent to minimize errors due to inefficient experimental design. The choice probabilities for the choice alternatives in these questions were obtained directly from the logit probabilities evaluated at $\hat{\beta}_{\mathrm{MNL}}$. For the fitting of the Dirichlet models, attribute weights were randomly sampled from $\mathrm{Dir}(\hat{\alpha})$. We varied the number of simulated respondents between 20 and 500 and repeated each simulation 100 times to assess the variance of the results.
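The normalization step described above (coefficient times attribute scale variation, then rescaling to unit sum) can be sketched directly. This is an illustrative Python helper, using the rounded MNL coefficients and level ranges from the study design; the small gap from Table 1's published normalized weights comes from coefficient rounding:

```python
import numpy as np

def normalized_weights(coefs, level_ranges):
    """Multiply each coefficient by its attribute's level range, take
    absolute values, and rescale so the weights sum to unity."""
    raw = np.abs(np.asarray(coefs, dtype=float)) * \
          np.asarray(level_ranges, dtype=float)
    return raw / raw.sum()

# Rounded MNL coefficients from Table 1 (PFS, moderate AEs, severe AEs)
# and the design's level ranges: 90-50 = 40, 85-45 = 40, 80-20 = 60.
w = normalized_weights([0.033, -0.009, -0.015], [40, 40, 60])
print(w.round(2))  # [0.51 0.14 0.35]; rounding of the coefficients
                   # explains the gap from Table 1's (0.54, 0.13, 0.33)
```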
We measured convergence by calculating the Euclidean distance between the mean normalized attribute weights from the fitted models and the mean normalized attribute weights of the population models $\hat{\beta}_{\mathrm{MNL}}$ and $\mathrm{Dir}(\hat{\alpha})$. By measuring convergence to the (normalized) means of the previously fitted population models rather than to the sample mean of the attribute weights in the source data, we are able to assess convergence under ideal circumstances, where no bias is caused by misspecification of the preference model. In addition to the Euclidean distance, we evaluated the MNL model coefficients' P values to understand when a hypothetical analyst could consider the results to be sufficiently accurate. Finally, we measured maximum acceptable risks of the adverse event (AE) attributes to assess whether the results in terms of key behavioral outputs differ from the results for individual model parameters. All simulations were implemented in R. The MXL model was estimated using 5000 Halton draws. All program code and the full source data set are available online.[19] This research has received no external funding.
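The maximum-acceptable-risk (MAR) output mentioned above is the extra risk of an adverse event a respondent would tolerate for a 1-percentage-point gain in 1-year PFS, i.e., the ratio of the per-point marginal values of the two attributes. A minimal sketch (Python for illustration, using the sample-mean weights and design level ranges; differences from Table 1's MARs reflect rounding of the weights):

```python
def mar(w_benefit, range_benefit, w_risk, range_risk):
    """Extra risk (percentage points) tolerated per 1-point gain in the
    benefit attribute: ratio of the per-point marginal values."""
    return (w_benefit / range_benefit) / (w_risk / range_risk)

# Sample-mean weights from Table 1 with level ranges 40 (PFS),
# 40 (moderate AEs), and 60 (severe AEs).
print(round(mar(0.54, 40, 0.14, 40), 2))  # 3.86 (Table 1: 3.79%)
print(round(mar(0.54, 40, 0.32, 60), 2))  # 2.53 (Table 1: 2.49%)
```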

Results

The fitted Dirichlet, MNL, and MXL population models as well as the sample mean of the attribute weights in the source data are presented in Table 1. All models approximated the sample mean well, and the MNL and MXL models had reasonably good fits (McFadden's adjusted R2 = 0.12 and 0.13).
Table 1

Estimates of Population Means of Normalized Attribute Weights Based on the Sample Mean and the Fitted Dirichlet, MNL, and MXL Models, as Well as the Maximum Acceptable Risks of AEs to Increase the Probability of 1-Year PFS by 1%

Attribute: 1-year PFS
  Sample mean (95% CI): 0.54 (0.52–0.55)
  Dirichlet: alpha 2.963 (SE 0.045); normalized weight 0.52 (0.50–0.54)
  MNL: mean 0.033 (SE 0.001); normalized weight 0.54 (0.52–0.56)
  MXL: mean 0.047 (SE 0.002); normalized weight 0.54 (0.52–0.56); SD 0.021 (0.002)

Attribute: Moderate AEs
  Sample mean (95% CI): 0.14 (0.13–0.15)
  Dirichlet: alpha 0.969 (SE 0.043); normalized weight 0.17 (0.16–0.18)
  MNL: mean −0.009 (SE 0.001); normalized weight 0.13 (0.11–0.16)
  MXL: mean −0.012 (SE 0.001); normalized weight 0.13 (0.11–0.16); SD 0.012 (0.003)

Attribute: Severe AEs
  Sample mean (95% CI): 0.32 (0.30–0.34)
  Dirichlet: alpha 1.792 (SE 0.044); normalized weight 0.31 (0.30–0.33)
  MNL: mean −0.015 (SE 0.001); normalized weight 0.33 (0.31–0.35)
  MXL: mean −0.019 (SE 0.001); normalized weight 0.33 (0.31–0.35); SD 0.013 (0.002)

MAR (SE), moderate AEs: sample 3.79% (0.16); Dirichlet 3.06% (0.11); MNL 4.15% (0.46); MXL 4.00% (0.42)
MAR (SE), severe AEs: sample 2.49% (0.11); Dirichlet 2.48% (0.11); MNL 2.44% (0.12); MXL 2.47% (0.12)
McFadden's adjusted R2: MNL 0.12; MXL 0.13

AE, adverse event; CI, confidence interval; MNL, multinomial logit; MXL, mixed logit; PFS, progression-free survival; SD, standard deviation; SE, standard error; MAR, maximum acceptable risk.

Dirichlet distribution SEs are on log scale. The 95% confidence intervals are [2.711, 3.239] (PFS), [0.890, 1.055] (moderate AEs), [1.642, 1.956] (severe AEs).

Figure 1 presents the results from the computational experiments assessing model convergence. For both the Dirichlet and MNL models, the estimated mean normalized attribute weights converged toward the population mean values in Table 1. The Dirichlet model seems to converge better than the MNL model: the mean attribute weights of the fitted Dirichlet models converged to within 0.05 distance of the mean of the Dirichlet population model with a sample size of 100 in 97% of the simulations, whereas the fitted MNL models required a sample size of 240 to converge to the same distance of the normalized mean attribute weights of the MNL population model in 96% of the simulations. Similar results were observed when convergence was assessed in terms of maximum acceptable risks (see the results in the Supplementary Material).
Figure 1

Box plots of convergence of the multinomial logit (MNL; top) and Dirichlet (bottom) models to the fitted population models with varying sample sizes; the dashed blue line indicates the Euclidean distance 0.15 that has been used to truncate the data set.

Figure 2 presents convergence of the MNL model with respect to the P value of the least important attribute (moderate AEs). The results indicate that the MNL model consistently produced significant (P < 0.05) estimates with a sample size of 100 or higher. With sample sizes of 60 to 80, there were some simulations in which the analyst would not be able to conclude the significance of the estimate.
Figure 2

Significance (P value) of the least-important attribute (moderate adverse effects) in the multinomial logit model, with sample size varying from 20 to 100; the P value was <0.05 in all simulations in which the number of respondents was >100.

Figure 3 compares the distribution of the attribute weights in the original study with the distribution of the attribute weights for the fitted Dirichlet and MXL models. Samples drawn from the fitted MXL and Dirichlet models have similar spread over the preference space, although the Dirichlet model seems to have a slightly higher dispersion than the MXL model. Both models seem to describe the source data reasonably well.
Figure 3

Weights from the original study (left) and the same number of samples (N = 560) from the MXL model (center) and Dirichlet model (right); red dots indicate sample mean (source data) and distribution means (MXL and Dirichlet). AE, adverse event; MXL, mixed logit; PFS, progression-free survival.


Discussion

Our computational experiments demonstrated that the Dirichlet model is likely to have smaller sample size requirements than the MNL model in modeling population benefit-risk preferences. Although we have no quantitative evidence of differences between the Dirichlet and MXL approaches, the full-sample preference distributions seemed similar, which indicates that the Dirichlet distribution may also be appropriate for modeling preference heterogeneity in benefit-risk tradeoffs. Importantly, our results indicate that the Dirichlet distribution is able to represent the population benefit-risk tradeoffs once they are captured using an elicitation technique, such as the choice-based matching that was applied in the source data study. This implication has direct practical relevance for treatment benefit-risk analyses using methods that apply the Dirichlet distribution[20,21]: our results demonstrate that the full distribution, including the concentration parameter that has previously been undefined, can reliably be estimated with reasonably small sample sizes. Fitting a Dirichlet distribution with a standard maximum likelihood procedure requires per-respondent tradeoff weights to be available in complete format. These are usually obtained with a direct elicitation procedure, which is generally thought to be more demanding to complete than indirect procedures such as DCEs, as they require preferences to be expressed in precise cardinal terms. Therefore, direct elicitation procedures often require facilitation, making their application a resource-intensive exercise with larger samples.[22] However, once the per-respondent preferences are available, understanding their distribution requires less modeling than what is needed to analyze discrete choice data. This study has some important limitations. First, we conducted experiments on only a single data set that contained 3 attributes. 
Most health preference studies are conducted on larger sets of benefit, risk, and process attributes. However, the Dirichlet distribution is well understood, and we would not expect the estimate precision to suffer more from an increase in dimensionality than the MNL model. Furthermore, using only a 3-attribute data set has the additional advantage that the preference space is 2 dimensional and can therefore be easily visualized. Future research should assess the Dirichlet approach in studies with more attributes. Second, we did not compare the Dirichlet and MXL models in the experiments because 1) MXL estimation is much more time-consuming than MNL estimation and 2) specifying preferences that adhere to the MXL distributional assumptions would have added an extra layer of complexity to the experiments. Third, we considered the Dirichlet model only for the case in which respondent preferences are available in a complete format. In practice, there may be partial or incomplete preference data for some respondents, such as a ranking of the attribute scale swings instead of the exact tradeoff weights. Future research should consider estimation and convergence of the Dirichlet model in such cases.
References

1.  Fifty years after thalidomide; what role for drug regulators?

Authors:  Hans-Georg Eichler; Eric Abadie; Mary Baker; Guido Rasi
Journal:  Br J Clin Pharmacol       Date:  2012-11       Impact factor: 4.335

2.  A simple way to unify multicriteria decision analysis (MCDA) and stochastic multicriteria acceptability analysis (SMAA) using a Dirichlet distribution in benefit-risk assessment.

Authors:  Gaelle Saint-Hilary; Stephanie Cadour; Veronique Robert; Mauro Gasparini
Journal:  Biom J       Date:  2017-02-10       Impact factor: 2.207

3.  Quantifying benefit-risk preferences for medical interventions: an overview of a growing empirical literature.

Authors:  A Brett Hauber; Angelyn O Fairchild; F Reed Johnson
Journal:  Appl Health Econ Health Policy       Date:  2013-08       Impact factor: 2.561

4.  Statistical Methods for the Analysis of Discrete Choice Experiments: A Report of the ISPOR Conjoint Analysis Good Research Practices Task Force.

Authors:  A Brett Hauber; Juan Marcos González; Catharina G M Groothuis-Oudshoorn; Thomas Prior; Deborah A Marshall; Charles Cunningham; Maarten J IJzerman; John F P Bridges
Journal:  Value Health       Date:  2016-05-12       Impact factor: 5.725

5.  MCDA swing weighting and discrete choice experiments for elicitation of patient benefit-risk preferences: a critical assessment.

Authors:  Tommi Tervonen; Heather Gelhorn; Sumitra Sri Bhashyam; Jiat-Ling Poon; Katharine S Gries; Anne Rentz; Kevin Marsh
Journal:  Pharmacoepidemiol Drug Saf       Date:  2017-07-11       Impact factor: 2.890

6.  Periodic benefit-risk assessment using Bayesian stochastic multi-criteria acceptability analysis.

Authors:  Kan Li; Shuai Sammy Yuan; William Wang; Shuyan Sabrina Wan; Paulette Ceesay; Joseph F Heyse; Shahrul Mt-Isa; Sheng Luo
Journal:  Contemp Clin Trials       Date:  2018-03-02       Impact factor: 2.226

7.  Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force.

Authors:  F Reed Johnson; Emily Lancsar; Deborah Marshall; Vikram Kilambi; Axel Mühlbacher; Dean A Regier; Brian W Bresnahan; Barbara Kanninen; John F P Bridges
Journal:  Value Health       Date:  2013 Jan-Feb       Impact factor: 5.725

8.  Conducting discrete choice experiments to inform healthcare decision making: a user's guide.

Authors:  Emily Lancsar; Jordan Louviere
Journal:  Pharmacoeconomics       Date:  2008       Impact factor: 4.981

9.  Discrete choice experiments in health economics: a review of the literature.

Authors:  Michael D Clark; Domino Determann; Stavros Petrou; Domenico Moro; Esther W de Bekker-Grob
Journal:  Pharmacoeconomics       Date:  2014-09       Impact factor: 4.981

10.  Individual Trade-Offs Between Possible Benefits and Risks of Cancer Treatments: Results from a Stated Preference Study with Patients with Multiple Myeloma.

Authors:  Douwe Postmus; Sarah Richard; Nathalie Bere; Gert van Valkenhoef; Jayne Galinsky; Eric Low; Isabelle Moulon; Maria Mavris; Tomas Salmonsson; Beatriz Flores; Hans Hillege; Francesco Pignatti
Journal:  Oncologist       Date:  2017-10-27