Literature DB >> 35704682

An examination of machine learning to map non-preference based patient reported outcome measures to health state utility values.

Mona Aghdaee1, Bonny Parkinson1, Kompal Sinha2, Yuanyuan Gu1, Rajan Sharma1, Emma Olin1, Henry Cutler1.   

Abstract

Non-preference-based patient-reported outcome measures (PROMs) are popular in health outcomes research. These measures, however, cannot be used to estimate health state utilities, limiting their usefulness for economic evaluations. Mapping PROMs to a multi-attribute utility instrument is one solution. While mapping is commonly conducted using econometric techniques, failing to specify the complex interactions between variables may lead to inaccurate prediction of utilities, resulting in inaccurate estimates of cost-effectiveness and suboptimal funding decisions. These issues can be addressed using machine learning. This paper evaluates the use of machine learning as a mapping tool. We adopt a comprehensive approach to compare six machine learning techniques with eight econometric techniques to map the Patient-Reported Outcomes Measurement Information System Global Health 10 (PROMIS-GH10) to the EuroQol five dimensions (EQ-5D-5L). Using data collected from 2015 Australians, we find that the least absolute shrinkage and selection operator (LASSO) model out-performed all other machine learning techniques and the adjusted limited dependent variable mixture model (ALDVMM) out-performed all other econometric techniques, with the LASSO performing better than the ALDVMM. The variable selection feature of LASSO was then used to enhance the performance of the ALDVMM in a hybrid model. Our analysis identifies the potential benefits and challenges of using machine learning techniques for mapping and offers important insights for future research.
© 2022 The Authors. Health Economics published by John Wiley & Sons Ltd.

Keywords:  EQ-5D; PROMIS; econometrics; machine learning; mapping; utility

Year:  2022        PMID: 35704682      PMCID: PMC9545032          DOI: 10.1002/hec.4503

Source DB:  PubMed          Journal:  Health Econ        ISSN: 1057-9230            Impact factor:   2.395


INTRODUCTION

Patient‐reported outcome measures (PROMs) are being used more often in healthcare systems as funders increasingly seek value‐based care. Non‐preference based PROMs are increasingly included in clinical studies, health service management and research. However, these measures cannot be used to estimate health state utility values (henceforth “utilities”), limiting their usefulness for economic evaluations. Mapping non‐preference based PROMs to a multi‐attribute utility instrument (MAUI), which can be used to estimate utilities, is one solution to this problem (Kearns et al., 2013). Mapping is a statistical technique used to link outcomes from non‐preference‐based PROMs (“explanatory variables”) to a MAUI using an alternative data source. The benefits of mapping have been acknowledged in a review of the UK's National Institute for Health and Care Excellence (NICE) appraisals conducted over 2004–2008 (Tosh et al., 2011), which found an increase in the use of utility mapping approaches, accounting for over a quarter of total submissions. Consequently, the updated NICE guidelines for 2013 recommended using mapping to estimate utilities in the absence of direct utility measures (National Institute for Health and Care Excellence, 2013). Mapping is also accepted by the Pharmaceutical Benefits Advisory Committee and the Medical Services Advisory Committee (MSAC) in Australia as an alternative approach for estimating utilities in economic evaluations (Department of Health, 2016; Medical Services Advisory Committee, 2016). Existing literature on mapping health outcomes has adopted direct and indirect mapping approaches. The direct approach estimates utilities directly from explanatory variables. The indirect approach, also known as response mapping, first predicts the probabilities for each response to each MAUI question, and then uses relevant tariffs to convert them into utilities (Hernandez‐Alava et al., 2014). 
The resulting algorithm can be applied to PROM data to estimate the associated utilities, and thus quality-adjusted life years (QALYs) (Wailoo et al., 2017). To investigate the validity of different mapping techniques, Brazier and colleagues undertook a systematic review of studies mapping between non-preference based PROMs, generic preference-based measures, and MAUIs (Brazier et al., 2010). They found that most studies using the direct approach adopted the linear, ordinary least squares (OLS) regression technique to predict health state utilities. This could result in inaccurate prediction of utilities, since utilities are bounded at one and have a distribution skewed to the left (Ara & Brazier, 2008; Brazier et al., 2010; Crott & Briggs, 2010; Rowen et al., 2009). Other popular econometric techniques used to directly predict utilities include Tobit (Sullivan & Ghushchyan, 2006), the generalized linear model (GLM) (Sharma et al., 2019), censored least absolute deviation (CLAD) (Kaambwa et al., 2006; Sullivan & Ghushchyan, 2006), and median regression (Wu et al., 2007). Each of these techniques is better suited than OLS to predicting utilities, particularly in accommodating the distinctive characteristics of utilities: being bounded and clustering at one. Specifically, the standard Tobit technique accounts for the bounded utilities but does not allow for a gap below the mass of observations at one found in preference-based measures (Sullivan & Ghushchyan, 2006). Certain families of GLM are able to accommodate flexible non-linear relationships but may produce inconsistent estimates when the link function is misspecified (Dakin et al., 2010). Median regression is more robust to outliers (Shaw et al., 2010) but does not consider utilities being bounded. The CLAD extends median regression with the dependent variable constrained to a fixed interval (Powell, 1984).
However, since cost‐effectiveness analyses (the main reason for needing mapping exercises) are based on mean values, techniques based on medians are less useful. For indirect mapping, multinomial logit (MLOGIT), ordered logit (OLOGIT), and generalized ordered logit (GLOGIT) have been applied in the literature (Gray et al., 2006). Recently, mixture models such as the mixture beta regression model (Betamix) and the adjusted limited dependent variable mixture model (ALDVMM) have been adopted in mapping studies as preferred techniques due to their flexibility and ability to accommodate multimodality (Basu & Manca, 2012; Gray & Hernandez‐Alava, 2018; Hernandez‐Alava & Wailoo, 2015; Hernandez‐Alava et al., 2013; Khan & Morris, 2014; Yang, Wong, et al., 2019; Young et al., 2015). The Betamix is a two‐part model (consisting of a multinomial logit and a beta mixture model), which allows estimation of multimodal dependent variables bounded in an interval (Gray & Hernandez‐Alava, 2018) and has been shown to out‐perform linear regression (Khan & Morris, 2014; Yang, Wong, et al., 2019). ALDVMM is a mixture model of adjusted Tobit‐like distributions (Hernandez‐Alava & Wailoo, 2015), which deals with utility data's distributional features and accounts for the multimodality. ALDVMM assumes that utilities can be modeled as a mixture of multiple components, each representing a cluster of respondents with similar utility scores. It combines multiple component distributions with a multinomial logit model of the probabilities of component membership. ALDVMM has been shown to perform better than other traditional econometric techniques used in the mapping literature (Gray & Hernandez‐Alava, 2018). While mapping has become a common practice in estimating utilities, the characteristics of health utilities may limit the accuracy of mapping algorithms. 
In addition to being bounded and highly skewed, with a mass of observations at one (full health) (Brazier et al., 2010), utilities often have conditional distributions that are not easily accommodated by standard parametric distributions. For economic evaluations, it is imperative for these utility predictions to be accurate. The relationships between PROMs and MAUIs are commonly non-linear and involve complex interactions among explanatory variables. In standard econometric techniques used for mapping, the selection of the distribution function and explanatory variables is based on prior knowledge of the clinical relationships between the variables and on standard statistical tests. Moreover, the probabilistic distribution of the error terms is often not explicit, and the relevant explanatory variables and their relationship with utilities are not immediately apparent. Failing to specify these relationships appropriately will reduce the accuracy of the mapping algorithm. One way to avoid this potential problem is to use machine learning techniques for mapping. The use of machine learning has increased in recent years in all areas of research (Athey & Imbens, 2019), including health economics. Applications include estimating the treatment effects of medical interventions (Kreif et al., 2015), analyzing prescribing patterns (Schilling et al., 2016), identifying thresholds and hierarchies in funding decisions (Schilling et al., 2017), and predicting healthcare costs (Konig et al., 2013). The key strengths of machine learning techniques compared to standard econometric techniques are prediction accuracy and parsimony, as there is less need to impose parametric assumptions.
Machine learning does not require prespecifying the probabilistic distribution of the error term, selecting explanatory variables, or assuming their inter-relationships, that is, additive or multiplicative interactions of their effects on the conditional mean of the outcome, as well as their linear or non-linear associations with the dependent variable (Varian, 2014). This is particularly useful when explanatory variables are numerous and their significance and potential interactions are unknown. While it is not feasible to test all possible combinations of explanatory variables with standard econometric techniques, machine learning has the advantage of using data-driven techniques to determine the relationships between explanatory variables and the outcome (Breiman et al., 1984; Strobl et al., 2009). The objective of this study was to evaluate the performance of machine learning techniques for mapping non-preference based PROMs to MAUIs compared to standard econometric techniques. In the absence of preference-based measures, mapping predicts utilities from non-preference based PROMs. Given that the predicted utilities are used in economic evaluations and ultimately in funding decisions, producing a robust and appropriate mapping algorithm is crucial, as the accuracy of a mapping technique affects the predicted utilities and thus the estimated cost-effectiveness of an intervention. It is therefore essential to compare the performance of commonly used econometric techniques with a selection of machine learning techniques, and to choose the most accurate one (Yang, Devlin, & Luo, 2019). One of the most popular and well-established PROMs is the Patient Reported Outcomes Measurement Information System (PROMIS), developed by the National Institutes of Health in the United States in 2004.
One of its three instrument types is the PROMIS short form Global Health 10 (PROMIS‐GH10), which is a generic measure of health focusing on physical, mental and social well‐being from the patient perspective (Cella et al., 2010; Hays et al., 2009). The PROMIS‐GH10 is widely used across the world as the gold standard for patient‐centered assessment. In Australia, New South Wales (NSW) Health has adopted PROMIS‐GH10 as a key evaluation component of the NSW Health Integrated Care Strategy (Thompson et al., 2016). In the UK, the National Institute for Health Research has supported validating and calibrating PROMIS‐GH10 for administration in clinical practices and research, in an attempt to unify the PROMs and shift toward a more patient‐centered health system (Evans et al., 2018). Internationally, PROMIS‐GH10 has been recommended as a core outcome measure in several clinical areas by the International Consortium for Health Outcomes Measurement (Nijagal et al., 2018; Salinas et al., 2016). The growing preference toward patient reported outcomes has resulted in a rapidly expanding literature using PROMIS‐GH10 to collect patient reported data. Since a commonly used measure in economic evaluations is the EuroQol five dimensions (EQ‐5D‐5L), this paper predicted utilities from the PROMIS‐GH10 response using EQ‐5D‐5L as the target measure of mapping. The relationship between PROMIS‐GH10 and EQ‐5D‐5L questions is not obvious and given the complexity of the possible interactions among the questions and different levels, there is potential to explore the latest techniques such as machine learning to improve mapping accuracy. This paper makes three important contributions to the literature. 
First, based on the techniques used in the existing literature, we used a range of econometric techniques including linear regression, Tobit, median regression, GLM, CLAD, Betamix, ALDVMM, and GLOGIT and machine learning techniques including classification and regression trees analysis (CART), bagged CART, random forests, Neural Networks (NN), quantile regression neural networks (QRNN), and least absolute shrinkage and selection operator (LASSO) to map from PROMIS‐GH10 to EQ‐5D‐5L. To the best of our knowledge, this is the first study to apply multiple machine learning techniques to map non‐preference based PROMs to a MAUI and compare them to econometric techniques. The only other study comparing the performance of econometric techniques to machine learning techniques was Park and Basu (2018), which assessed the predictive accuracy of these techniques in the context of risk‐adjustment in the health insurance market. Second, capitalizing on our approach of comparing techniques, we combine the best performing machine learning technique (LASSO) and best performing econometric technique (ALDVMM) to propose a hybrid model for prediction. This enabled us to highlight the advantage of combining machine learning and econometric techniques for better outcomes particularly since LASSO as a prediction technique cannot produce a mapping algorithm. Finally, while most existing studies focused on mapping PROMIS‐GH10 to EQ‐5D‐3L (Revicki et al., 2009; Thompson et al., 2017), we undertook the first mapping exercise to map from PROMIS‐GH10 to EQ‐5D‐5L, which has greater sensitivity and covers a wider range of health states. We provide a mapping algorithm to predict EQ‐5D‐5L utilities when only PROMIS‐GH10 data is collected but a health economic evaluation is desired. The rest of this paper is organized as follows. The next section describes our data and the measures of performance followed by Section 3, where we discuss the methods. 
Section 4 presents the results and Section 5 concludes with a discussion.

DATA

An online survey was conducted in February 2018 to collect responses to the PROMIS-GH10 and EQ-5D-5L instruments from a representative general population sample of 2015 Australians (Hays et al., 2009; Herdman et al., 2011). The PROMIS-GH10 consists of 10 questions about physical function, pain, fatigue, emotional distress, social health, and general perceptions of health. Each question measures severity on a scale from one to five, except for pain, which ranges from 0 to 10. Two summary scores, of physical and mental health, are derived from the PROMIS-GH10 (Hays et al., 2009). The five-level version of the EQ-5D has recently been introduced by the EuroQol Group to attain greater sensitivity to health state changes and a broader range of utilities than the previous three-level version (EQ-5D-3L) (Janssen et al., 2008). The EQ-5D-5L consists of five questions about mobility, self-care, usual activities, pain/discomfort, and anxiety/depression. Each dimension has five levels, from having no problems to having extreme problems (Herdman et al., 2011). The EQ-5D-5L utilities were estimated using Australian tariffs (Norman et al., 2017) (see Table 1, Approach 5 of their paper). Demographic information on age, sex, state, and postcode, and an optional response to the Charlson Comorbidity Index (CCI), was also collected (Chaudhry et al., 2005).

METHODS

All statistical analyses were conducted in STATA 16 and R (Version 4.0.3). The mapping techniques used in this paper comply with the ISPOR Good Practices for Outcomes Research Task Force Report (Wailoo et al., 2017), and the Mapping onto Preference‐Based Measures Reporting Standards (MAPS) checklist (Dakin et al., 2018; Petrou et al., 2015) (see Appendix A for details).

Overview

We developed algorithms to predict the conditional mean of the target measure (here EQ-5D-5L utilities) from observations of the source measure (here PROMIS-GH10). The predictions were then compared with the actual target measure observations to assess the accuracy of the algorithms. In direct mapping, the source measure or explanatory variables (here the PROMIS-GH10 items or summary scores) were directly mapped onto the target measure or dependent variable (here EQ-5D-5L utility values). In comparison, indirect mapping was performed in two stages: the responses to each dimension of the target measure (the EQ-5D-5L dimensions: mobility, self-care, usual activities, pain/discomfort, and anxiety/depression) were treated as the dependent variables, and the predicted responses were then combined using a relevant tariff to estimate utilities.

Measures of model performance

The performance of the prediction models was measured by in-sample cross-validation using a k-fold technique (Fushiki, 2011) with 10 folds. The dataset was randomly divided into k = 10 subsamples, of which k − 1 = 9 subsamples were used as the estimation sample and one subsample was used as the validation sample for testing the accuracy of the predictions. This process was repeated 10 times, with each of the 10 subsamples used once as the validation data. The 10-fold cross-validation was performed for both machine learning and econometric techniques to enable comparability. The predictive accuracy was determined by the degree to which the predicted utilities reflected the observed utilities. The primary measure of predictive accuracy was the average Mean Absolute Error (MAE) after truncation across the validation subsamples (Wailoo et al., 2017). While MAE was used as the primary measure of the predictive accuracy of each technique, other measures were also reported, including the MAE before truncation, the Mean Squared Error (MSE) before and after truncation, the predicted mean utility, the predicted minimum utility, and the predicted maximum utility. The predicted mean utility was reported as it is often used in cost-effectiveness analyses, while the minimum and maximum utilities were reported to assess how the techniques performed at the extremes. Plots comparing the distribution of the observed versus predicted utilities were also presented to examine how each technique fits different parts of the distribution. It is important to note that the goodness-of-fit criteria were based on overall utility and do not reveal the prediction accuracy of the techniques relating to the underlying items. This does not affect the analysis, as the objective was to assess the prediction accuracy relating to the overall utility for each respondent, which can be used in cost-effectiveness analyses.
Because no external dataset with the five-level version of the EQ-5D was available, only internal cross-validation was applied in this study.
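The validation design above can be sketched in a few lines. The sketch below is a simplified stand-in (not the paper's STATA/R code): it shuffles the sample into 10 folds and averages the out-of-fold MAE of predictions truncated at one; `fit_mean`, a model that always predicts the training mean, is a toy placeholder for any of the mapping techniques.

```python
import random

def mae(y_true, y_pred):
    """Mean absolute error between observed and predicted utilities."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def kfold_mae(xs, ys, fit, k=10, seed=0):
    """Average out-of-fold MAE over k folds, mirroring the paper's design."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]            # k roughly equal subsamples
    scores = []
    for f in folds:
        sf = set(f)
        train = [i for i in idx if i not in sf]      # remaining k-1 subsamples
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        preds = [min(model(xs[i]), 1.0) for i in f]  # truncate predictions at one
        scores.append(mae([ys[i] for i in f], preds))
    return sum(scores) / k

# toy check: a mean-predicting "model" on synthetic utilities
fit_mean = lambda X, Y: (lambda x, m=sum(Y) / len(Y): m)
ys = [0.4, 0.6, 0.8, 1.0] * 25
print(round(kfold_mae(list(range(100)), ys, fit_mean), 3))
```

With a mean predictor the average out-of-fold MAE sits near the mean absolute deviation of the outcome, which is the baseline any real mapping model must beat.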

Econometric techniques

Direct mapping

In direct mapping, where the explanatory variables (here PROMIS-GH10 items or summary scores) were directly mapped onto the EQ-5D-5L utility values, seven econometric techniques were used: linear regression, Tobit, median regression, GLM, CLAD, Betamix, and ALDVMM. The dependent variable (target measure) in estimating the linear regression, Tobit, median regression, GLM and CLAD techniques was disutilities (= 1 − utilities), and predictions were deducted from one to estimate utilities. We used the utilities as the dependent variable to estimate the Betamix and ALDVMM models. Four models based on the sets of explanatory variables were specified as follows, where EQ-5D-5Li represents the predicted utility for individual i:

Set 1: EQ-5D-5Li = f(PROMIS-GH10 physical-scorei, PROMIS-GH10 mental-scorei, Agei, Agei², Sexi)
Set 2: EQ-5D-5Li = f(PROMIS-GH10_itemsi, Agei, Agei², Sexi)
Set 3: EQ-5D-5Li = f(PROMIS-GH10_items_cati, Agei, Agei², Sexi)
Set 4: EQ-5D-5Li = f(PROMIS-GH10_items_cati, Age_cati, Sexi)

In set 1, EQ-5D-5L utilities were predicted using the physical (PROMIS-GH10 physical-score) and mental (PROMIS-GH10 mental-score) health summary scores of PROMIS-GH10 (as continuous variables), age, age squared, and sex. In set 2, all the PROMIS-GH10 questions as continuous variables, age, age squared, and sex were included. Set 3 consisted of the PROMIS-GH10 questions as categorical variables (PROMIS-GH10_items_cat), age, age squared, and sex; and in set 4, the PROMIS-GH10 questions, age (Age_cat), and sex (all as categorical variables) were considered. The age categories were defined based on Australian Bureau of Statistics (ABS) age categories (Australian Bureau of Statistics, 2017). Sets 1 and 2 were selected according to Revicki et al. (2009). Sets 3 and 4 directly included the PROMIS-GH10 items to take into account the ordinal nature of PROMIS-GH10 responses (Revicki et al., 2009). In the estimation of the GLM, the Modified Park Test identified a Poisson family distribution with a log link for the EQ-5D-5L utilities (Manning & Mullahy, 2001). Results were reported with and without the predicted utilities being truncated at one.

Indirect mapping

In indirect mapping, the responses to each EQ-5D-5L question were the dependent variables; the predicted responses were then combined to predict utilities. As each question was modeled separately, each mapping algorithm consisted of five separate models. One set of explanatory variables was considered in indirect mapping: the PROMIS-GH10 questions as categorical variables, age, and sex. As the dependent variables are categorical with discrete outcomes, one option would be the ordered logit model (OLOGIT) to predict the probability of each response level. The OLOGIT has the advantage of accounting for the order of the categorical responses to the EQ-5D-5L questions. However, the OLOGIT relies on an assumption of proportional odds, or parallel lines/slopes. It generates a set of binary response models for the different ordered categories, in which the intercepts differ but the coefficients for the explanatory variables are the same. This leads to the cumulative probability curves for the different ordered categories having parallel slopes. If this assumption is violated, OLOGIT provides biased estimates. An alternative to OLOGIT is the multinomial logit model (MLOGIT); however, it does not consider the ordinal structure of the dependent variables. In this paper, the generalized logit model (GLOGIT) was chosen over MLOGIT and OLOGIT because, while it considers the ordinal structure of the dependent variable, it is less restrictive in relaxing the parallel slopes assumption (Long & Freese, 2006). The conditional probability of an observation i belonging to response level m (for m = 2–5) can be written as

P(y_i = m | x_i) = exp(x_i'β_m) / (1 + Σ_{j=2..5} exp(x_i'β_j)),

where m denotes one of the five response levels and the reference category (m = 1) is normalized so that exp(x_i'β_1) = 1. GLOGIT generates several equations, each being a binary logistic regression that compares a group with the reference group, and each yielding the probability that an observation falls into that category.
Once these were obtained, individuals were assigned to one of the five levels using a Monte Carlo simulation approach where the predicted probabilities were compared to a random number from a uniform distribution. We ran 100 Monte Carlo simulations across the full sample. This approach is known to produce a more accurate distribution of responses in each dimension of EQ‐5D‐5L (Gray et al., 2006). Then the predicted responses were combined and utilities were calculated using the Australian EQ‐5D‐5L tariff (Gray et al., 2006; Long & Freese, 2006; Norman et al., 2017).
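The simulation step described above can be illustrated with a toy sketch. Here `assign_level` and `simulate_utility` are invented names, and the decrement values are hypothetical placeholders for illustration only, not the Australian EQ-5D-5L tariff of Norman et al. (2017): each uniform draw is compared to the cumulative predicted probabilities, and the drawn level's tariff decrement is subtracted from full health.

```python
import random

def assign_level(probs, u):
    """Map a uniform draw u to a response level via cumulative probabilities."""
    cum = 0.0
    for level, p in enumerate(probs, start=1):
        cum += p
        if u < cum:
            return level
    return len(probs)

def simulate_utility(dim_probs, decrements, runs=100, seed=0):
    """Average utility over Monte Carlo draws of the dimension responses.
    decrements[d][level-1] is a hypothetical tariff decrement from full health."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(runs):
        u = 1.0
        for d, probs in enumerate(dim_probs):
            lvl = assign_level(probs, rng.random())
            u -= decrements[d][lvl - 1]
        total += u
    return total / runs

# toy example: two dimensions, three levels each, illustrative decrements
dim_probs = [[0.7, 0.2, 0.1], [0.5, 0.3, 0.2]]
decs = [[0.0, 0.05, 0.15], [0.0, 0.07, 0.20]]
print(simulate_utility(dim_probs, decs))
```

Averaging over many draws reproduces the predicted distribution of responses rather than forcing every individual to the most likely level, which is why the approach yields a more accurate response distribution.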

Machine learning techniques

Supervised machine learning techniques are primarily concerned with building predictive models that perform well in predicting outcomes for as-yet unseen data. An important feature making these techniques suitable for mapping is their ability to incorporate a large set of variables in a non-linear pattern to improve prediction accuracy. We explored six supervised machine learning techniques to map from the PROMIS-GH10 to the EQ-5D-5L: CART, bagging, random forests, NN, QRNN, and LASSO. The choice of techniques was based on the relative advantage of each technique. For all the machine learning techniques except LASSO, the explanatory variables were not prespecified. Instead, the explanatory variables to include were selected by the machine learning technique from the set of all potential explanatory variables in the data (the big model), including PROMIS-GH10 responses, age, and sex.

Classification and regression trees analysis (CART)

Generating a CART model involves selecting explanatory variables, and the split points on those variables, until an optimal tree is constructed. A tree is a prediction algorithm that grows by repeatedly splitting the data at nodes. At each node, the value of one of the explanatory variables (e.g., age > 50 or age ≤ 50) determines the next split. Classification trees and regression trees are adopted when the dependent variable is discrete and continuous, respectively. The selection of explanatory variables and the splits are chosen by minimizing a cost function. While in econometric techniques the inclusion of explanatory variables (PROMIS-GH10 questions) or their interactions is predefined, CART has the flexibility to include variables and their interactions automatically. For example, the interaction between pain intensity (question 10 of PROMIS-GH10) and other PROMIS-GH10 questions (physical function, fatigue, emotional distress, etc.) may impact the EQ-5D-5L utility values. In addition to accommodating interactions, CART produces algorithms that can readily be expressed and easily understood (Breiman et al., 1984), making it more favorable for mapping. In direct mapping using CART, regression trees were generated for the EQ-5D-5L utility values. The MSE between the observed and predicted utility values in each node was used to split the data and grow the tree. A range of restrictions was imposed on the tree construction, such as the minimum number of observations in a node before the split (n = 10), the complexity parameter (cp = 0.001), 10-fold cross-validation (xval = 10), and setting "minsplit" and "maxdepth" at different values to control the size of the tree (Breiman et al., 1984). The tree construction was stopped when the cost of adding another split to the tree from the current node was above the value of the parameter cp. For indirect mapping using CART, classification trees were grown for all five dimensions of EQ-5D-5L.
In growing classification trees, the Gini index was used as the splitting criterion (Breiman et al., 1984; Varian, 2014; Venables & Ripley, 2002). As with regression trees, the fully-grown tree was pruned back to the point where the cross-validation error was minimized. The best-sized regression and classification trees were chosen according to the smallest misclassification error within the estimation sample and the smallest cross-validation error. In the case of classification trees, the predicted responses to each dimension of EQ-5D-5L were combined, and an Australian tariff applied to calculate utilities (Norman et al., 2017). An example of a classification tree is presented in detail in Appendix B.
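The splitting criterion can be made concrete with a small sketch. This is a simplified stand-in for rpart's exhaustive search: it computes the Gini impurity and scans one explanatory variable for the threshold that minimizes the weighted impurity of the two child nodes.

```python
from collections import Counter

def gini(labels):
    """Gini impurity of a set of categorical responses (e.g., EQ-5D-5L levels)."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def best_split(xs, ys):
    """Threshold on one explanatory variable minimizing weighted child impurity."""
    best = (float("inf"), None)
    for t in sorted(set(xs))[:-1]:
        left = [y for x, y in zip(xs, ys) if x <= t]
        right = [y for x, y in zip(xs, ys) if x > t]
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(ys)
        best = min(best, (score, t))
    return best

# toy data: pain score (0-10) vs. a binary "any problems" response
xs = [0, 1, 2, 6, 7, 9]
ys = [1, 1, 1, 2, 2, 2]
print(best_split(xs, ys))   # a perfect split exists at pain <= 2
```

A full CART implementation repeats this search over every explanatory variable at every node, then prunes back using the complexity parameter.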

Random forest and bagging (bagged CART)

The single tree generated by CART is highly susceptible to variance in the data. Ensemble approaches such as random forests and bagging aim to minimize this variance in the prediction and thus improve predictive accuracy (Friedman et al., 2001). However, the lower variance comes at the cost of reduced interpretability, which makes these approaches less desirable for a mapping exercise. The ensemble approaches were adopted in this study to compare the predictive accuracy of models, although they do not generate an algorithm. With these techniques, a multitude of decision trees is generated and their outputs are then aggregated into a single prediction based on either the mode (for classification trees) or the mean prediction (for regression trees) of the individual trees (Strobl et al., 2009). Bagging reduces variance by averaging the outcomes from multiple fully-grown trees fitted to variants of the training data. This reduces the risk of overfitting and substantially improves predictive accuracy compared to a single decision tree (Breiman, 1996, 2001; Liaw & Wiener, 2002). The random forest technique is a modification of the bagging technique. It reduces variance further by decorrelating the trees: a random subset of the explanatory variables is selected at each split to grow independent trees, overcoming the problem of tree correlation inherent in bagging (Boehmke & Greenwell, 2019). In direct mapping, random forests were developed for the EQ-5D-5L utility values by splitting each node using a subset of explanatory variables (PROMIS-GH10 responses, age, and sex) each time. This technique was used to generate 500 decision trees from randomly selected subsets of the training dataset for each tree. As each tree is well fitted to a sub-sample of the data, the final random forest generated by aggregating these individual trees is expected to fit the whole dataset well.
Bagging is performed similarly; however, when splitting a node, the whole set of explanatory variables is considered. For indirect mapping using ensemble methods, the aggregated trees were generated for each dimension of EQ-5D-5L, then the predicted values were combined and an Australian tariff was applied to obtain utilities (Norman et al., 2017).
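The bootstrap-aggregation idea can be caricatured in a few lines. The sketch below is a toy, with randomly placed one-node stumps standing in for full CART trees: each stump is fit to a bootstrap resample, and the predictions are averaged (mean aggregation, as for regression trees). Even these deliberately weak learners recover the high-utility/low-pain pattern on average.

```python
import random

def bagged_predict(xs, ys, x_new, n_trees=500, seed=0):
    """Bagging in miniature: average many stumps fit to bootstrap resamples."""
    rng = random.Random(seed)
    n = len(xs)
    preds = []
    for _ in range(n_trees):
        boot = [rng.randrange(n) for _ in range(n)]   # sample with replacement
        # stump: predict the mean utility of the bootstrap cases falling on
        # x_new's side of a crude, randomly chosen split point
        t = xs[boot[rng.randrange(n)]]
        side = [ys[i] for i in boot if (xs[i] <= t) == (x_new <= t)]
        preds.append(sum(side) / len(side) if side else sum(ys) / n)
    return sum(preds) / n_trees                        # mean aggregation

xs = [1, 2, 3, 8, 9, 10]                 # e.g., a PROMIS-GH10 pain score
ys = [0.9, 0.95, 0.85, 0.4, 0.35, 0.3]   # observed utilities
print(round(bagged_predict(xs, ys, 2), 2))
```

A random forest differs only in also restricting each split to a random subset of the explanatory variables, which decorrelates the trees.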

Neural networks

Another machine learning method adopted was NN. Although the black-box nature of NN is not desirable for this study, they were chosen for their prediction superiority and their ability to perform with a relatively small dataset (Fausett, 1994; Shaikhina & Khovanova, 2017). Moreover, the ability of NN to learn hidden relationships in the data without imposing any fixed relationships makes them an excellent technique for prediction (Tu, 1996). To estimate utilities with NN, we used a series of multi-layer perceptron feedforward NN, where the information flows from the input nodes (explanatory variables) through the hidden nodes (if any) to the output node (utilities). The model consisted of an input layer of PROMIS-GH10 items, age, and sex (12 nodes), different layers of hidden nodes, and one output node. With direct mapping the output was the EQ-5D-5L utilities, and with indirect mapping the output was each dimension of EQ-5D-5L. We also adopted another NN-based technique, QRNN, a mixed technique with the combined advantages of quantile regression and NN. This technique can model data with non-homogeneous variances and can capture non-linear patterns using NN, thus advancing standard quantile regression (Cannon, 2011). Moreover, being more resistant to outliers, this technique allowed the predictions to preserve some aspects of the overall distribution of utilities. With this technique we used a median-regression NN, which was adopted only in direct mapping, using PROMIS-GH10 items, age, and sex as input nodes to predict EQ-5D-5L utilities (the output).
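The objective that distinguishes a QRNN from a standard NN is the quantile (pinball) loss, which at tau = 0.5 reduces to the median-regression criterion used here. A minimal sketch of that loss (the training loop itself is omitted):

```python
def pinball_loss(y_true, y_pred, tau=0.5):
    """Quantile (pinball) loss; tau = 0.5 gives the median-regression
    objective a QRNN minimizes in place of squared error."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        e = t - p
        total += tau * e if e >= 0 else (tau - 1) * e
    return total / len(y_true)

# under- and over-prediction by 0.2 are weighted equally at the median...
print(pinball_loss([1.0], [0.8]), pinball_loss([0.6], [0.8]))
# ...but asymmetrically at an upper quantile (tau = 0.9)
print(pinball_loss([1.0], [0.8], tau=0.9), pinball_loss([0.6], [0.8], tau=0.9))
```

Minimizing this loss over a network's weights pulls the fitted surface toward the conditional tau-quantile rather than the conditional mean, which is what makes the technique robust to outliers.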

Least absolute shrinkage and selection operator (LASSO)

We also included the machine learning technique LASSO because of its superiority in predicting utilities and in model selection. The least absolute shrinkage and selection operator is a type of regression that uses "shrinkage" by imposing a constraint on the parameters that causes the regression coefficients of less important variables to shrink toward zero (Tibshirani, 1996). The remaining variables with non-zero coefficients are those most strongly associated with the dependent variable, enhancing the prediction accuracy and interpretability of the results while reducing the issue of overfitting in regression models. The variable selection feature of LASSO is desirable for mapping. However, in this study we have a relatively small number of explanatory variables; in other mapping exercises using a source measure with a higher number of items and levels, a method superior in variable selection could be more beneficial. For the present analysis, we used LASSO for both prediction and variable selection. The former was used as an additional machine learning technique for mapping and the latter was used to enhance model performance when estimating the hybrid models (see Section 4.2.4). For direct mapping, LASSO was implemented with several model specifications and the Poisson model was found to perform the best (Park & Hastie, 2007). For prediction with LASSO, two model specifications were considered: the first included only the PROMIS-GH10 items, age, age squared, and sex, and the second additionally included all two-way interactions of these variables. The training data were used to estimate the model parameters and the best model was then selected based on the smallest out-of-sample MSE. Similar steps were followed to estimate LASSO in the indirect mapping, with the binomial model chosen to predict each dimension of EQ-5D-5L. However, due to computational difficulties, only one set of variables, without their interactions, was reported in this case.
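The shrinkage mechanism can be made concrete with the soft-thresholding operator and a minimal coordinate-descent loop. This is an OLS-form sketch for standardized columns, offered only to show how coefficients are set exactly to zero; the paper's actual fits used penalized Poisson and binomial models, and the function names here are invented for the sketch.

```python
def soft_threshold(z, lam):
    """Soft-thresholding, the core of LASSO: shrink toward zero and set
    sufficiently small coefficients exactly to zero."""
    if z > lam:
        return z - lam
    if z < -lam:
        return z + lam
    return 0.0

def lasso_cd(X, y, lam, n_iter=200):
    """Coordinate-descent LASSO for columns standardized to unit scale."""
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_iter):
        for j in range(p):
            # partial-residual correlation for coordinate j
            rho = sum(X[i][j] * (y[i] - sum(X[i][k] * beta[k]
                                            for k in range(p) if k != j))
                      for i in range(n)) / n
            beta[j] = soft_threshold(rho, lam)
    return beta

# y depends on the first column only; LASSO zeroes out the irrelevant second one
X = [[1, 1], [-1, 1], [1, -1], [-1, -1]]
y = [1.0, -1.0, 1.0, -1.0]
beta = lasso_cd(X, y, lam=0.1)
print([round(b, 2) for b in beta])   # → [0.9, 0.0]
```

The surviving non-zero coefficients are exactly the "selected" variables that were later passed to the hybrid ALDVMM specification.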

RESULTS

Descriptive statistics

The sample used to map from the PROMIS‐GH10 to the EQ‐5D‐5L consisted of 2015 Australian respondents who completed both instruments. Table 1 provides the sample descriptive statistics.
TABLE 1

Descriptive statistics

Variable                                | General population survey
Age (years), mean (SD)                  | 48.31 (17.79)
Age (years), range                      | 18–89
Female (%)                              | 53.40%
EQ‐5D‐5L utilities, mean (SD)           | 0.82 (0.25)
EQ‐5D‐5L utilities, range               | −0.43 to 1
Utilities <0, n (%)                     | 38 (1.89%)
Utilities = 1, n (%)                    | 440 (21.84%)
Utilities >0.9, n (%)                   | 1120 (55.58%)
PROMIS‐GH10 physical score, mean (SD)   | 14.21 (2.87)
PROMIS‐GH10 mental score, mean (SD)     | 13.22 (3.45)
No. of observations                     | 2015

Abbreviation: SD, standard deviation.

A high degree of overlap between the source and target measures contributes to more accurate mapping algorithms (Longworth & Rowen, 2013). The overlap between the PROMIS‐GH10 questions and the EQ‐5D‐5L dimensions and utilities was measured by their correlation, using Spearman's rank correlation coefficients (Zar, 1972). Moderately strong, statistically significant correlations between EQ‐5D‐5L utilities and the PROMIS physical (Spearman's rho (ρ) = −0.69, p = 0.00) and mental health scores (Spearman's rho (ρ) = −0.47, p = 0.00) were observed. These correlations suggest sufficient overlap between the source and target measures for mapping.
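The overlap check above can be reproduced with a small rank-correlation routine. The data below are an invented illustration, not the survey responses:

```python
import numpy as np

def average_ranks(v):
    """Rank data from 1..n, assigning tied values the mean of their ranks."""
    v = np.asarray(v, dtype=float)
    order = np.argsort(v, kind="mergesort")
    ranks = np.empty(len(v))
    ranks[order] = np.arange(1, len(v) + 1)
    for val in np.unique(v):           # average ranks within tied groups
        tie = v == val
        ranks[tie] = ranks[tie].mean()
    return ranks

def spearman_rho(x, y):
    """Spearman's rank correlation: the Pearson correlation of the ranks."""
    return np.corrcoef(average_ranks(x), average_ranks(y))[0, 1]

# Illustration: utilities fall as a hypothetical symptom score rises,
# so the rank correlation is negative, as for the PROMIS scores above.
score = np.array([5, 8, 9, 12, 14, 15, 18, 20])
utility = np.array([0.95, 0.93, 0.90, 0.80, 0.72, 0.74, 0.51, 0.40])
rho = spearman_rho(score, utility)   # ≈ −0.98 for this near-monotone pattern
```

Because Spearman's rho only uses ranks, it captures the monotone association between the ordinal PROMIS responses and utilities without assuming linearity.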

Model performance

Table 2 presents the performance of all the econometric and machine learning techniques across a range of criteria. Figures 1, 2, and 3 compare the distribution of predictions with the observed distribution for the econometric techniques, direct mapping with machine learning techniques, and indirect mapping with machine learning techniques, respectively. Each econometric model was estimated separately for the four sets of covariates described in Section 3.1. We first evaluate the performance of each type of technique individually and then compare the two types.
TABLE 2

Predicted statistics summary mapping PROMIS‐GH10 to EQ‐5D‐5L

Model | MAE (before) | MAE (after) | MSE (before) | MSE (after) | Mean (after truncation) | Minimum | Max. (before truncation) | Max. (after truncation) | % predicted >1 (before truncation)
Actual | – | – | – | – | 0.820901 | −0.426230 | 1 | 1 | –

Econometric models, direct mapping

Explanatory variable set 1
Linear regression | 0.142246 | 0.137432 | 0.044354 | 0.042543 | 0.832334 | 0.267356 | 1.198189 | – | 15.11%
Tobit | 0.131474 | 0.131474 | 0.040423 | 0.040423 | 0.829745 | 0.203345 | 0.964564 | 0.964564 | –
Median regression | 0.126732 | 0.126144 | 0.040256 | 0.039332 | 0.838288 | 0.181222 | 1.076223 | – | 15.26%
GLM | 0.135323 | 0.135323 | 0.042167 | 0.042167 | 0.839760 | 0.325465 | 0.985476 | 0.985476 | –
CLAD | 0.139422 | 0.136421 | 0.044545 | 0.042567 | 0.835377 | 0.173245 | 1.377532 | – | 15.46%
Betamix | 0.137780 | 0.137780 | 0.040465 | 0.040465 | 0.830632 | 0.123434 | 0.956323 | 0.956323 | –
ALDVMM | 0.135323 | 0.135323 | 0.038232 | 0.038232 | 0.830053 | 0.111389 | 0.968134 | 0.968134 | –

Explanatory variable set 2
Linear regression | 0.138100 | 0.130477 | 0.043210 | 0.042005 | 0.832442 | 0.253322 | 1.143212 | – | 15.06%
Tobit | 0.126243 | 0.126243 | 0.042901 | 0.042901 | 0.833654 | 0.196564 | 0.973114 | 0.973114 | –
Median regression | 0.125325 | 0.124466 | 0.042132 | 0.038965 | 0.835564 | 0.186231 | 1.032231 | – | 15.11%
GLM | 0.129445 | 0.129445 | 0.041543 | 0.041543 | 0.834412 | 0.294223 | 0.985234 | 0.985234 | –
CLAD | 0.135165 | 0.129321 | 0.044532 | 0.041345 | 0.834117 | 0.165564 | 1.144556 | – | 15.16%
Betamix | 0.121943 | 0.121943 | 0.037553 | 0.037553 | 0.830987 | 0.117326 | 0.975344 | 0.975344 | –
ALDVMM | 0.120387 | 0.120387 | 0.036890 | 0.036890 | 0.829922 | 0.116745 | 0.977111 | 0.977111 | –

Explanatory variable set 3
Linear regression | 0.105861 | 0.105061 | 0.034974 | 0.034195 | 0.819438 | −0.283661 | 1.013015 | – | 14.96%
Tobit | 0.103923 | 0.103923 | 0.030912 | 0.030912 | 0.817443 | −0.243432 | 0.986097 | 0.986097 | –
Median regression | 0.101734 | 0.099122 | 0.029041 | 0.028455 | 0.829874 | −0.410107 | 1.020771 | – | 15.01%
GLM | 0.106531 | 0.106531 | 0.031326 | 0.031326 | 0.817477 | −0.296354 | 0.988065 | 0.988065 | –
CLAD | 0.108825 | 0.107047 | 0.035533 | 0.033462 | 0.829588 | −0.333890 | 1.030432 | – | 15.31%
Betamix | 0.096645 | 0.096645 | 0.026508 | 0.026508 | 0.820799 | −0.353980 | 0.988395 | 0.988395 | –
ALDVMM | 0.095826 | 0.095826 | 0.025877 | 0.025877 | 0.820902 | −0.367103 | 0.988465 | 0.988465 | –

Explanatory variable set 4
Linear regression | 0.109855 | 0.107442 | 0.036302 | 0.035441 | 0.820402 | −0.285332 | 1.039458 | – | 15.01%
Tobit | 0.105336 | 0.105336 | 0.033271 | 0.033271 | 0.814437 | −0.242088 | 0.986098 | 0.986098 | –
Median regression | 0.103902 | 0.101391 | 0.031441 | 0.030102 | 0.830179 | −0.375063 | 1.021416 | – | 15.21%
GLM | 0.107401 | 0.107401 | 0.032052 | 0.032052 | 0.816418 | −0.287088 | 0.988033 | 0.988033 | –
CLAD | 0.110184 | 0.108371 | 0.035336 | 0.034298 | 0.829330 | −0.318164 | 1.089408 | – | 15.26%
Betamix | 0.100066 | 0.100066 | 0.029044 | 0.029044 | 0.819360 | −0.355600 | 0.988022 | 0.988022 | –
ALDVMM | 0.987012 | 0.987012 | 0.027421 | 0.027421 | 0.819057 | −0.366211 | 0.988195 | 0.988195 | –

Machine learning, direct mapping
CART (regression trees) | 0.126756 | 0.126756 | 0.048433 | 0.048433 | 0.812054 | −0.111331 | 0.981242 | 0.981242 | –
Random forests | 0.111418 | 0.111418 | 0.037371 | 0.037371 | 0.818166 | −0.202419 | 0.998012 | 0.998012 | –
Bagged CART | 0.112339 | 0.112339 | 0.041446 | 0.041446 | 0.817192 | −0.196299 | 0.991002 | 0.991002 | –
NN | 0.107195 | 0.107195 | 0.033278 | 0.033278 | 0.818389 | −0.245290 | 0.992866 | 0.992866 | –
QRNN | 0.104027 | 0.104027 | 0.031190 | 0.031190 | 0.819744 | −0.300812 | 0.997521 | 0.997521 | –
LASSO 1 | 0.095523 | 0.095523 | 0.025323 | 0.025323 | 0.820901 | −0.399345 | 0.998733 | 0.998733 | –
LASSO 2 | 0.101939 | 0.101939 | 0.029339 | 0.029339 | 0.810058 | −0.432911 | 0.964977 | 0.964977 | –

Econometric models, indirect mapping
GLOGIT | 0.107066 | 0.107066 | 0.029267 | 0.029267 | 0.836044 | −0.281108 | 1 | 1 | –

Machine learning, indirect mapping
CART (classification trees) | 0.118269 | 0.118269 | 0.041493 | 0.041493 | 0.860133 | −0.190286 | 1 | 1 | –
Random forests | 0.107251 | 0.107251 | 0.031279 | 0.031279 | 0.843662 | −0.235079 | 1 | 1 | –
Bagged CART | 0.111491 | 0.111491 | 0.032466 | 0.032466 | 0.846118 | −0.222931 | 1 | 1 | –
NN | 0.104729 | 0.104729 | 0.030422 | 0.030422 | 0.831362 | −0.260450 | 1 | 1 | –
LASSO 1 | 0.104419 | 0.104419 | 0.030680 | 0.030680 | 0.830096 | −0.355210 | 1 | 1 | –

Note: Results were obtained from 10‐fold cross‐validation. Explanatory variables for set 1: the physical and mental health summary scores of PROMIS‐GH10 (as continuous variables), age, age squared, sex; set 2: the PROMIS‐GH10 items, age, age squared, sex; set 3: the PROMIS‐GH10 (as categorical variables), age, age squared, and sex; set 4: the PROMIS‐GH10, age, and sex all as categorical variables. LASSO 1: LASSO technique is used for prediction. Explanatory variables (without interactions) are only considered. LASSO 2: LASSO technique is used for prediction. Explanatory variables and their two‐way interactions are considered.

Abbreviations: ALDVMM, adjusted limited dependent variable mixture model; Betamix, mixture beta regression model; CLAD, censored least absolute deviation; GLM, generalized linear model; GLOGIT, generalized logistic regression; LASSO, least absolute shrinkage and selection operator; MAE, mean absolute error; MSE, mean squared error; NN, neural networks; PROMIS‐GH10, PROMIS short form Global Health 10; QRNN, quantile (median) regression neural networks.
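The "before/after truncation" columns can be sketched as follows: predictions above the utility ceiling of one are truncated to one, then the error statistics are recomputed. The function and example data below are illustrative:

```python
import numpy as np

def mapping_metrics(y_obs, y_pred, ceiling=1.0):
    """MAE/MSE and summary statistics before and after truncating
    predicted utilities at the theoretical maximum of one."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    y_trunc = np.minimum(y_pred, ceiling)   # truncate overpredictions
    return {
        "mae_before": float(np.mean(np.abs(y_obs - y_pred))),
        "mae_after":  float(np.mean(np.abs(y_obs - y_trunc))),
        "mse_before": float(np.mean((y_obs - y_pred) ** 2)),
        "mse_after":  float(np.mean((y_obs - y_trunc) ** 2)),
        "mean_after": float(y_trunc.mean()),
        "minimum":    float(y_pred.min()),
        "max_before": float(y_pred.max()),
        "max_after":  float(y_trunc.max()),
        "pct_above_one": float(100 * np.mean(y_pred > ceiling)),
    }

# Two of four predictions exceed one and are truncated.
m = mapping_metrics(y_obs=[1.0, 0.9, 0.5, -0.2],
                    y_pred=[1.2, 1.1, 0.5, -0.1])
```

Models whose predictions never exceed one (Tobit, GLM, the mixture models, and all machine learning techniques) therefore report identical before- and after-truncation errors.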

FIGURE 1

Distribution of the observed versus predicted utilities using the econometric techniques. ALDVMM, adjusted limited dependent variable mixture model; Betamix, mixture beta regression model; CLAD, censored least absolute deviation; GLM, generalized linear model; GLOGIT, generalized logistic regression; MR, median regression

FIGURE 2

Distribution of the observed versus predicted utilities using direct mapping with machine learning techniques. LASSO 1: LASSO technique is used for prediction. Explanatory variables (without interactions) are only considered. LASSO 2: LASSO technique is used for prediction. Explanatory variables and their two‐way interactions are considered. LASSO, least absolute shrinkage and selection operator; NN, neural networks; QRNN, quantile (median) regression neural networks

FIGURE 3

Distribution of the observed versus predicted utilities using indirect mapping with machine learning techniques. LASSO 1: LASSO technique is used for prediction. Explanatory variables (without interactions) are only considered. LASSO, least absolute shrinkage and selection operator; NN, neural networks; QRNN, quantile (median) regression neural networks


Econometric techniques

Models using sets 3 and 4 consistently performed better than those using sets 1 and 2, as expected, since the ordinal nature of the PROMIS‐GH10 responses was not considered in the latter two sets. Our comparison is therefore based on sets 3 and 4. Overall, models using set 3 performed better than those using set 4, suggesting a quadratic functional form for age fits better than the dummy-coded age variable. This is also expected, since categorizing a continuous variable loses information. For the ALDVMM we considered two‐ and three‐component models and found the former superior. We assumed constant probabilities of component membership (no variables entered the membership equations), which might have affected the performance of the three‐component model; convergence was not achieved for a four‐component ALDVMM. As expected, linear regression, median regression, and CLAD overpredicted utilities (utilities >1) since they do not account for utilities being bounded. Our primary measure of predictive accuracy, the MAE (after truncation), was lowest for the mixture models (ALDVMM and Betamix) using set 3, with values of 0.095826 and 0.096645, respectively. This is consistent with the literature, which suggests their superiority over traditional econometric models owing to their high flexibility (Gray & Hernandez‐Alava, 2018; Hernandez‐Alava & Wailoo, 2015). The next best performing model was median regression using set 3, with an MAE (after truncation) of 0.099122. The remaining models using set 3 had similar MAEs (after truncation), ranging from 0.103923 to 0.107047. The indirect model GLOGIT performed rather poorly, with an MAE of 0.107066. This ranking remained broadly the same for the MSE (after truncation), except that GLOGIT was no longer the worst performer.
Specifically, of all the econometric techniques, the two mixture models (Betamix and ALDVMM) using sets 3 and 4 were the most accurate in predicting the observed mean, with the ALDVMM predicting it most closely (0.820902). GLOGIT had the poorest performance in predicting the observed mean. All models performed poorly in predicting the observed minimum utility of −0.426230: the best performing were median regression using set 3 (−0.410107) and the two mixture models using set 3 (−0.353980 and −0.367103), and the worst performing was Tobit (−0.242088). The models performed better in predicting the observed maximum utility of one. Among the models whose predictions did not exceed this bound, the indirect mapping approach GLOGIT performed best, as expected, with a maximum utility of exactly one; the others performed similarly, with maximum utilities ranging from 0.986097 to 0.988465. The comparison between the sample distribution and the distribution of the predictions is more revealing (Figure 1). All models apart from the two mixture models performed poorly toward the extremes. It is particularly interesting that, while median regression could be rated as good as the two mixture models based on Table 2, it is clearly inferior in fitting the different parts of the full distribution. Based on these comparisons, the ALDVMM, closely followed by Betamix, was the best performing econometric technique. The indirect model GLOGIT performed the worst, especially given its relatively poor mean utility prediction.

Machine learning

None of the machine learning techniques overpredicted the utilities, so the MAE before and after truncation were identical. We used the LASSO technique to estimate a model without variable interactions (LASSO 1) and a model with interactions (LASSO 2). While LASSO 1 was used in both direct and indirect mapping, the inclusion of interactions was restricted to direct mapping due to computational difficulty. The direct LASSO 1 model performed best, with an MAE of 0.095523, while regression trees performed worst, with an MAE of 0.126756. The best performing machine learning technique, direct LASSO 1, selected the following variables to optimize prediction: PROMIS‐GH10 question 1 (general health), question 4 (mental health), question 5 (social activities), question 6 (physical activities), question 7 (pain), question 9 (social activities), question 10 (emotional problems), sex, age, and age squared. The inclusion of two‐way interactions in the direct LASSO 2 model, somewhat surprisingly, worsened predictive performance, increasing the MAE from 0.095523 to 0.101939. For the regression tree, PROMIS‐GH10 question 6 (physical activities), question 7 (pain), question 9 (social activities), and question 10 (emotional problems) had high importance in predicting utilities, as most splits for growing the tree were based on responses to these questions. In the classification trees, different variables had high importance depending on the dimension; for example, in predicting the depression and anxiety dimension, PROMIS‐GH10 question 10 (emotional problems) and question 4 (mental health) had a greater contribution. The predictive accuracy of the regression and classification trees improved when random forests and bagging were applied. For regression trees, the MAE improved from 0.126756 to 0.111418 with random forests and to 0.112339 with bagging.
For classification trees, the MAE improved from 0.118269 to 0.107251 with random forests and to 0.111491 with bagging. An example of a classification tree prediction is presented in Appendix B. Direct mapping with NN further improved the MAE to 0.107195, and indirect mapping with NN resulted in an MAE of 0.104729. With QRNN, the MAE improved again to 0.104027. This performance ranking remained the same for the MSE. The direct machine learning techniques were more accurate than the indirect ones in predicting the observed mean, and direct LASSO 1 was the most accurate, predicting 0.820901 exactly. Classification trees were the worst, predicting the mean as 0.860133. Both the direct and indirect LASSO 1 and the direct LASSO 2 performed well in predicting the observed minimum utility of −0.426230, with the best performing being direct LASSO 2 (−0.432911) and direct LASSO 1 (−0.399345). Apart from direct LASSO 2, all the machine learning techniques performed well in predicting the maximum utility: the indirect techniques predicted the exact maximum of one, and the direct techniques predicted maxima in the range 0.981242–0.998733. Direct LASSO 2 predicted 0.964977. The comparison between the sample distribution and the distribution of the predictions gives more insight into how these techniques performed. Figures 2 and 3 suggest that, except for the LASSO models, all direct techniques fitted the distribution poorly. The indirect approaches using CART, random forests, bagged CART, and NN fitted the distribution better than the corresponding direct approaches. Overall, direct LASSO 1 dominated all the other machine learning techniques, and indirect LASSO 1 performed best among the indirect techniques. The CART techniques (classification and regression trees) performed the worst overall. The performance of the indirect approaches in each dimension of EQ‐5D‐5L is presented in Appendix D.
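The second stage of the indirect approach, converting the five predicted EQ‐5D‐5L dimension levels into a utility, can be sketched as an additive lookup. The decrements below are invented placeholders for illustration, not the value set used in the study:

```python
# Hypothetical per-dimension utility decrements indexed by level 1-5
# (level 1 = no problems). These numbers are illustrative only, not a
# published EQ-5D-5L tariff.
DECREMENTS = {
    "mobility":           [0.00, 0.04, 0.07, 0.15, 0.27],
    "self_care":          [0.00, 0.03, 0.06, 0.13, 0.22],
    "usual_activities":   [0.00, 0.03, 0.05, 0.11, 0.19],
    "pain_discomfort":    [0.00, 0.05, 0.08, 0.18, 0.30],
    "anxiety_depression": [0.00, 0.04, 0.07, 0.16, 0.28],
}

def utility_from_levels(levels):
    """Indirect mapping, stage two: five classifiers each predict a level
    (1-5) per dimension; the utility is one minus the summed decrements."""
    return 1.0 - sum(DECREMENTS[dim][lvl - 1] for dim, lvl in levels.items())

full_health = {d: 1 for d in DECREMENTS}   # all level 1 -> utility of one
mixed = {"mobility": 2, "self_care": 1, "usual_activities": 3,
         "pain_discomfort": 2, "anxiety_depression": 1}
```

Because every predicted profile is a valid health state, the implied utility can never exceed one, which is why the indirect techniques all reach the observed maximum of one exactly.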

Comparison of econometric and machine learning techniques

The direct LASSO 1 out‐performed the best performing econometric model (ALDVMM using set 3) on all criteria, although only by a relatively small margin. The former had slightly smaller MAE and MSE, a minimum prediction closer to the observed value, and a better maximum (i.e., closer to one). Both were able to predict the observed mean and fitted the distribution similarly. It is worth noting that, within the indirect mapping approaches, LASSO 1 also out‐performed the best performing econometric technique (GLOGIT).

Estimating the hybrid model

Overall, the LASSO and ALDVMM techniques out‐performed all the other machine learning and econometric techniques, respectively. However, the calculation of standard errors and variance‐covariance matrices with LASSO is not straightforward. Moreover, LASSO regularization excludes some variables to estimate a simpler model, and correlation between the selected and excluded variables might bias the estimated coefficients (Ahrens et al., 2019; Barlin et al., 2013; Lee et al., 2016). To overcome these limitations while exploiting LASSO's variable selection feature, we developed additional hybrid models (Hybrid 1 and Hybrid 2) that combine machine learning and econometric techniques. Specifically, we first selected the variables using LASSO and then re‐estimated the ALDVMM using these variables. However, choosing the variables using the whole sample and then re‐fitting the model can be problematic, as only the significant variables are chosen and the standard errors cannot be trusted (Lee et al., 2016; Mullainathan & Spiess, 2017). One way to address this issue is to divide the dataset into two sub‐samples and use one for variable selection and the other for model estimation (Zhao et al., 2017). Following this approach, we used half of the data for LASSO variable selection and then re‐fitted the ALDVMM with the selected variables using the other half. For comparability, we also estimated the ALDVMM with explanatory variable set 3 and the LASSO separately on the same estimation and validation samples (each 50% of the sample). For the Hybrid 1 model, we used LASSO to select variables among the PROMIS‐GH10 items, age, and sex (without their two‐way interactions). For the Hybrid 2 model, we additionally included the two‐way interactions in the variable selection.
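The split-sample hybrid procedure can be sketched as follows: LASSO selects variables on one half of the data, and an unpenalized model is re-fitted on the other half. Here ordinary least squares stands in for the ALDVMM, which has no off-the-shelf Python implementation, and the data are synthetic:

```python
import numpy as np

def lasso_cd(X, y, lam, n_iter=300):
    """Compact coordinate-descent LASSO, used here only for variable selection."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_iter):
        for j in range(p):
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / (X[:, j] ** 2).mean()
    return beta

# Synthetic stand-in for the survey data: 8 candidate variables, 3 informative.
rng = np.random.default_rng(1)
n, p = 600, 8
X = rng.normal(size=(n, p))
true_beta = np.array([0.6, -0.4, 0.0, 0.0, 0.3, 0.0, 0.0, 0.0])
y = X @ true_beta + rng.normal(scale=0.15, size=n)

# Step 1: variable selection by LASSO on the first half of the sample.
half = n // 2
selected = np.flatnonzero(np.abs(lasso_cd(X[:half], y[:half], lam=0.08)) > 1e-8)

# Step 2: re-fit an unpenalized model on the held-out half using only the
# selected variables, so the second-stage inference is not contaminated
# by the selection step.
coef, *_ = np.linalg.lstsq(X[half:][:, selected], y[half:], rcond=None)
```

Because the two halves are disjoint, the second-stage standard errors do not suffer from the post-selection problem that arises when selection and estimation use the same observations.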
The results presented in Table 3 suggest the Hybrid 1 model improved utility predictions at the extremes, with an MAE lower than that of the ALDVMM. The selected variables enabled the ALDVMM to predict a minimum utility of −0.322932, an improvement of 0.016082 over the −0.306850 predicted by the ALDVMM with set 3. Moreover, the mixture models' ability to accommodate multimodality, combined with the selected variables, resulted in better accuracy in predicting full‐health utilities. However, the ALDVMM performed slightly better in predicting the exact mean. These results suggest LASSO's variable selection feature improved the performance of the ALDVMM in terms of the MAE and the minimum and maximum utilities.
TABLE 3

Performance of hybrid models

Model | MAE | Rank | MSE | Rank | Mean | Rank | Minimum | Rank | Maximum | Rank
Hybrid 1 | 0.096125 | 2 | 0.026310 | 3 | 0.826410 | 3 | −0.322932 | 3 | 0.998757 | 2
Hybrid 2 | 0.098943 | 4 | 0.298421 | 4 | 0.815864 | 4 | −0.405733 | 1 | 0.979543 | 4
LASSO 1 a | 0.095993 | 1 | 0.025773 | 1 | 0.826159 | 1 | −0.347765 | 2 | 0.998831 | 1
LASSO 2 a | 0.995188 | 5 | 0.029542 | 5 | 0.810641 | 5 | −0.449753 | 5 | 0.969521 | 5
ALDVMM a | 0.096341 | 3 | 0.026052 | 2 | 0.826335 | 2 | −0.306850 | 4 | 0.988367 | 3
Actual observations in the validation sample (50% of dataset) | – | | – | | 0.826099 | | −0.426230 | | 1 |

Note: Hybrid 1: explanatory variables (without interactions) are selected by LASSO and the ALDVMM is re‐estimated with the selected variables. Hybrid 2: explanatory variables (variables and their two‐way interactions) are selected by LASSO and the ALDVMM is re‐estimated with the selected variables.

Abbreviations: ALDVMM, adjusted limited dependent variable mixture model; LASSO, least absolute shrinkage and selection operator.

a These three models were re‐estimated using a 50% estimation and 50% validation sample to be comparable with the hybrid models; thus, the statistics differ from those previously reported in Table 2.

The hybrid model with the inclusion of interactions (Hybrid 2), on the other hand, did not improve the overall performance. Although this model was superior in predicting the minimum utility, this came at the cost of less accurate predictions of the mean and maximum utility. Overall, these results suggest that not only does the direct LASSO 1 out‐perform all other models in prediction, but utilizing LASSO's variable selection feature also improved the ALDVMM's predictive performance.

DISCUSSION

There has been increased interest in machine learning techniques in the health economics literature, with the presumption that they will out‐perform standard econometric techniques (Konig et al., 2013; Kreif et al., 2015; Schilling et al., 2017). However, there has been a realization that, while econometric techniques can perform poorly in predicting complex and non‐linear relationships, they are easier to implement and are superior in explaining and interpreting those relationships. This has inspired the use of hybrid econometric‐machine learning techniques to predict and interpret complex relationships (Boelaert & Ollion, 2018; Böheim & Stöllinger, 2021; Kauffman et al., 2017; Malhotra, 2021; Yu et al., 2007; Zheng et al., 2017). This paper explored the feasibility of using machine learning techniques, and of combining them with econometric methods, as tools for mapping PROMs to MAUIs. We used machine learning techniques to map from the PROMIS‐GH10 to the EQ‐5D‐5L and compared their performance to the standard econometric techniques previously adopted in the literature. Both direct and indirect mapping approaches were applied, and utilities were estimated for six machine learning techniques (CART, random forests, bagged CART, NN, QRNN, and LASSO) and eight econometric techniques (linear regression, Tobit, GLM, median regression, CLAD, Betamix, ALDVMM, and GLOGIT). The direct LASSO 1 model performed the best across the range of econometric and machine learning techniques, followed by the ALDVMM, with MAEs of 0.095523 and 0.095826, respectively. These results are similar to those of a previous study mapping the PROMIS‐GH10 to the EQ‐5D‐3L by Thompson et al. (2017), which used a substantially larger sample (n = 13,955) and reported MAEs ranging between 0.069 and 0.144. The CART techniques (classification and regression trees) were the worst performing machine learning techniques.
Consistent with the literature, applying ensemble algorithms (random forests and bagging) to them is essential, as it increases prediction accuracy, although this improved performance comes at the cost of interpretability (Breiman, 1996, 2001; Friedman et al., 2001; Liaw & Wiener, 2002). The mapping literature has been dominated by efforts to select optimal model specifications, while less attention has been paid to variable selection. Our results suggest that the latter is equally important and should be considered in mapping exercises. Traditionally, variables have been selected using either a "cherry picking" approach or a "kitchen sink" approach, where the former is based on theory and the latter relies on implicit variable selection through the coefficient values (Chen et al., 2019). The advantage of using machine learning techniques for variable selection has been emphasized in the literature (Athey & Imbens, 2019). However, the value of using LASSO for variable selection continues to be debated, with recent studies comparing the performance of several techniques reporting mixed results (Vasquez et al., 2016; Zou, 2006). While LASSO out‐performed the other techniques in prediction, the calculation of standard errors and variance‐covariance matrices is not straightforward for LASSO, as for other machine learning techniques. Consequently, if a researcher is interested in more than the deterministic results of a cost‐effectiveness analysis (e.g., probabilistic sensitivity analysis), then machine learning techniques alone cannot be used to generate a mapping algorithm. Nevertheless, the variable selection feature of machine learning techniques can be adopted to enhance econometric techniques. As examined in this study, combining this feature with the best performing econometric technique resulted in a hybrid model with improved predictive performance on several criteria. The standard errors and variance‐covariance matrices for the hybrid model are easy to obtain.
However, to address overfitting bias and acquire reliable standard errors, the parametric models must be estimated on a different sample. Based on the performance of the hybrid models, we propose two algorithms to map from the PROMIS‐GH10 to the EQ‐5D‐5L in Appendix C: one based on the ALDVMM with explanatory variable set 3, and the other based on the Hybrid 1 model, in which the variables selected by LASSO are re‐estimated with the ALDVMM. The corresponding variance‐covariance matrices are presented in Appendix E. It should be noted that in our estimation the LASSO variable selection was implemented for several model specifications, with the Poisson performing best for direct mapping and the binomial for indirect mapping; the ALDVMM specification was not included in this comparison, as such an algorithm has yet to be developed. Our hybrid model therefore represents a pragmatic approach that combines the power of LASSO variable selection with the flexibility of the ALDVMM model specification. Indeed, this approach improved on the original ALDVMM (without variable selection) on almost every metric. Nevertheless, how to implement LASSO variable selection within the ALDVMM model, and whether this may further improve predictive performance, are interesting questions that should be explored in future research. In prediction with LASSO and the hybrid model, the inclusion of two‐way interactions led to worse predictive performance than their exclusion. This may be due to high correlations between predictors and the relatively small sample size (so the interactions cannot be precisely estimated). However, it should be noted that considering interactions in LASSO led to the best performance in predicting lower utilities (but very poor performance in predicting high utility values), suggesting that when health is poor, the interactions may play a more important role in predicting utilities.
We adopted a sample splitting approach to obtain reliable standard errors and to address the regularization bias associated with LASSO (Mullainathan & Spiess, 2017). However, there might be some concerns about the randomness associated with this method (i.e., different splits would yield different results). One possible way to resolve this issue is to perform multiple random splits and aggregate the information accordingly (Meinshausen et al., 2009); this should be explored in future studies. Given these limitations of machine learning techniques for variable selection in general, including LASSO, these techniques should be used and interpreted cautiously. Nonetheless, we suggest a hybrid model can be regarded as a supplementary tool in mapping exercises to guide variable selection and maximize predictive performance. While an advantage of machine learning techniques is their capability to learn and improve their performance (Breiman et al., 1984), model interpretability and explainability restrict their application to mapping. The "black box" nature of some machine learning techniques imposes a significant limitation on their adoption, as there is no algorithm that can be reported for other researchers to use. However, as shown in this paper, certain machine learning techniques, such as LASSO, can enhance predictions alongside standard econometric mapping techniques by improving variable selection. Moreover, the emerging field of explainable artificial intelligence (AI) has demonstrated practical success in providing insight into the "black box" (Holzinger et al., 2017). We believe research in explainable AI could facilitate the implementation of machine learning for patient‐reported data, and specifically for mapping. Moreover, some machine learning techniques can optimize a joint loss function comprising different items without collapsing them into an overall score.
Recent machine learning literature has attempted to address this by relaxing the hypothesis of the piecewise linear loss function through multi‐task learning (Brault et al., 2019; Dosovitskiy & Djolonga, 2019; Shoshan et al., 2019; Wang et al., 2019). Brault et al. (2019) proposed Infinite Task Learning, which jointly solves parametrized tasks for a continuum of parameters. Dosovitskiy and Djolonga (2019) proposed "you only train once" (YOTO), which trains one model across the entire space of loss weightings. However, the performance of these models on relatively small datasets like ours has not been sufficiently validated and should be explored in future studies. While the advantage of a machine learning technique is its capability to learn and improve its performance (Breiman et al., 1984), this capability is limited by the availability of data. Machine learning is data driven and usually requires a large dataset (optimally 75–100 observations per class) to work efficiently. In comparison, this study had 2015 observations, with some levels having fewer than 10 observations. This was more pronounced for lower utilities, as only 2% of the respondents in the full sample reported negative utilities. While this is a smaller sample than in some machine learning studies in other disciplines, it is substantial relative to other studies using machine learning in patient‐level health outcomes research (Konig et al., 2013; Kreif et al., 2015; Schilling et al., 2017). With patient‐level datasets likely to remain relatively small in most future studies, we believe our analysis offers important insights for future studies evaluating a range of methodological techniques for mapping. Our analysis was based on a single case study of mapping from the PROMIS‐GH10 to the EQ‐5D‐5L. As with all mapping studies, there is uncertainty around the results and the differences in the MAE, our primary measure of model performance.
In fact, for a different dataset another model could perform better. Thus, future research applying machine learning to other data sets, involving different instruments, sample sizes, and types of respondents, would be needed to further validate our results.
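To make the model-comparison exercise concrete, the sketch below fits a LASSO mapping model and scores it by cross-validated mean absolute error (MAE), the primary performance measure used here. It is an illustrative Python sketch on synthetic stand-in data, not the study's code or dataset.

```python
# Illustrative sketch only: LASSO mapping from PROM item responses to
# utilities, scored by cross-validated MAE. Data are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import KFold, cross_val_predict

rng = np.random.default_rng(0)
n = 500
X = rng.integers(1, 6, size=(n, 10)).astype(float)   # 10 items, levels 1-5
item_effects = -np.linspace(0.005, 0.03, 10)         # worse levels -> lower utility
y = np.clip(1.0 + X @ item_effects + rng.normal(0, 0.05, n), -0.3, 1.0)

lasso = LassoCV(cv=5)                                # penalty chosen by inner CV
pred = cross_val_predict(lasso, X, y, cv=KFold(5, shuffle=True, random_state=0))
mae = mean_absolute_error(y, np.clip(pred, None, 1.0))  # cap at full health
print(f"cross-validated MAE: {mae:.4f}")
```

In the study itself, out-of-sample performance measures of this kind were used to rank the six machine learning and eight econometric techniques.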

CONCLUSION

This study makes two significant contributions to the literature. It is the first study to simultaneously consider a broad range of econometric and machine learning techniques for mapping and to compare their performance in predicting utilities. Most mapping literature has relied exclusively on econometric techniques that are parametric in nature and require some tweaking (e.g., truncation, stepwise regression) that can introduce bias. A key advantage of machine learning techniques for mapping is that they remove the need to prespecify the functional form of the models. This is particularly advantageous when the PROM has a large number of items and levels. Our approach of combining econometric and machine learning techniques brings new insights to the mapping literature. Future research on mapping patient outcome data would further validate the predictive accuracy of machine learning techniques and hybrid models on different datasets. The second contribution of this study is the development of two mapping algorithms to map from the PROMIS‐GH10 to the EQ‐5D‐5L.
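The combination of econometric and machine learning techniques described above can be illustrated as a two-step workflow: LASSO selects the informative items, and a parametric model is then refit on the selected set. This is a hedged Python sketch on synthetic data; ordinary least squares stands in here for the study's second-stage ALDVMM purely to show the workflow.

```python
# Two-step hybrid sketch (synthetic data; OLS stands in for the ALDVMM).
import numpy as np
from sklearn.linear_model import LassoCV, LinearRegression

rng = np.random.default_rng(1)
X = rng.integers(1, 6, size=(400, 10)).astype(float)  # 10 synthetic PROM items
y = 1.0 - 0.03 * X[:, 0] - 0.04 * X[:, 5] + rng.normal(0, 0.05, 400)

# Step 1: LASSO shrinks coefficients of uninformative items to exactly zero.
selected = np.flatnonzero(LassoCV(cv=5).fit(X, y).coef_)
print("items kept:", selected)

# Step 2: refit the parametric mapping model on the selected items only.
final = LinearRegression().fit(X[:, selected], y)
utilities = np.clip(final.predict(X[:, selected]), None, 1.0)  # cap at full health
```

The design choice is that the machine learning step contributes only variable selection, so the final model keeps the interpretable parametric structure that economic evaluations expect.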

CONFLICT OF INTEREST

All the authors declare that they have no conflict of interest.

AUTHOR CONTRIBUTION

Mona Aghdaee, Bonny Parkinson, Kompal Sinha, Yuanyuan Gu, Rajan Sharma, Emma Olin and Henry Cutler contributed to the conception and design of this mapping study. Mona Aghdaee conducted the statistical analysis. All authors contributed to the interpretation of the data, drafting the article, revising it critically for intellectual content, and final approval of the version to be published. Supporting Information S1.
TABLE A1

Mapping onto preference‐based measures reporting Standards (MAPS) checklist

Section/topic | Item no. | Recommendation | Reported on page no.
Title and abstract
Title | 1 | Identify the report as a study mapping between outcome measures. State the source measure(s) and generic, preference‐based target measure(s) used in the study. | 1
Abstract | 2 | Provide a structured abstract including, as applicable: objectives; methods, including data sources and their key characteristics, outcome measures used and estimation and validation strategies; results, including indicators of model performance; conclusions; and implications of key findings. | 1
Introduction
Study rationale | 3 | Describe the rationale for the mapping study in the context of the broader evidence base. | 2–4
Study objective | 4 | Specify the research question with reference to the source and target measures used and the disease or population context of the study. | 3–4
Methods
Estimation sample | 5 | Describe how the estimation sample was identified, why it was selected, the methods of recruitment and data collection, and its location(s) or setting(s). | 4
External validation sample | 6 | If an external validation sample was used, the rationale for selection, the methods of recruitment and data collection, and its location(s) or setting(s) should be described. | NA
Source and target measures | 7 | Describe the source and target measures and the methods by which they were applied in the mapping study. | 4
Exploratory data analysis | 8 | Describe the methods used to assess the degree of conceptual overlap between the source and target measures. | 8
Missing data | 9 | State how much data were missing and how missing data were managed in the sample(s) used for the analyses. | NA
Modeling approaches | 10 | Describe and justify the statistical model(s) used to develop the mapping algorithm. | 5–8
Estimation of predicted scores or utilities | 11 | Describe how predicted scores or utilities are estimated for each model specification. | 5–8
Validation methods | 12 | Describe and justify the methods used to validate the mapping algorithm. | 5–8
Measures of model performance | 13 | State and justify the measure(s) of model performance that determine the choice of the preferred model(s) and describe how these measures were estimated and applied. | 4
Results
Final sample size(s) | 14 | State the size of the estimation sample and any validation sample(s) used in the analyses (including both number of individuals and number of observations). | 8
Descriptive information | 15 | Describe the characteristics of individuals in the sample(s) (or refer back to previous publications giving such information). Provide summary scores for source and target measures, and summarize results of analyses used to assess overlap between the source and target measures. | 8–9
Model selection | 16 | State which model(s) is (are) preferred and justify why this (these) model(s) was (were) chosen. | 9–15
Model coefficients | 17 | Provide all model coefficients and standard errors for the selected model(s). Provide clear guidance on how a user can calculate utility scores based on the outputs of the selected model(s). | Appendix C
Uncertainty | 18 | Report information that enables users to estimate standard errors around mean utility predictions and individual‐level variability. | Appendices C and E
Model performance and face validity | 19 | Present results of model performance, such as measures of prediction accuracy and fit statistics for the selected model(s), in a table or in the text. Provide an assessment of face validity of the selected model(s). | Tables 2 and 3
Discussion
Comparisons with previous studies | 20 | Report details of previously published studies developing mapping algorithms between the same source and target measures and describe differences between the algorithms, in terms of model performance, predictions and coefficients, if applicable. | 15–16
Study limitations | 21 | Outline the potential limitations of the mapping algorithm. | 16–17
Scope of applications | 22 | Outline the clinical and research settings in which the mapping algorithm could be used. | 15–17
Other
Additional information | 23 | Describe the source(s) of funding and non‐monetary support for the study, and the role of the funder(s) in its design, conduct and report. Report any conflicts of interest surrounding the roles of authors and funders. | 17

Abbreviation: NA, not applicable.

TABLE A2

Mapping to estimate health‐state utility from non‐preference‐based outcome measures: An ISPOR good practices for outcomes research task force report

Recommendation | Reported
1. Describe relevant differences between data sets that are candidates for mapping estimation. | Only one dataset was used, which was collected for the purpose of this mapping study.
2. Give full details of the selected data set. Describe how the study was run and patients were sampled. Provide baseline and follow‐up characteristics including the distribution of patients' disease severity. Missingness in the longitudinal pattern of responses should be described. | How the study was conducted and patients were sampled is provided in Section 2 (Data); patient characteristics are provided in Table 1. Data were cross‐sectional with all questions mandatory, except for the Charlson comorbidity index (CCI), which was not used in the mapping study. Hence there were no missing data.
3. Plot the distribution of the utility data. | Distribution of the observed versus predicted utilities presented in Figures 1–3.
4. Justify the type of model(s) selected with reference to the characteristics of the target utility distribution and the proposed use of the mapping function. | Justification of models selected presented in Sections 3.3 and 3.4.
5. Compare the dimensions of health covered by the target utility instrument and those covered by the explanatory clinical measure(s). | Description of instrument dimensions provided in Section 2. Spearman's rank correlation coefficients presented in Section 4.1.
6. Describe the approach to determining the final model. Include tests conducted and judgments made. | Described in Section 3.2.
7. Summary measures of fit are of limited value for the total sample. Provide information on fit conditional on disease severity as measured by the clinical outcome measure(s). A plot of mean predicted versus mean observed utility conditional on the clinical variable(s) should be included. | A range of summary measures are presented in Table 2. Distribution of the observed versus predicted utilities presented in Figures 1–3.
8. Coefficient values, error term(s) distribution(s), variances, and covariances are required. | Presented in Appendices C and E.
9. Provide an example predicted value for some sets of covariates. Consider providing a program that calculates predictions for user‐defined inputs. | Examples of machine learning presented in Appendix B. Example of how to estimate a predicted utility value presented in Appendix C.
10. Parameter uncertainty in a mapping regression should be reflected using standard methods for Probabilistic Sensitivity Analysis (PSA). Assessment of model suitability for use in cost‐effectiveness analysis should also consider the distribution of utility values for PSA, with particular focus on whether these lie outside the feasible utility range for the preference‐based measure (PBM). | Table 2 presents the proportion of observations truncated at one.
11. When imputing data from a mapping function, individual‐level variability should be incorporated using simulation methods and information about the distribution of the error term(s). These simulated data can be compared with the raw observed data, including an assessment of the range of values compared with the feasible range for the PBM. | Not applicable – no imputation conducted.
12. Re‐estimation of mapping results in a separate data set or other forms of validation are not routinely required. | Due to the lack of data on the five‐level EQ‐5D (EQ‐5D‐5L), no external dataset was available, and only internal cross‐validation was applied in this study (Section 3.2).

Note: Summary of reporting of mapping studies recommendations.

TABLE C1

Coefficients and standard errors from the best‐performing econometric model (adjusted limited dependent variable mixture model [ALDVMM])

Predictor variables | Component one coefficients | Standard errors | Component two coefficients | Standard errors
PROMIS‐GH10 Q1
Level‐1 | −0.385959 | 0.0207294 | 0.0925135 | 0.0794249
Level‐2 | −0.0009395 | 0.0111615 | 0.018756 | 0.0642693
Level‐3 | 0.0027596 | 0.0095323 | 0.0402084 | 0.0565958
Level‐4 | −0.00748 | 0.0082035 | 0.0325671 | 0.0520424
PROMIS‐GH10 Q2
Level‐1 | −0.0585196 | 0.0139972 | −0.0794948 | 0.0714324
Level‐2 | −0.0070622 | 0.0093945 | −0.0291088 | 0.0569722
Level‐3 | −0.0066033 | 0.0082136 | −0.0320562 | 0.0509567
Level‐4 | −0.0058776 | 0.0072301 | −0.0638011 | 0.0457145
PROMIS‐GH10 Q3
Level‐1 | −0.0493412 | 0.0190513 | −0.0260497 | 0.0764026
Level‐2 | −0.0097279 | 0.0104587 | 0.002096 | 0.0630417
Level‐3 | 0.0061863 | 0.0095075 | 0.0626543 | 0.0583958
Level‐4 | 0.009719 | 0.0082286 | 0.1288012 | 0.0550993
PROMIS‐GH10 Q4
Level‐1 | −0.0192356 | 0.0109768 | −0.1073274 | 0.0579867
Level‐2 | −0.0156022 | 0.0076756 | 0.0167262 | 0.0484853
Level‐3 | −0.0012259 | 0.0065005 | 0.0983137 | 0.0424271
Level‐4 | −0.0038548 | 0.0056822 | 0.0472543 | 0.0368971
PROMIS‐GH10 Q5
Level‐1 | −0.0055637 | 0.0098399 | −0.0332379 | 0.059465
Level‐2 | −0.0088215 | 0.0081152 | 0.0007317 | 0.0532768
Level‐3 | −0.0104855 | 0.0074152 | 0.0262982 | 0.0486364
Level‐4 | −0.0085666 | 0.0066697 | 0.0076209 | 0.0428807
PROMIS‐GH10 Q6
Level‐1 | 0.5729949 | 0.051003 | −0.4978277 | 0.0734452
Level‐2 | −0.0321034 | 0.0131572 | −0.2048257 | 0.0383186
Level‐3 | −0.0295483 | 0.0063002 | −0.0573564 | 0.029585
Level‐4 | −0.0185259 | 0.0045397 | −0.0534656 | 0.0284769
PROMIS‐GH10 Q7
Level‐1 | −0.0359491 | 0.0058377 | −0.0286145 | 0.048754
Level‐2 | −0.0489133 | 0.006043 | −0.1483418 | 0.0453403
Level‐3 | −0.0557849 | 0.0062876 | −0.2193954 | 0.0464523
Level‐4 | −0.0648232 | 0.0073438 | −0.2431705 | 0.0518035
Level‐5 | −0.0737154 | 0.0080798 | −0.2560148 | 0.0444581
Level‐6 | −0.0957117 | 0.0088171 | −0.2982205 | 0.046128
Level‐7 | −0.1336154 | 0.0096407 | −0.3547439 | 0.04672
Level‐8 | −0.583629 | 0.0197676 | −0.4905812 | 0.0504853
Level‐9 | −0.1927814 | 0.0229887 | −0.7341895 | 0.0704952
Level‐10 | −1.099365 | 0.0245806 | −0.4552999 | 0.0904497
PROMIS‐GH10 Q8
Level‐1 | −0.0309681 | 0.0143363 | −0.13651 | 0.075945
Level‐2 | −0.0336167 | 0.0089216 | −0.2188279 | 0.0544924
Level‐3 | −0.0107356 | 0.0065049 | −0.169407 | 0.0489464
Level‐4 | −0.0088674 | 0.0061089 | −0.1161397 | 0.0479181
PROMIS‐GH10 Q9
Level‐1 | −0.013529 | 0.0160572 | −0.0730294 | 0.0648042
Level‐2 | 0.0015722 | 0.0087684 | 0.01400000 | 0.0536675
Level‐3 | 0.0013122 | 0.0072035 | −0.0279388 | 0.0471441
Level‐4 | 0.0017452 | 0.0063561 | 0.0180966 | 0.0431105
PROMIS‐GH10 Q10
Level‐1 | −0.0165344 | 0.0130433 | −0.2807613 | 0.0587058
Level‐2 | −0.0495842 | 0.0073591 | −0.2297149 | 0.0439222
Level‐3 | −0.0330347 | 0.0059642 | −0.0321068 | 0.0374949
Level‐4 | −0.0139751 | 0.0054585 | 0.0172828 | 0.0368554
Age | 0.0003956 | 0.0005682 | 0.0043447 | 0.0033483
Age squared | −9.99E‐06 | 5.71E‐06 | −0.0000673 | 0.0000338
Female | 0.0009506 | 0.0035428 | −0.0446998 | 0.0201271
Constant | 0.9745076 | 0.0168771 | 0.97559610 | 0.0993999
Probability – Component 1
Constant | 0.1232367 | 0.0777262
/lns_1 | −3.240571 | 0.0435271
/lns_2 | −1.403876 | 0.0336056
sigma1 | 0.0391416 | 0.0017037
sigma2 | 0.2456429 | 0.008255

Note: PROMIS‐GH10 Qn = nth question of PROMIS‐GH10. The algorithm is based on ALDVMM set (3), which included the PROMIS‐GH10 questions as items, plus age, age squared and sex (female = 1) as explanatory variables. For PROMIS‐GH10 Q1, Q2, Q3, Q4, Q5, Q6, Q8, Q9 and Q10 the reference level is level 5; for PROMIS‐GH10 Q7 the reference level is level 0.
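As a rough illustration of how the mixture components above combine into a single predicted utility, the sketch below weights the two component linear indices by the inverse logit of the component‐1 probability constant and caps the result at full health. It deliberately omits the ALDVMM's handling of the limited dependent structure (the mass at 1 and the lower truncation), so it only approximates the model's mean prediction away from the bounds; Appendix C describes the full procedure. The example index values are hypothetical.

```python
# Hedged sketch: mixture-weighted prediction from two ALDVMM component
# linear indices. Ignores the limited-dependent adjustments at the bounds.
import math

def aldvmm_mean(xb1, xb2, p_const=0.1232367, ceiling=1.0):
    """xb1, xb2: linear index (constant + level dummies + age terms) for
    components one and two; p_const: logit constant for component-1
    membership, taken from Table C1."""
    p1 = 1.0 / (1.0 + math.exp(-p_const))        # inverse logit of the constant
    return min(p1 * xb1 + (1.0 - p1) * xb2, ceiling)

# e.g., a respondent whose dummies and constants yield indices 0.92 and 0.55:
print(round(aldvmm_mean(0.92, 0.55), 4))
```

The component‐1 probability here is constant across respondents because the membership equation in Table C1 contains only a constant term.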

TABLE C2

Coefficients and standard errors from the Hybrid 1 model

Predictor variables | Component one coefficients | Standard errors | Component two coefficients | Standard errors
PROMIS‐GH10 Q1
Level‐1 | −0.3433060 | 0.0148387 | 0.0514095 | 0.0570408
Level‐2 | −0.0293574 | 0.0090414 | 0.0184637 | 0.0470782
Level‐3 | −0.0093709 | 0.0075675 | 0.0806698 | 0.0432715
Level‐4 | −0.0137952 | 0.0073393 | 0.1219705 | 0.0431204
PROMIS‐GH10 Q4
Level‐1 | −0.0216585 | 0.0121445 | −0.0596429 | 0.0550486
Level‐2 | −0.0216042 | 0.0078642 | 0.0436002 | 0.0464942
Level‐3 | −0.0017283 | 0.0066908 | 0.1149595 | 0.0405404
Level‐4 | −0.0067162 | 0.0058649 | 0.0598039 | 0.034641
PROMIS‐GH10 Q5
Level‐1 | −0.0112022 | 0.0100856 | −1.18E‐02 | 5.52E‐02
Level‐2 | −0.0150583 | 0.0078866 | 0.0234542 | 0.0493677
Level‐3 | −0.0168738 | 0.0072386 | 0.0576217 | 0.0450475
Level‐4 | −0.0127415 | 0.006478 | 0.0196911 | 0.0400461
PROMIS‐GH10 Q6
Level‐1 | −0.8875440 | 0.0476422 | −0.2675420 | 0.0653272
Level‐2 | −0.0503005 | 0.0135963 | −0.2063312 | 0.0361192
Level‐3 | −0.032574 | 0.0069206 | −0.0737328 | 0.0281568
Level‐4 | −0.0193999 | 0.004604 | −0.0544085 | 0.0275175
PROMIS‐GH10 Q7
Level‐1 | −0.0373753 | 0.0060229 | −0.0454425 | 0.0457808
Level‐2 | −0.050612 | 0.0061428 | −0.1630828 | 0.0425605
Level‐3 | −0.0583187 | 0.0063182 | −0.2345283 | 0.0439714
Level‐4 | −0.0679572 | 0.0074092 | −0.2694051 | 0.0491968
Level‐5 | −0.0763415 | 0.0080467 | −0.2772628 | 0.0416786
Level‐6 | −0.0977318 | 0.0086558 | −0.3293773 | 0.0435022
Level‐7 | −0.1341808 | 0.0097872 | −0.3876107 | 0.0447105
Level‐8 | −0.1393537 | 0.0265474 | −0.6160806 | 0.0480539
Level‐9 | −0.2173441 | 0.0232659 | −0.7665818 | 0.0676771
Level‐10 | −0.0343796 | 0.0450647 | −0.6329282 | 0.0739894
PROMIS‐GH10 Q9
Level‐1 | −0.0179659 | 0.0155147 | −0.0987504 | 0.0611147
Level‐2 | −0.0016861 | 0.0093691 | −0.0312816 | 0.050512
Level‐3 | 0.0066662 | 0.0072761 | −0.0893566 | 0.0449948
Level‐4 | 0.0039562 | 0.0062173 | −0.0171913 | 0.0399247
PROMIS‐GH10 Q10
Level‐1 | −0.0350141 | 0.01417 | −0.4081824 | 0.0529913
Level‐2 | −0.0561646 | 0.0071169 | −0.3029859 | 0.0403542
Level‐3 | −0.0360541 | 0.0060193 | −0.0912567 | 0.0349469
Level‐4 | −0.0150028 | 0.0055922 | −0.0184979 | 0.0343078
Age | 0.0004218 | 0.0005884 | 0.003671 | 0.0032549
Age squared | −0.00001 | 5.89E‐06 | −0.0000534 | 0.0000328
Female | 0.0001425 | 0.0036659 | −0.0395839 | 0.0194046
Constant | 0.9989450 | 0.0167456 | 0.98656421 | 0.0882295
Probability – Component 1
Constant | 0.0898711 | 0.0818972
/lns_1 | −3.217258 | 0.0491279
/lns_2 | −1.423268 | 0.0335654
sigma1 | 0.0400648 | 0.0019683
sigma2 | 0.2409255 | 0.0080868

Note: PROMIS‐GH10 Qn = nth question of PROMIS‐GH10. The algorithm is based on ALDVMM set (3), which included the PROMIS‐GH10 questions as items, plus age, age squared and sex (female = 1) as explanatory variables. For PROMIS‐GH10 Q1, Q4, Q5, Q6, Q9 and Q10 the reference level is level 5; for PROMIS‐GH10 Q7 the reference level is level 0.

TABLE D1

Goodness of fit for indirect approaches

Approach | Mobility | Self‐care | Usual activity | Pain and discomfort | Anxiety and depression
Indirect mapping approaches
Glogit | 66.35% | 75.96% | 71.83% | 85.30% | 54.87%
CART (classification trees) | 61.70% | 72.15% | 68.54% | 82.15% | 53.52%
Random forests | 66.67% | 75.92% | 72.53% | 85.92% | 55.87%
Bagging | 63.91% | 73.49% | 72.49% | 83.35% | 55.87%
NN | 68.08% | 76.53% | 73.24% | 86.35% | 57.75%
LASSO 1 | 69.48% | 76.85% | 73.24% | 86.38% | 58.22%

Note: The table presents the percentage of correctly predicted responses for each dimension of EQ‐5D‐5L. LASSO 1: the LASSO technique is used for prediction, with explanatory variables entered as main effects only (no interactions).

Abbreviations: GLOGIT, generalized logistic regression; LASSO, least absolute shrinkage and selection operator; NN, neural networks.
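The indirect (response-mapping) approach scored in Table D1 can be sketched as follows: one classifier per EQ-5D-5L dimension predicts the response level, and the predicted five-dimension profile is then valued. Everything below — data, classifier choice, and the additive decrements — is a synthetic stand-in, not the study's specification.

```python
# Illustrative response-mapping sketch on synthetic data: predict each
# EQ-5D-5L dimension's level, then value the predicted profile.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
X = rng.integers(1, 6, size=(300, 10)).astype(float)   # synthetic PROM items
severity = X.mean(axis=1)                              # drives EQ-5D responses
dims = ["mobility", "self-care", "usual activities",
        "pain/discomfort", "anxiety/depression"]
levels = {d: np.clip(np.round(severity - 2 + rng.normal(0, 1, 300)),
                     1, 5).astype(int) for d in dims}

# One classifier per dimension predicts the response level (1-5).
profile = {}
for d in dims:
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, levels[d])
    profile[d] = int(clf.predict(X[:1])[0])            # level for one respondent

# Value the predicted profile with a toy additive value set (decrements made up).
utility = 1.0 - 0.07 * sum(profile[d] - 1 for d in dims)
print(profile, round(utility, 3))
```

In practice the predicted profile would be valued with a published country value set rather than the uniform decrement used here.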
