
Multiple imputation of multiple multi-item scales when a full imputation model is infeasible.

Catrin O Plumpton, Tim Morris, Dyfrig A Hughes, Ian R White.

Abstract

BACKGROUND: Missing data in a large scale survey presents major challenges. We focus on performing multiple imputation by chained equations when data contain multiple incomplete multi-item scales. Recent authors have proposed imputing such data at the level of the individual item, but this can lead to infeasibly large imputation models.
METHODS: We use data gathered from a large multinational survey, where analysis uses separate logistic regression models in each of nine country-specific data sets. In these data, applying multiple imputation by chained equations to the individual scale items is computationally infeasible. We propose an adaptation of multiple imputation by chained equations which imputes the individual scale items but reduces the number of variables in the imputation models by replacing most scale items with scale summary scores. We evaluate the feasibility of the proposed approach and compare it with a complete case analysis. We perform a simulation study to compare the proposed method with alternative approaches: we do this in a simplified setting to allow comparison with the full imputation model.
RESULTS: For the case study, the proposed approach reduces the size of the prediction models from 134 predictors to a maximum of 72 and makes multiple imputation by chained equations computationally feasible. Distributions of imputed data are seen to be consistent with observed data. Results from the regression analysis with multiple imputation are similar to, but more precise than, results from complete case analysis; for the same regression models a 39 % reduction in the standard error is observed. The simulation shows that our proposed method performs comparably with the alternatives.
CONCLUSIONS: By substantially reducing imputation model sizes, our adaptation makes multiple imputation feasible for large scale survey data with multiple multi-item scales. For the data considered, analysis of the multiply imputed data shows greater power and efficiency than complete case analysis. The adaptation of multiple imputation makes better use of available data and can yield substantively different results from simpler techniques.


Year:  2016        PMID: 26809812      PMCID: PMC4727289          DOI: 10.1186/s13104-016-1853-5

Source DB:  PubMed          Journal:  BMC Res Notes        ISSN: 1756-0500


Background

Missing data is ubiquitous in research, and survey data is particularly prone to incomplete responses. One problem arising from missing data is a loss of precision and statistical power. However, poor handling of the missing data during analysis can lead to biased results. When handling missing data, assumptions must be made about the mechanism of missingness; no analysis with missing data is free of such assumptions. Data may be missing completely at random (MCAR), where the probability of missing data is not dependent on either the observed or unobserved data. When data is missing at random (MAR), the probability of the data being missing does not depend upon the unobserved data, but missingness may be related to the observed data. Alternatively, data may be missing not at random (MNAR), whereby missingness is dependent upon the values of the unobserved data, conditional on the observed data [1-3].

Roth, in 1994, stated that despite its importance, conspicuously little research on missing data analysis appeared within the social sciences literature [4]. It is acknowledged that a gap still exists between techniques recommended by methodological literature and those employed in practice; traditional ad-hoc techniques such as deletion and single imputation techniques are still applied routinely [3, 5, 6].

Complete case (CC) analysis is commonly used, and is efficient and valid under MAR, provided missing data occurs only in the outcome. Once missing data occurs in covariates, or in parts of a composite outcome, complete case analyses are inefficient. Also, when the MCAR assumption does not hold, the data is no longer representative of the target population, compromising external generalisability [7]. Modern missing data methodologies include maximum-likelihood estimation (MLE) methods such as expectation–maximisation (EM) and multiple imputation (MI), both recommended for data which is MAR [3].
MI has been shown to be robust under departures from normality, in cases of low sample size, and when the proportion of missing data is high [2]. With complete outcome variables, MI is typically less computationally expensive than MLE, and MLE tends to be problem-specific, with a different model being required for each analysis [8]. Whilst many theoretical works suggest MI to be an appropriate method, it has only recently been widely applied in practice [9]. Reviews on handling missing data across different fields indicate that it is relatively rare for missing data, and how it is handled, to be reported explicitly: in cost-effectiveness analysis 22 % of studies did not explicitly report missing data [10]; in education the corresponding figure is 31 % [11]; in cohort studies 16 % of studies did not report how much data was missing whilst 14 % of studies did not report how missingness was handled [12]; in epidemiology 46 % of studies were unclear about the type of missing data [13]; and in applied education and psychology 66 % of studies where the presence of missing data could be inferred did not mention missing data explicitly [6]. A review of randomised controlled trials identified 77 articles from the latter half of 2013, of which 73 reported missing data. Of these articles, 45 % performed complete case analysis, 27 % performed simple imputation (linear interpolation, worst case imputation or last observation carried forward) and only 8 % used multiple imputation [14]. Whilst MI and MLE are gaining popularity, ad-hoc techniques still appear in the applied literature, with complete case analysis remaining the most popular approach.
Large scale survey data presents a number of challenges to imputation: a high number of variables; complexity of the data set; categorical (non-Normal) variables; categories with low observed frequency (sparsity in responses); questions which are conditional upon previous responses; and multiple multi-item scales, which are summed (either directly or weighted) during analysis. Such challenges discourage the use of sophisticated imputation techniques. As missing data in a single item of a multi-item scale leads to a missing total, the rate of missing data in scale totals can be very high. Imputing at the level of the scale total whilst ignoring individual items may therefore introduce unnecessary bias. The widely-used EQ-5D-3L is one such scale, consisting of 5 items. A recent study considered imputing at item level rather than imputing scale totals [15]. When the pattern of missingness tended towards all items being missing for a respondent, little difference was seen between methods. When the pattern of missingness tended towards individual items being missing, for sample sizes of n > 100, imputing at item level was shown to be more accurate. Another study proposed methods for handling multi-item scales at the item score level [16], and further emphasised how mean imputation or single imputation leads to bias and underestimation of standard errors. The study concludes that missing data should be handled by applying multiple imputation to the individual items. However, the size and complexity of large survey data can cause complete MI prediction models to fail to converge when the model is specified at item level, rendering the ideal method computationally infeasible. The present study aims to develop an imputation method which addresses the challenges presented by large scale survey data, reducing the size of the prediction model whilst allowing for item level imputation.
A simulation study presents a comparison of our proposed method with alternative imputation approaches, and the proposed method is illustrated further using data from a large multinational survey as a case study.

Methods

Multiple imputation by chained equations

Multiple imputation for a single incomplete variable works by constructing an imputation model relating the incomplete variable to other variables and drawing from the posterior predictive distribution of the missing data conditional on the observed data [1]. The approach allows for uncertainty in the missing data values by introducing variability in the imputed items. To handle multiple incomplete variables we use multiple imputation by chained equations (MICE), which allows different variable types (continuous, nominal, ordered categorical) to be handled within the same data set [1]. In MICE, variables are first ordered by level of missingness. Missing values are initially replaced for each variable, for example by drawing at random from the observed values of that variable. Each variable is then imputed in turn, using the observed and currently imputed values of all other variables in the imputation model. To allow the process to stabilise, this sweep over the variables (known as a cycle) is repeated, typically 10 times, to produce one imputed data set. The whole procedure is repeated until the desired number of imputed data sets is reached [1, 17].
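The cycling described above can be illustrated with a minimal, self-contained Python sketch. It imputes every incomplete variable with a simple linear model plus a residual draw; real implementations (such as Stata's -ice- or R's mice) choose an appropriate regression type per variable, so this is an illustration of the algorithm, not the authors' code.

```python
import numpy as np

def mice_once(X, n_cycles=10, seed=None):
    """Produce one imputed copy of X (2-D float array, np.nan = missing).

    Illustrative chained equations: each incomplete variable is imputed
    by linear regression on all other variables, adding random noise so
    that imputed values carry uncertainty rather than sitting exactly
    on the regression line.
    """
    rng = np.random.default_rng(seed)
    X = X.copy()
    miss = np.isnan(X)
    # order incomplete variables by increasing amount of missing data
    order = [j for j in np.argsort(miss.sum(axis=0)) if miss[:, j].any()]
    # initial fill: random draws from each variable's observed values
    for j in order:
        X[miss[:, j], j] = rng.choice(X[~miss[:, j], j], miss[:, j].sum())
    for _ in range(n_cycles):           # cycles allow the draws to stabilise
        for j in order:
            m = miss[:, j]
            A = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
            beta, *_ = np.linalg.lstsq(A[~m], X[~m, j], rcond=None)
            resid = X[~m, j] - A[~m] @ beta
            # fitted value plus noise, mimicking a posterior predictive draw
            X[m, j] = A[m] @ beta + rng.normal(0.0, resid.std(), m.sum())
    return X
```

Repeating the whole procedure M times (with different seeds) yields the M imputed data sets that are later analysed and pooled.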

Imputation using subscale totals

Often, survey data contains responses to multiple multi-item scales. Imputing every item individually may lead to an unwieldy imputation model, which in extreme cases may fail to converge. In order to reduce the size of the imputation models yet retain item level imputation (and not discard data), we propose to impute responses to individual scale items, using the scale totals within prediction equations. In addition, when imputing responses to an item which forms part of a multi-item scale, responses to other items from the same scale should also be included. As a simple example, suppose we have primary outcome measure p, n demographic variables (d1, …, dn), a multi-item scale S made up of 7 items (s1, …, s7), and a multi-item scale T made up of 17 items (t1, …, t17), and write s_tot and t_tot for the summed scale scores of S and T. The forms of the imputation models are: d1 is imputed using the observed and current imputed values of p, d2, …, dn, s_tot and t_tot; s1 is imputed using the observed and current imputed values of p, d1, …, dn, s2, …, s7 and t_tot; s2 is imputed using the observed and current imputed values of p, d1, …, dn, s1, s3, …, s7 and t_tot; t1 is imputed using the observed and current imputed values of p, d1, …, dn, s_tot and t2, …, t17; t2 is imputed using the observed and current imputed values of p, d1, …, dn, s_tot, t1 and t3, …, t17; with similar imputation models for d2, …, dn, s3, …, s7 and t3, …, t17. The proposed approach condenses information from other scales to reduce the number of predictors in each equation. Subscale totals are recalculated after each cycle of imputation.
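The predictor set for each equation under this adaptation can be written down mechanically. The following Python fragment is illustrative only (the actual analysis used Stata's -ice- routine, and the variable names here are hypothetical): items from the target's own scale enter individually, while every other scale is condensed to its summed total.

```python
def predictors(target, outcome, demographics, scales):
    """Right-hand side of the imputation equation for one variable.

    scales maps a scale name to the list of its item names.
    """
    rhs = [outcome] + [d for d in demographics if d != target]
    for name, items in scales.items():
        if target in items:
            rhs += [i for i in items if i != target]   # own scale: other items
        else:
            rhs.append(name + "_total")                # other scales: total only
    return rhs

demographics = ["d1", "d2", "d3"]                      # hypothetical names
scales = {"S": ["s%d" % k for k in range(1, 8)],       # 7-item scale S
          "T": ["t%d" % k for k in range(1, 18)]}      # 17-item scale T

print(predictors("s1", "p", demographics, scales))
# equation for s1: p, the demographics, s2-s7, and the total of T
```

The "_total" variables would be recalculated passively after each cycle, corresponding to the recalculation of subscale totals described above.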

Categorical variables

Survey data typically contains categorical variables, which may be either nominal or ordered. Ordered categorical variables, often in the form of Likert scales, can be imputed using ordinal logistic regression, whilst nominal categorical variables may be imputed using multinomial logistic regression. Sparsity may cause non-convergence errors during multinomial logistic regression, a recognised problem [1, 18]. On occasion, this may require response categories to be collapsed prior to imputation.
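The collapsing step can be sketched as follows; this assumes a simple frequency threshold, which the text does not specify, so the cut-off here is purely illustrative.

```python
from collections import Counter

def collapse_sparse(responses, min_count=10, merged_label="collapsed"):
    """Merge categories observed fewer than min_count times into a single
    level, reducing the sparsity that can stop multinomial logistic
    imputation models from converging. None marks a missing response
    and is left untouched for the imputation step."""
    counts = Counter(r for r in responses if r is not None)
    keep = {c for c, n in counts.items() if n >= min_count}
    return [r if r is None or r in keep else merged_label for r in responses]
```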

Conditional imputation

Survey design may include conditional questions: for example, a question on experience of a specific drug is relevant only to someone who has taken it. Within the statistical package Stata, the -ice- routine offers options for conditional imputation [19]: responses to the second part of a question are imputed only for respondents giving a particular answer to the first part.
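The logic can be sketched in Python as a simplified stand-in for -ice-'s conditional option, using a hot-deck draw from eligible respondents rather than a regression model; the variable names are hypothetical.

```python
import numpy as np

def impute_conditional(taken, experience, seed=None):
    """Impute 'experience' of a drug only where taken == 1.

    taken:      0/1 array, complete (or already imputed).
    experience: float array, np.nan where missing; the question is
                applicable only when taken == 1.
    Missing eligible responses are drawn at random from observed
    eligible responses; ineligible respondents stay not-applicable.
    """
    rng = np.random.default_rng(seed)
    experience = np.asarray(experience, dtype=float).copy()
    eligible = np.asarray(taken) == 1
    need = eligible & np.isnan(experience)
    donors = experience[eligible & ~np.isnan(experience)]
    experience[need] = rng.choice(donors, need.sum())
    experience[~eligible] = np.nan      # not applicable: never imputed
    return experience
```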

Analysing multiply imputed data

During analysis, each of the M imputed data sets is analysed individually. Imputation-specific coefficients are then pooled using Rubin’s rules to produce a single result [20]. Rubin’s rules incorporate both within-imputation variance (accounting for the uncertainty that would remain if the data were complete) and between-imputation variance (accounting for uncertainty about the missing data) [1].
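For a scalar coefficient, the pooling can be written in a few lines; this sketch combines the two variance components as described above and omits the small-sample degrees-of-freedom adjustment used for confidence intervals.

```python
import numpy as np

def pool(estimates, variances):
    """Pool M imputation-specific estimates by Rubin's rules.

    estimates: one coefficient per imputed data set.
    variances: the corresponding squared standard errors.
    Returns the pooled estimate and its standard error.
    """
    q = np.asarray(estimates, dtype=float)
    u = np.asarray(variances, dtype=float)
    M = len(q)
    q_bar = q.mean()                 # pooled point estimate
    w = u.mean()                     # within-imputation variance
    b = q.var(ddof=1)                # between-imputation variance
    total = w + (1 + 1 / M) * b      # Rubin's total variance
    return q_bar, float(np.sqrt(total))
```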

Case study

Our data comes from an online survey designed to investigate associations between putative predictors of adherence to antihypertensive medication and patients’ self-reported adherence. Detailed methods of the survey and the main findings are published elsewhere [21]. Briefly, cross-sectional survey data from 2595 respondents from nine European countries (Poland, Wales, England, Hungary, Austria, Germany, Greece, the Netherlands and Belgium) was collected using the online tool SurveyMonkey®. The target population was adult hypertensive patients who had been prescribed antihypertensive medication. The survey comprised 13 validated measures from health psychology and behavioural economics, alongside demographic questions, resulting in a total of 135 questions. Within the health psychology sections, responses to several questions were summed to form subscale totals, as per validated approaches to analysing these measures. There are a total of 14 such subscales within the survey. Due to the length of the survey, a level of missing data was to be expected, with respondents dropping out part way through or skipping one or more questions. We ensured no missingness in the primary outcome measure, the Morisky measure of adherence [22], by enabling ‘forced answer’ settings within SurveyMonkey. Figure 1 presents the percentage of complete responses by question, in the order the questions were asked. A dip is seen at the open ended time preference measure, which may be perceived as cognitively challenging [23]. The sensitivity of information requested on income explains the final dip in the plot. Missing data was assumed to be MAR. We consider the impact of possible departures from MAR in the discussion.
Fig. 1

Response rate by question and country. Questions not applicable due to differences in healthcare systems appear as breaks in the plot. MARS Medication Adherence Rating Scale, LOTR Life Orientation Test Revised, BMQ Beliefs about Medicines Questionnaire, TPB Theory of Planned Behaviour, EUROPEP European Task Force on Patient Evaluations of General Practice, BRIGHT Building Research Initiative Group Illness Management and Adherence in Transplantation, BIPQ Brief Illness Perception Questionnaire. Reprinted from Value in Health, 18(2), Morrison VL, Holmes EAF, Parveen S, Plumpton CO, Clyne W, De Geest S, Dobbels F, Vrijens B, Kardas P, Hughes DA, Predictors of Self-Reported Adherence to Antihypertensive Medicines: A Multinational, Cross-Sectional Survey, 206–216, Copyright (2015), with permission from Elsevier [21]

We chose to impute each country-specific data set separately, as associations between variables were expected to differ between countries. A complete MI prediction equation results in 134 predictors for each incomplete variable. Some categorical variables had categories with low observed frequency, which presented additional challenges; these were handled by collapsing response categories. Taking education as an example, in Greece 52.3 % reported primary education as their highest educational attainment, 29.0 % secondary education and 18.7 % higher education. For England the corresponding figures were 0.3, 33.7 and 65.3 % respectively. We collapsed the lower two categories, conducting the final analysis on ‘up to secondary education’ and ‘higher education’. Collapsing of categories was applied to all data sets, and was maintained for analysis. Within the income section of the survey, questions had an ‘opt out’ response if respondents were unwilling to provide the information. Additional file 1: Table S1 summarises these responses across the nine countries.
Questions in this section took an ordered categorical format, which we were keen to preserve (rather than imputing them as nominal variables). This was achieved by generating two separate variables for each income item: an initial binary variable reflected whether the respondent was willing to provide a response, and an ordinal variable then reflected the response, conditional on the respondent being willing to provide the information.

The number of imputations, M, was chosen based upon analysis of the Polish data set, which was received 3 months prior to data from the other countries. For this data set 26 % of response values were missing, so the number of imputations was set at M = 25, closely reflecting the suggestion of one imputation per percentage point of missing data [1]. For subsequent country-level data sets the amount of missing data was in fact lower, at 5–22 %, but M = 25 was maintained for consistency. Initially a full imputation model was attempted for these data, but it failed to converge for all imputations. Applying our proposed approach, model size depended on the number of items within each subscale (range 56–72). Each prediction equation included the demographic variables and the primary outcome measure. MI was conducted using the -ice- routine in Stata 10 [19, 24, 25]. Categorical variables were handled using -mlogit- and -ologit- [25]. Subscale totals were calculated following each cycle of the imputation using the passive option of the -ice- routine. The final imputation methods and models used to impute different parts of the survey are summarised in Table 1, with an extract of Stata code provided in Additional file 2: Appendix S1.
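The two-variable construction for the income items can be sketched as follows; the opt-out label and response codes are hypothetical.

```python
OPT_OUT = "prefer not to say"   # hypothetical opt-out label

def split_income_item(raw):
    """Split one income item into a binary and a conditional ordinal part.

    willing: 1 for a substantive answer, 0 for the opt-out response,
             None if the question was skipped (imputed as binary).
    level:   the ordinal response where willing == 1, otherwise None;
             imputed only conditionally on willing == 1.
    """
    willing, level = [], []
    for r in raw:
        if r is None:                  # question skipped entirely
            willing.append(None); level.append(None)
        elif r == OPT_OUT:             # respondent declined to answer
            willing.append(0); level.append(None)
        else:                          # substantive ordered response
            willing.append(1); level.append(r)
    return willing, level
```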
Table 1

Missing data by country, and how it was handled in imputation models

Variable | % incomplete: Austria, Belgium, England, Germany, Greece, Hungary, Netherlands, Poland, Wales (for scale variables, % of missing responses to scale items, not scale totals) | Imputed as | Used in imputation models as
Total N | 323, 180, 323, 274, 289, 323, 237, 323, 323 | – | –
Gender | 0, 0, 0, 0, 0, 0, 0, 0, 0 | 1 binary item | Single item
Age | 0, 0, 0, 0, 0, 0, 0, 0, 0 | 1 continuous item | Single item
Education | 3, 0, 1, 0, 2, 1, 0, 0, 0 | 1 categorical item | Single item
Marital status | 2, 0, 0, 1, 1, 1, 0, 1, 0 | 1 binary item | Single item
Employment | 2, 0, 0, 1, 1, 1, 0, 2, 0 | 1 binary item | Single item
Number of medical conditions | 1, 3, 0, 1, 1, 0, 0, 3, 0 | 1 continuous item | Single item
Number of medicines | 1, 1, 0, 2, 1, 1, 2, 6, 1 | 1 continuous item | Single item
Number of tablets | 3313113121 | 1 continuous item | Single item
Number of items prescribed | 9633187266 | 1 continuous item | Single item
Dosage frequency | 1, 2, 0, 0, 1, 0, 1, 0, 0 | 1 ordered categorical item | Single item
Morisky adherence | 0, 0, 0, 0, 0, 0, 0, 0, 0 | 4 binary items | Binary item
Medication adherence rating scale | 2, 3, 1, 2, 1, 2, 1, 4, 0 | 5 ordinal items | Scale
Prescription payment | 1, 2, 0, 1, 1, 1, *, 1, * | 1 categorical item | Single item
Affordability problem | 1, 3, 2, 1, 1, 3, 1, 0, * | 1 binary item | Single item
Cost coping strategies | 2, 7, 8, 3, 2, 1, *, 8, * | 6 ordered categorical items | 6 single items
Income: source | 11, 23, 22, 28, 12, 8, 37, 46, 15 | 1 categorical item | Single item
Income: perception | 8, 25, 20, 28, 12, 7, 37, 46, 14 | 2 items: ordered categorical item conditional on binary item | 2 items
Income: ease of borrowing | 9, 23, 20, 28, 12, 7, 38, 48, 14 | 2 items: ordered categorical item conditional on binary item | 2 items
Income: total | 9, 24, 21, 28, 12, 6, 38, 46, 14 | 2 items: ordered categorical item conditional on binary item | 2 items
Health status | 1, 1, 0, 0, 0, 0, 1, 0, 0 | 1 ordered categorical item | Single item
Practitioner: type | 716172367293210 | 1 categorical item | Single item
Practitioner: gender | 10171822146293812 | 1 binary item | Single item
Satisfaction with practitioner | 111818282330357 | 17 ordered categorical items | Scale
Satisfaction with practice | 112318236330341 | 16 ordered categorical items | Scale
Optimism | 615162052262610 | 6 ordered categorical items | Scale
Illness perception questionnaire (analysed as 8 individual items) | 919192574313711 | 8 ordered categorical items | Scale
Necessities | 615162052272710 | 5 ordered categorical items | Scale
Medicine concerns | 615162052272710 | 6 ordered categorical items | Scale
Attitude | 917162474303510 | 7 ordered categorical items | Scale
Barriers (theory of planned behaviour) | 1017162373303311 | 1 ordered categorical item | Single item
Facilitators | 1118162485303411 | 3 ordered categorical items | Scale
Intention | 1018172484303310 | 2 ordered categorical items | Scale
Self efficacy | 71162363303110 | 2 ordered categorical items | Scale
Normative beliefs | 1020172474303311 | 3 ordered categorical items | Scale
Barriers | 2122223818635359 | 15 ordered categorical items | Scale
Social support | 1019192320431431 | 17 ordered categorical items | Scale
Time preference | 14252323721323419 | 4 ordered categorical items | 4 single items
Discrete choice experiment (not included in analysis) | 2576217122 | 9 binary items | 9 single items

Unless stated, variables correspond to a single predictor during analysis. Due to differences in healthcare and prescription systems between countries, not all questions applied to each country. Additionally, in Wales, one question from the barriers scale was not applicable, thus this scale has only 14 items. Whilst illness perception questions were imputed as scale items, they were analysed individually

* Variables not analysed due to differences in prescription policies


Data analysis

Primary analysis was to be conducted by country, and the survey was powered as such. The primary analysis was a logistic regression with Morisky score as outcome, aiming to identify predictors of non-adherence to medication. With N = 42 candidate predictors (Table 1), there were deemed to be too many to enter into the model, so an initial variable selection step was employed. For the regression results presented here, we used the same pragmatic variable selection as in the main analysis [21]: continuous variables were selected using univariate tests, pooled using Rubin’s rules; categorical variables were selected using Chi squared tests and ANOVA on complete case data; and variables relating to numbers of medicines were selected using t-tests controlling for age on complete case data. Variables showing univariate significance with the outcome measure were entered into the regression model. We also compared variable selection using complete case data with variable selection using Rubin’s rules in the pooled MI data, using unadjusted or age-adjusted analyses as described above [20, 26].

Simulation

We devised a simulation study to impartially assess the performance of the new method against some alternatives in a realistic setting based on the case study. We invoked a simpler set-up than the case study, to allow comparison of the proposed strategy with a full imputation model, which is not possible on the full data set. The variables included were Morisky score (fully observed), age in years (fully observed), attitude (partially observed, the sum of seven items, scored as integers from 1 to 5) and practitioner satisfaction (partially observed, the sum of 17 items, also integers from 1 to 5). We estimated four quantities: the means of attitude and practitioner satisfaction, and their coefficients in a logistic regression of Morisky score on age, attitude and practitioner satisfaction.

Simulation procedure

The simulation procedure was as follows, for 1000 replications: 323 observations of the 26 variables (Morisky score, age, 7 items of attitude, 17 items of practitioner satisfaction) were simulated from a multivariate normal distribution based on the observed vectors of means and standard deviations and the observed correlation matrix. Morisky score was rounded to the nearest of 0 or 1. Items making up attitude and practitioner satisfaction were rounded to take values of 1–5. Missing values were then introduced for items of the attitude and practitioner satisfaction scales. The probability of missing data depended on Morisky score and age, based on the real dataset (MAR). Each observation was assigned to one of three categories: all items observed, some items observed, or no items observed. Three scenarios were simulated:
Base case: 35 % had all items missing for a scale; 8 % had one or two items missing.
More incomplete observations with partial data: 18 % had all items missing for a scale; 25 % had one or two items missing.
Fewer observations with complete data: 55 % had all items missing for a scale; 15 % had just one or two missing.
For each simulated dataset, six methods were considered for dealing with the missing data, presented in Table 2. Ten imputations were used for all MI-based approaches.
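One replicate of this data-generating process can be sketched in Python; the means, standard deviations, correlation matrix and missingness probabilities below are placeholders, not the observed values from the case study.

```python
import numpy as np

def one_replicate(mean, sd, corr, n=323, seed=None):
    """Simulate one data set: column 0 = Morisky score, column 1 = age,
    columns 2-8 = attitude items, columns 9-25 = satisfaction items."""
    rng = np.random.default_rng(seed)
    X = rng.multivariate_normal(mean, np.outer(sd, sd) * corr, size=n)
    X[:, 0] = np.clip(np.round(X[:, 0]), 0, 1)    # Morisky -> 0/1
    X[:, 2:] = np.clip(np.round(X[:, 2:]), 1, 5)  # items -> integers 1-5
    # MAR missingness: probability of losing all scale items depends on
    # Morisky score and (centred) age; base-case-like rates 35 % / 8 %
    p_all = 0.35 + 0.05 * X[:, 0] - 0.002 * (X[:, 1] - X[:, 1].mean())
    u = rng.random(n)
    X[u < p_all, 2:] = np.nan                     # all items missing
    some = (u >= p_all) & (u < p_all + 0.08)      # one or two items missing
    for i in np.where(some)[0]:
        cols = rng.choice(np.arange(2, 26), size=rng.integers(1, 3),
                          replace=False)
        X[i, cols] = np.nan
    return X

mean = np.r_[0.5, 60.0, np.full(24, 3.0)]         # placeholder parameters
sd = np.r_[0.5, 10.0, np.full(24, 1.0)]
X = one_replicate(mean, sd, np.eye(26), seed=0)
```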
Table 2

Summary of methods compared in simulation

Method description | Assumptions | Comments
1. Exclude observations with any missing data from the analysis | Missing values are independent of Morisky score given the other variables | Complete case analysis
2. For partially observed scales, sum the observed values, weighted by (1/proportion of items observed). Exclude observations with completely missing scales | Partially observed items are MAR given other items in the scale, and completely missing scales are MCAR | Effectively single imputation as the mean of observed items within a scale
3. For partially observed scales, set the score to missing. Multiply impute the scale sums from a multivariate normal model with Morisky score and age as covariates | Missingness is MAR, and this process is the same for missing scales or missing items within scales | Wasteful of observed data
4. For partially observed scales, sum the observed values, weighted by (1/proportion of items observed). For completely missing scales, multiply impute the scale sums from a multivariate normal model with Morisky score and age as covariates | Completely missing scales are MAR | Uses single imputation in the same way as approach 2
5. Multiply impute missing items based on the total of the other scale, and the other items within the scale for the item being imputed (with Morisky score and age as covariates). This requires the use of chained equations with linear regression imputation rather than a multivariate normal model | Missing at random for both variables, and the regression of an item on the other scale’s total is the same as its regression on that scale’s individual items | Proposed adaptation
6. Multiply impute all items using all other items via a multivariate normal model, including Morisky score and age as covariates | Multivariate normality | In some senses the benchmark

Outcomes

For each parameter of interest we summarise percent bias (compared to analysis of complete data), coverage, and efficiency (through the empirical standard error, expressed by comparison to method 1) over the 1000 replications for that scenario. Estimates are accompanied by Monte-Carlo 95 % confidence intervals.
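These performance measures can be computed as follows; the sketch assumes nominal 95 % normal-based intervals and uses the standard large-sample approximations for the Monte Carlo standard errors.

```python
import numpy as np

def performance(estimates, ses, truth):
    """Summarise one method over replications.

    estimates, ses: per-replication point estimates and standard errors.
    Returns (value, half-width of Monte Carlo 95 % CI) for percent bias,
    coverage of nominal 95 % intervals, and empirical standard error.
    """
    q = np.asarray(estimates, dtype=float)
    s = np.asarray(ses, dtype=float)
    R = len(q)
    emp_se = q.std(ddof=1)                         # empirical standard error
    pct_bias = 100 * (q.mean() - truth) / truth
    mc_bias = 100 * 1.96 * emp_se / np.sqrt(R) / abs(truth)
    covered = (q - 1.96 * s <= truth) & (truth <= q + 1.96 * s)
    cover = covered.mean()
    mc_cover = 1.96 * np.sqrt(cover * (1 - cover) / R)
    mc_emp = 1.96 * emp_se / np.sqrt(2 * (R - 1))
    return {"pct_bias": (pct_bias, mc_bias),
            "coverage": (cover, mc_cover),
            "emp_se": (emp_se, mc_emp)}
```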

Results

To compare the performance and fit of the MI models, we plot complete case data versus imputed data, overall and by imputation. Figures 2 and 3 illustrate such comparisons for the individual item and the scale total with the highest proportions of missing data: one of the time preference variables (36 % missing data) and the social-support scale (70 % of scale totals missing, 43 % of individual items missing), both from the Polish data set. On inspection, in both cases the imputed data is similar, but not identical, to the complete case data.
Fig. 2

Social-support scale for Poland, 70 % totals and 43 % individual items missing

Fig. 3

Individual item, time preference variable 2, for Poland, 36 % missing

For those variables which were entered into the regression model in five or more countries, the regression results are illustrated as odds ratios with 95 % confidence intervals in Fig. 4. Differences in the significance of results between data analysed using MI and CC are observed for age, barriers and personal control in Austria, barriers and self-efficacy in England, barriers and employment in Poland, and age in Wales. The majority of differences (all except barriers in Austria and Poland) are attributable to narrower confidence intervals in the MI analysis, illustrating the higher power and efficiency which the MI approach offers. Whilst differences in the standard errors alter the significance of the results, there are no substantial differences in the point estimates of the β-coefficients.
Fig. 4

Forest plots illustrating odds ratios from the logistic regression

Table 3 presents the proportional reduction in standard error for MI compared to CC analysis, summarised for all variables entered in the country level regression analyses. On average, the standard error is reduced by 39 % when an MI approach is adopted over CC analysis. Standard errors are smaller for MI than for CC analysis for all variables other than illness coherence in Belgium (where there was no change in standard error between MI and CC). To ensure that the reduction in standard error was not biased by the variable selection method, the reduction was also compared for variables selected using the CC and MI approaches. For variables selected using the CC method, mean standard error reduction was 45 % (95 % CI: 12, 78 %; range 15–99.9 %). For variables selected using the MI method, mean standard error reduction was 34 % (95 % CI: 11, 58 %; range 2–57 %).
Table 3

Summary of proportional decrease in standard error, between complete case and multiple imputation analyses

Country | Mean (%) | Min (%) | Max (%) | Median (%) | Standard deviation (%)
Overall | 39 | 0 | 100 | 38 | 19
Austria | 38 | 22 | 55 | 38 | 8
Belgium | 5 | 0 | 13 | 4 | 5
England | 58 | 33 | 100 | 45 | 27
Germany | 21 | 12 | 26 | 22 | 5
Greece | 50 | 43 | 58 | 50 | 4
Hungary | 23 | 14 | 27 | 23 | 4
Netherlands | 29 | 24 | 36 | 29 | 4
Poland | 41 | 14 | 59 | 42 | 12
Wales | 36 | 28 | 47 | 37 | 6
For the univariate variable selection, disparity between which variables were selected using the MI or CC approach is summarised in Table 4. Chi squared tests indicate that the disparity was significant (χ2 = 250, p < 0.001), with agreement (sum of the main diagonal) achieved for only 92.5 % of variables. Lower agreement is observed for the variables with more missing data: at < 20 % missing data agreement was 94 %, compared to 88 % when missing data was > 20 %.
Table 4

Disparity in variable selection between CC and MI, over 42 variables in 9 countries

 | Complete case method: Included n (%) | Complete case method: Excluded n (%) | Total
Multiple imputation: Included n (%) | 86 (23) | 3 (1) | 89
Multiple imputation: Excluded n (%) | 25 (7) | 259 (69) | 284
Total | 111 | 262 | 373

χ2 = 250, p < 0.001

Figures 5, 6 and 7 show the results of the simulation for the three scenarios. In the base case (Fig. 5), significant downward bias is seen for the mean of practitioner satisfaction for methods 1, 2 and 5, with methods 5 and 6 showing significant bias on the slope. In terms of coverage, there are no significant differences between methods. Empirical standard error also shows little variability between methods, except that it is lower for method 5 on the slope for practitioner satisfaction; this reflects the downward bias.
Fig. 5

Simulation results for the three scenarios. Brackets indicate confidence intervals. Base case: 35 % had all items missing for a scale; 8 % had one or two items missing

Fig. 6

Simulation results for the three scenarios. Brackets indicate confidence intervals. More incomplete observations with partial data: 18 % had all items missing for a scale; 25 % had one or two items missing

Fig. 7

Simulation results for the three scenarios. Brackets indicate confidence intervals. Fewer observations with complete data: 55 % had all items missing for a scale; 15 % had just one or two missing

Increasing the number of incomplete observations with partial data, as in scenario 2 (Fig. 6), or increasing the number of incomplete observations (Fig. 7), tells a similar story. Methods 1 and 2 show an increase in bias compared with the base case, and method 4 shows significant downward bias and reduced coverage for the TPB slope in both scenarios. In both scenarios the empirical standard error appears lower than in the base case, reflecting the downward bias. Overall, method 6 performs best, broadly exhibiting the least bias and the greatest efficiency, and we regard it as a benchmark. This method is not always feasible, however, for example in the case study described above. Method 1 often displays substantial bias and, like method 3, is inefficient and wasteful of observed data. Method 2 shows bias in all but the base case, and may artificially reduce variability because it is effectively a single imputation. It therefore appears from the simulations that, in terms of bias, coverage and empirical standard error, method 4 or 5 would be best where method 6 is not feasible. It is unclear which of the two is more appropriate: method 4, like method 2, is akin to a single imputation, while for method 5 the assumptions seem more appropriate, but the simulation evidence suggests it can introduce bias when they are violated.

Discussion

Our proposed method for handling multiple multi-item scales allows imputation of individual items by using scale totals within the imputation models: given primary outcome p, another scale T with summary score t, and demographics d1 … dn, item s1 from scale S is imputed using p, d1 … dn, the remaining items s2 … sn of S, and the summary score t. The use of summed scale scores within the predictor equations reduces both the number of predictors in each equation and the sparsity of the data set. This approach facilitates the efficient use of MI in large survey data sets with multiple multi-item scales. Using MICE preserves the structure of the data, in terms of point estimates, the variance of variables, and their covariance. Should the approach presented here still result in overly complex prediction equations, a further simplification would be to replace s2 … sn in the prediction equation by their sum or average. For subscales of the health psychology measures, rather than imputing every individual item, one simplification of our method would be to impute only the totals of the subscales. For our data this would reduce the size of the model from 134 to 56 predictors per variable. A disadvantage of this approach, however, is that it would restrict analysis to summed scales, leaving no scope for exploring individual items. Forming scale totals prior to imputation, and then imputing missing totals, is a further simplification, but comes with an additional disadvantage: for respondents who have completed some, but not all, of the items in a subscale, those responses are discarded, or imputed by an ad hoc method such as using the mean of observed items. Taking as an example the 17-item practitioner satisfaction scale in the Austrian data set, 262 of 323 respondents completed all items. Responses to individual items ranged from 278 to 292, and the above approach would discard a total of 437 responses, collected from 31 respondents, from this scale.
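As an illustration, the predictor set for one item under this adaptation might be assembled as follows. This is a minimal sketch, not the authors' code: the variable names, the two hypothetical three-item scales and the simulated data are invented for illustration, and a full analysis would embed this predictor set inside a chained-equations loop over all incomplete items, with proper random draws for each imputation.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 200
# Hypothetical survey: outcome p, demographics d1-d2, and two 3-item scales S and T.
df = pd.DataFrame(rng.normal(size=(n, 9)),
                  columns=["p", "d1", "d2", "s1", "s2", "s3", "t1", "t2", "t3"])
df.loc[rng.choice(n, 30, replace=False), "s1"] = np.nan  # s1 partly missing

def predictor_set(item, own_scale, other_scales, covariates, data):
    """Adapted MICE predictors for `item`: the covariates, the remaining items
    of the item's own scale, and the summary score (row total) of every other
    scale, in place of that scale's individual items."""
    own = [c for c in own_scale if c != item]
    X = data[covariates + own].copy()
    for name, items in other_scales.items():
        X[name] = data[items].sum(axis=1)  # scale total replaces its items
    return X

X = predictor_set("s1", ["s1", "s2", "s3"], {"t_total": ["t1", "t2", "t3"]},
                  ["p", "d1", "d2"], df)
print(list(X.columns))  # ['p', 'd1', 'd2', 's2', 's3', 't_total']

# One regression-imputation step for s1 (a single pass; MICE would iterate
# over all incomplete items and add a random draw to each prediction).
obs = df["s1"].notna()
beta, *_ = np.linalg.lstsq(X[obs].to_numpy(), df.loc[obs, "s1"].to_numpy(),
                           rcond=None)
df.loc[~obs, "s1"] = X[~obs].to_numpy() @ beta
```

The point of the reduction is visible in the predictor list: the other scale contributes one column (its total) rather than one column per item, which is how the case study's models shrink from 134 predictors to at most 72.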
Our simulation study compares these alternatives with a benchmark ‘complete’ MI analysis and with complete case alternatives. Results from the simulation indicate that simplification by averaging incomplete scales and our proposed method perform comparably, with the simplification reducing model complexity at the cost of a slight loss of efficiency. Complete case methods were seen to perform poorly, with an increase in bias, particularly when the amount of missingness was increased. This result is consistent with a previous study on multi-item imputation, in which mean imputation and single imputation were shown to have larger bias and worse coverage than item-level multiple imputation, and complete case analysis was shown to overestimate the standard error and reduce power [16]. Certain limitations are acknowledged. Typically, analysis with MI relies on the assumption that data are MAR. This assumption cannot be proven, but for large, well-conducted surveys it is often considered a reasonable starting point for statistical analysis. Rubin et al. conclude that whilst assuming MAR may be inadequate if there are high levels of missingness or insufficient relevant covariates, “the evidence seems to be that at least in some carefully designed survey contexts, modelling observed data under MAR can provide acceptable answers” [27]. A further limitation of the proposed method for handling multiple multi-item scales is the assumption that items from different scales are correlated only via the scale totals. The majority of scales and measures used in this survey are validated, and were tested during analysis for internal consistency. Additionally, as the survey was structured such that each measure was presented in its entirety, independently, this is a plausible assumption for our data.
During data analysis, significant univariate differences were observed between CC and MI, indicating that performing variable selection on the different data sets would result in different variables entering the regression model. In comparison with CC, using the same variables for both approaches, regression results from MI show the standard error to be reduced by an average of 39 %, with no cases in which the standard error increases, thus yielding more precise conclusions. Data collection will almost always result in missing data. It is the duty of researchers and analysts first to minimise the extent of missing data, by ensuring that appropriate methods for enhancing data capture are implemented, and then to handle missingness in the way best suited to the data and research question. Poor handling and reporting of missing data may result in misleading conclusions and is one of the main reasons for publication rejection [28, 29]. With the advent of MI routines in SPSS, R, SAS and Stata, MI is now readily accessible to analysts as a robust method for handling missing data which can be applied in a number of contexts [4]. Alongside our proposed adaptation for imputing multiple multi-item scales, ordinal regression and conditional imputation are also powerful tools which, in this study, have allowed us to address the challenges presented by large scale survey data. Our proposed adaptation makes MI practical for large scale survey data where a full prediction model may be infeasible, and we have shown that the use of MI in this way makes better use of available data and can yield substantively different results from simpler techniques.
References (18 in total; first 10 shown)

1. Sinharay S, Stern HS, Russell D. The use of multiple imputation for the analysis of missing data. Psychol Methods. 2001.
2. Noble SM, Hollingworth W, Tilling K. Missing data in trial-based cost-effectiveness analysis: the current state of play. Health Econ. 2010.
3. Wood AM, White IR, Royston P. How should variable selection be performed with multiply imputed data? Stat Med. 2008.
4. Eekhout I, de Boer RM, Twisk JWR, de Vet HCW, Heymans MW. Missing data: a systematic review of how they are reported and handled. Epidemiology. 2012.
5. Fernandes-Taylor S, Hyun JK, Reeder RN, Harris AH. Common statistical and research design problems in manuscripts submitted to high-impact medical journals. BMC Res Notes. 2011.
6. Morrison VL, Holmes EAF, Parveen S, Plumpton CO, Clyne W, De Geest S, Dobbels F, Vrijens B, Kardas P, Hughes DA. Predictors of self-reported adherence to antihypertensive medicines: a multinational, cross-sectional survey. Value Health. 2015.
7. Sterne JAC, White IR, Carlin JB, Spratt M, Royston P, Kenward MG, Wood AM, Carpenter JR. Multiple imputation for missing data in epidemiological and clinical research: potential and pitfalls. BMJ. 2009.
8. Simons CL, Rivero-Arias O, Yu LM, Simon J. Multiple imputation to deal with missing EQ-5D-3L data: should we impute individual domains or the actual index? Qual Life Res. 2014.
9. Karahalios A, Baglietto L, Carlin JB, English DR, Simpson JA. A review of the reporting and handling of missing data in cohort studies with repeated assessment of exposure measures. BMC Med Res Methodol. 2012.
10. White IR, Daniel R, Royston P. Avoiding bias due to perfect prediction in multiple imputation of incomplete categorical variables. Comput Stat Data Anal. 2010.