Rob Arbuckle, Patricia Halstead, Chris Marshall, Brenda Zimmerman, Kate Bolton, Antoine Regnault, Cathy Gelotte.
Abstract
OBJECTIVES: Given the lack of validated patient-reported outcome (PRO) instruments assessing cold symptoms, a new pediatric PRO instrument was developed to assess multiple cold symptoms: the Child Cold Symptom Questionnaire (CCSQ). The objective of this research was to evaluate the measurement properties of the CCSQ.
Year: 2020 PMID: 33174079 PMCID: PMC7794207 DOI: 10.1007/s40271-020-00462-3
Source DB: PubMed Journal: Patient ISSN: 1178-1653 Impact factor: 3.883
Fig. 1 An overview of the study
Fig. 2 Example items showing response scales and symptom illustrations
© Johnson & Johnson
Description and criteria for psychometric validation analyses performed
| Property | Description/definition | Criteria for consideration |
|---|---|---|
| Quality of completion | Evaluation of frequency and percentage of missing items per child, frequency and percentage of missing data per item, number of children with at least one missing item, number of missing questionnaires for each planned assessment | Items with high levels of missing data were considered for deletion |
| Item response distributions | Examined in the total sample and within age subgroups (6–8 and 9–11 years) to evaluate if any items exhibited a skewed distribution, floor/ceiling effects, or bimodal distribution or if any particular response options were overly favored. Floor effects refer to a high percentage of children with the lowest (best) possible score for an item or score, and ceiling effects refer to a high percentage of children with the highest (worst) possible score for an item or score | Items with evidence of problematic distributions were flagged and considered for deletion |
| Confirmatory factor analysis (CFA) | Performed to evaluate the a priori hypothesis for aggregating the items into multi-item domains by calculating a score for each type of cold symptom separately during the overnight, morning, day, and evening timeframes | The quality of the CFA models was assessed according to the following goodness-of-fit indices: root mean square error of approximation (RMSEA), good fit if RMSEA < 0.05, acceptable fit if RMSEA < 0.08; root mean square residual (RMR) and standardized RMR, good fit if RMR < 0.05; goodness-of-fit index (GFI) and adjusted GFI (AGFI), good fit if GFI or AGFI > 0.90; normed fit index (NFI) and comparative fit index (CFI), good fit if NFI or CFI > 0.90. In addition, the Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the models underpinned by hypotheses A and B were computed and compared; AIC and BIC are indices of fit that can be compared between non-nested models. The strength of the relation of the items to the unobserved variables that they are assumed to measure was evaluated by the standardized loadings. Potential improvement in model fit was assessed using modification indices, which indicated whether there were better item groupings |
| Score distributions | Score distributions of the resulting multi-item scores and single-item responses were described in the cold eligible sample and by age category (6–8 years; 9–11 years) | Percentages of participants scoring at the lowest and highest possible values of scores and response scales were evaluated to detect potential floor or ceiling effects |
| Test–retest reliability | Evaluated to establish the ability of the instrument to give reproducible results when administered twice, over a given time period, in a sample with stable health. Evaluated between days 1 and 2 for the evening assessments and days 2 and 3 for the morning assessments to maximize the likelihood of capturing an adequate sample of patients with stable colds. Assessed by calculating intra-class correlation coefficients (ICCs) in children whose cold severity was unchanged according to the Parent Global Impression of Change (PGI-C) | This analysis was considered exploratory given that cold symptoms may not be stable, even when comparing 2 consecutive days. Therefore, while the normal threshold of ICC > 0.70 was targeted, it was accepted that it may not be achieved |
| Convergent validity | Involved examining correlations between the scores of the instrument under study and those of a validated instrument assessing related constructs. Correlations of the Child Cold Symptom Questionnaire (CCSQ) single- and multi-item scores with the Strep-PRO scores were examined | Evidence of convergent or concurrent validity was considered to have been demonstrated if there was a logical pattern of correlations among scores, with scores measuring similar or related symptoms correlating more highly than scores measuring unrelated symptoms. In particular, the Strep-PRO item scores for “sore throat” and “headache” were expected to correlate most highly (and with correlation coefficients > 0.6) with the CCSQ scores assessing the corresponding symptoms. The Strep-PRO item score for “pain swallowing” was expected to correlate most highly with the CCSQ sore throat score |
| Known groups validity | Involved comparing scores among groups that would be expected to differ on the construct of interest. Groups were defined based on: Child Global Impression of Severity (CGI-S) responses for that day; Parent Global Impression of Severity (PGI-S) responses for that day; and children with and without a current cold | Statistically significant differences based on analysis of variance (ANOVA) |
| Ability to detect change over time | The ability of the CCSQ scores to detect changes over time in individuals who had changed with respect to the symptom measured. Changes in the CCSQ scores from day 1 to day 2 and from day 1 to day 7 were described and compared among children considered “improved,” having “no change,” and “worsened,” as determined using the ratings on the PGI-C, CGI-S, and PGI-S | Change scores were compared among the change groups using ANOVA. Effect size (ES), standardized response mean, and Guyatt’s statistic were calculated to evaluate the magnitude of changes in CCSQ scores over time, interpreted using Cohen’s guidance: ES around 0.20, small change; ES around 0.50, moderate change; ES around 0.80, large change. Statistically significant changes over time and at least moderate ESs were considered evidence of responsiveness |
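The RMSEA criterion in the CFA row above can be made concrete with a small numeric sketch. This is an illustration only, using Steiger's chi-square-based formula; the chi-square, degrees of freedom, and sample size below are hypothetical, not values from the study:

```python
def rmsea(chi2: float, df: int, n: int) -> float:
    """Root mean square error of approximation from a model chi-square.

    chi2: model chi-square statistic, df: model degrees of freedom,
    n: sample size. Interpretation per the criteria above:
    RMSEA < 0.05 indicates good fit, RMSEA < 0.08 acceptable fit.
    """
    return (max(chi2 - df, 0.0) / (df * (n - 1))) ** 0.5

# Hypothetical model: chi-square of 75 on 50 df in a sample of 138 children
fit = rmsea(chi2=75.0, df=50, n=138)
```

A chi-square no larger than its degrees of freedom yields RMSEA = 0, i.e., the model fits at least as well as an "exact fit" model.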
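The ICC calculation described for test–retest reliability can be sketched as follows. The paper does not state which ICC form was used, so the two-way random-effects, absolute-agreement, single-measure ICC(2,1) below is an assumption:

```python
from statistics import mean

def icc_2_1(day1, day2):
    """ICC(2,1): two-way random effects, absolute agreement, single measure.

    day1, day2: equal-length score lists for the same children on two
    occasions (k = 2 measurements). Computed from the two-way ANOVA
    mean squares for rows (children), columns (occasions), and error.
    """
    n, k = len(day1), 2
    rows = list(zip(day1, day2))
    grand = mean(day1 + day2)
    row_means = [mean(r) for r in rows]
    col_means = [mean(day1), mean(day2)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for r in rows for x in r)
    ss_err = ss_total - ss_rows - ss_cols
    ms_r = ss_rows / (n - 1)          # between-children mean square
    ms_c = ss_cols / (k - 1)          # between-occasions mean square
    ms_e = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

Identical scores on both days give an ICC of 1.0; the ICC > 0.70 threshold in the criteria column corresponds to the usual bar for acceptable test–retest reliability.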
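The responsiveness statistics named in the last row (ES, standardized response mean, and Guyatt's statistic) have textbook definitions that can be sketched directly. The formulas below follow the usual conventions (baseline-SD denominator for ES) and are not taken from the paper's analysis code:

```python
from statistics import mean, stdev

def effect_size(baseline, followup):
    """Cohen-style effect size: mean change divided by the baseline SD."""
    change = [f - b for b, f in zip(baseline, followup)]
    return mean(change) / stdev(baseline)

def standardized_response_mean(baseline, followup):
    """Mean change divided by the SD of the change scores."""
    change = [f - b for b, f in zip(baseline, followup)]
    return mean(change) / stdev(change)

def guyatt_statistic(change_improved, change_stable):
    """Mean change in improved children over the SD of change in stable children."""
    return mean(change_improved) / stdev(change_stable)
```

Under Cohen's guidance cited above, |ES| around 0.2, 0.5, and 0.8 mark small, moderate, and large change, so a symptom score that drops sharply as children recover should produce an ES well beyond 0.8 in magnitude.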
Fig. 3 Definition of multi-item scores without nasal dimensions showing standardized factor loadings (cold eligible sample, N = 138). *Standardized factor loadings
Final scoring structure of the single-item and multi-item scores
| Composite multi-item scores | Single-item scores | Morning item | Evening item |
|---|---|---|---|
| Items retained | | | |
| Nasal | Runny nose | M03. Runny nose (from when you woke up) | E02. Runny nose (this evening) |
| | Stuffy nose | M09. Stuffy nose (right now) | E03. Stuffy nose (right now) |
| | Clear nose | M10. Clear nose (right now) | E04. Clear nose (right now) |
| | Cough | M04. Cough bad (this morning) | E01. Cough bad (this evening) |
| Aches and pain | Sore throat | M13. Sore throat (right now) | E07. Sore throat (right now) |
| | Headache | M14. Head hurt (right now) | E08. Head hurt (right now) |
| Day nasal | Day wipe or blow | | E10. Wipe or blow nose (for all of today) |
| | Day stuffy nose | | E11. Stuffy nose (for all of today) |
| | Day cough | | E12. Cough amount (for all of today) |
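The record does not reproduce the scoring algorithm itself, but the score ranges reported with Figs. 4 and 5 (0–12 for the 3-item nasal score, 0–8 for the 2-item scores, 0–4 for single items) are consistent with simple sums of 0–4 item responses. A hedged sketch under that assumption, with hypothetical item keys:

```python
# Assumed composite structure, matching the scoring table above;
# the dictionary keys are illustrative names, not the instrument's item IDs.
COMPOSITES = {
    "nasal": ["runny_nose", "stuffy_nose", "clear_nose"],
    "aches_and_pain": ["sore_throat", "headache"],
    "day_nasal": ["day_wipe_or_blow", "day_stuffy_nose"],
}

def score_composites(item_responses):
    """Sum 0-4 item responses into composite scores; None if any item is missing."""
    scores = {}
    for name, items in COMPOSITES.items():
        vals = [item_responses.get(i) for i in items]
        scores[name] = None if any(v is None for v in vals) else sum(vals)
    return scores
```

Returning None on missing items mirrors the completeness checks in the quality-of-completion row of the validation table; the actual handling of missing data in the study may differ.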
Test–retest reliability between the day 1 and day 2 evening scores and convergent validity Pearson correlations between the Child Cold Symptom Questionnaire (CCSQ) and the Strep-PRO scores on day 1
| Type of score | Symptom | Mean change (SD) | P value | ICC | CCC | Test–retest Pearson correlation | Strep-PRO: Throat hurt | Head hurt | Hurt to swallow | Fever or feel warm | Food eating | Playing | Feeling tired | Total score |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Single-item scores | Runny nose | − 0.3 (1.0) | 0.007 | 0.64 | 0.63 | 0.66 | 0.34* | 0.24* | 0.25* | 0.26* | 0.24* | 0.23* | 0.18* | 0.34* |
| Cough | − 0.1 (1.0) | 0.285 | 0.73 | 0.74 | 0.44* | 0.48* | 0.33* | 0.41* | 0.37* | 0.42* | ||||
| Stuffy nose | − 0.1 (1.0) | 0.259 | 0.71 | 0.72 | 0.21* | 0.30* | 0.17* | 0.19* | 0.12 | 0.16 | 0.18* | 0.27* | ||
| Clear nose | 0.0 (1.2) | 0.934 | 0.63 | 0.63 | 0.63 | 0.16 | 0.28* | 0.18* | 0.25* | 0.28* | 0.28* | 0.06 | 0.29* | |
| Sore throat | − 0.2 (0.9) | 0.043 | 0.76 | 0.77 | 0.35* | 0.37* | 0.46* | 0.35* | 0.33* | |||||
| Headache | − 0.1 (0.9) | 0.294 | 0.80 | 0.80 | 0.39* | 0.42* | 0.20* | 0.36* | 0.32* | 0.41* | ||||
| Day wipe or blow | − 0.1 (1.1) | 0.239 | 0.64 | 0.64 | 0.65 | 0.29* | 0.18* | 0.25* | 0.22* | 0.23* | 0.22* | 0.15 | 0.31* | |
| Day stuffy nose | − 0.1 (1.1) | 0.586 | 0.68 | 0.68 | 0.68 | 0.24* | 0.30* | 0.25* | 0.27* | 0.15 | 0.20* | 0.30* | 0.34* | |
| Day cough | − 0.2 (1.0) | 0.044 | 0.73 | 0.73 | 0.54* | 0.35* | 0.45* | 0.29* | 0.34* | 0.30* | 0.41* | 0.54* | ||
| Composite multi-item scores | Nasal | − 0.4 (2.3) | 0.088 | 0.71 | 0.72 | 0.32* | 0.38* | 0.27* | 0.32* | 0.29* | 0.31* | 0.19* | 0.41* | |
| Aches and pain | − 0.3 (1.4) | 0.033 | 0.83 | 0.84 | 0.35* | 0.49* | 0.40* | 0.45* | ||||||
| Day nasal | − 0.2 (1.6) | 0.240 | 0.70 | 0.71 | 0.33* | 0.30* | 0.32* | 0.31* | 0.24* | 0.26* | 0.28* | 0.41* | ||
ICC values > 0.70 (surpassing the a priori threshold) are highlighted in bold
CCC concordance correlation coefficient, ICC intra-class correlation coefficient, PGI-C Parent Global Impression of Change
*Convergent validity Pearson correlations that were statistically significant at the P < 0.05 level; the highest convergent validity correlations (≥ 0.60) are highlighted in bold
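The convergent validity columns above are plain Pearson product-moment correlations; a minimal self-contained implementation for clarity:

```python
def pearson(x, y):
    """Pearson product-moment correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den
```

The validation criteria treat r ≥ 0.60 between scores measuring the same symptom (e.g., CCSQ sore throat vs. Strep-PRO "throat hurt") as strong evidence of convergence, with lower correlations expected between unrelated symptoms.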
Fig. 4 Known groups validity: ANOVA comparison of CCSQ evening scores according to CGI-S defined groups at day 1 (N = 138). Note, “nasal,” “aches and pain,” and “day nasal” are multi-item scores (made up of 3, 2, and 2 items, respectively) and therefore have possible score ranges of 0–12, 0–8, and 0–8, respectively, rather than 0–4 as for the other items. ANOVA analysis of variance, CCSQ Child Cold Symptom Questionnaire, CGI-S Child Global Impression of Severity, SEM standard error of the mean. *ANOVA showed statistically significant differences at the P < 0.05 level. **ANOVA showed statistically significant differences at the P < 0.01 level. ***ANOVA showed statistically significant differences at the P < 0.001 level
Fig. 5 Change over time: ANOVA comparison of changes in CCSQ evening scores between day 1 and day 2 for change groups defined by CGI-S changes between day 1 and day 2 (N = 138). ANOVA analysis of variance, CCSQ Child Cold Symptom Questionnaire, CGI-S Child Global Impression of Severity, SEM standard error of the mean. Note, “nasal,” “aches and pain,” and “day nasal” are multi-item scores (made up of 3, 2, and 2 items, respectively) and therefore have possible change score ranges of 0–12, 0–8, and 0–8, respectively, rather than 0–4 as for the other items. *ANOVA showed statistically significant differences among change groups at the P < 0.05 level. **ANOVA showed statistically significant differences among change groups at the P < 0.01 level. ***ANOVA showed statistically significant differences among groups at the P < 0.001 level
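The known-groups and change-over-time comparisons in Figs. 4 and 5 rest on one-way ANOVA. The F statistic it reports can be sketched as the ratio of between-group to within-group mean squares (a plain implementation, not the study's analysis code):

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA across k groups of observations."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand = mean([x for g in groups for x in g])
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    ms_between = ss_between / (k - 1)   # variance explained by group membership
    ms_within = ss_within / (n - k)     # residual variance
    return ms_between / ms_within
```

A large F (well-separated group means relative to within-group spread) is what drives the significance stars in the figures; the exact P value comes from the F distribution with (k − 1, n − k) degrees of freedom.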
- The psychometric validation of a child self-report measure of common cold symptoms in children aged 6–11 years is described.
- The single-item and multi-item scores are valid and reliable in children aged 6–11 years.
- The measure is appropriate for assessing cold symptoms in clinical trials.