Reliable and Valid Survey-Based Measures to Assess Quality of Care in Home-Based Serious Illness Programs.

Rebecca Anhang Price1, Melissa A Bradley1, Feifei Ye2, Danielle Schlang3, Maria DeYoreo4, Paul D Cleary5, Marc N Elliott4, Cheryl K Montemayor4, Martha Timmer4, Anagha Tolpadi4, Joan M Teno6.   

Abstract

Background: There is a pressing need for standardized measures to assess the quality of home-based serious illness care. Currently, there are no validated quality measures that are specific to home-based serious illness programs (SIPs) and the unique needs of their patients. Objective: To develop and evaluate standardized survey-based measures of serious illness care experiences for assessing and comparing quality of home-based serious illness care programs.
Methods: From October 2019 through January 2020, we administered a survey to patients who received care from 32 home-based SIPs across the United States. Using the 2263 survey responses, we assessed item performance and constructed composite measures via factor analysis, evaluated item-scale correlations, estimated reliability, and examined validity by regressing overall ratings and willingness to recommend care on each composite.
Results: The overall survey response rate was 36%. Confirmatory factor analyses supported five composite quality measures: Communication, Care Coordination, Help for Symptoms, Planning for Care, and Support for Family and Friends. Cronbach's alpha estimates for the composite measures ranged from 0.69 to 0.85, indicating adequate internal consistency in assessing their underlying constructs. Interprogram reliability ranged from 0.67 to 0.80 at 100 completed surveys per measure, meeting common standards for distinguishing between programs' performance. Together, the composites explained 45% of the variance in patients' overall care ratings. Communication, Care Coordination, and Planning for Care were the strongest predictors of overall ratings.
Conclusion: Our analyses provide evidence of the feasibility, reliability, and validity of proposed survey-based measures to assess the quality of home-based serious illness care from the perspective of patients and their families.

Keywords:  patient and family care experiences; patient survey; quality measurement; serious illness care

Year:  2021        PMID: 34936490      PMCID: PMC9145570          DOI: 10.1089/jpm.2021.0424

Source DB:  PubMed          Journal:  J Palliat Med        ISSN: 1557-7740            Impact factor:   2.947


Introduction

In recent years, there has been rapid growth of community-based programs that provide care for seriously ill individuals in their homes.[1-4] These serious illness programs (SIPs) are expanding at a time when both the public and private sectors are adopting more value-based payment programs, which use incentives to promote the quality and efficiency of care. Value-based programs have particularly important implications for the seriously ill population, which is at high risk for under-treatment motivated by cost concerns. To that end, the Centers for Medicare & Medicaid Services has introduced a number of initiatives that test alternative models for care of high-need, high-cost, seriously ill populations,[5-7] and is considering others.[8] Quality measures are critical in such models, complementing assessments focused on utilization and cost.[9]

However, to date, no standardized measures have been developed specifically to assess and monitor the quality of care provided by SIPs. Measures of the degree to which care is patient- and family-centered are particularly important for seriously ill patients because patients vary greatly in both their preferences for care intensity and the tradeoffs they are willing to make between quality and length of life. Surveys of patients and their family caregivers are the main means of assessing the patient- and family-centeredness of care. Survey results can be used to identify areas of patient and family experiences of care that need improvement,[10-12] to monitor quality over time, and to support benchmarking and comparison of programs within value-based models. Experts have highlighted the need for standardized measures of patient and family care in SIPs.[13]

To address this need, we developed a survey of care experiences of the seriously ill. We field-tested the survey in a sample of patients receiving care from SIPs in late 2019 and early 2020, immediately before the onset of the COVID-19 pandemic.

Methods

Survey

To develop the survey, we first conducted a systematic literature review of patient- and family-reported measures of serious illness care. We then interviewed patients, family caregivers, and health care providers from a diverse set of SIPs nationwide, and sought input from experts in serious illness care and survey research methods. We conducted cognitive interviews with patients and family caregivers to test draft questions and questionnaires. Guided by information from these activities and a conceptual framework of core aspects of high-quality serious illness care (Supplementary Appendix Table SA1),[2] we developed a 56-item field test version of the Serious Illness Survey. It included 28 evaluative questions about communication; emotional and spiritual support; access and responsiveness; shared decision making and advance care planning in support of patient goals; symptom management/palliation; care continuity and coordination; attention to social determinants of health (via referrals and connection to resources); attention to caregiver needs; and medication management, as well as two global assessments of care (an overall rating of care from the program and willingness to recommend the program to friends and family). Supplementary Appendix Table SA2 lists the field test survey questions in each of these domains. In addition, the survey included questions about patients' health status, functional status, demographic characteristics, recent visits and calls from the program, and whether and why a proxy completed the survey. Wherever possible, questions were derived or adapted from surveys with published assessments of validity and reliability. Final, more concise versions of the survey instrument are available free online.[14]

Field test

Sites

The study was conducted in 32 geographically diverse SIPs that provide home-based care. Programs were recruited from a master list of 319 SIPs developed by the project team, drawn primarily from the Center to Advance Palliative Care (CAPC) National Palliative Care Registry,[15] a report on community-based model programs for the seriously ill,[16] and responses to announcements posted by the National Hospice & Palliative Care Organization and the American Academy of Hospice and Palliative Medicine. To be eligible to participate, programs needed to provide medical care to seriously ill patients in their homes. Almost all included programs provide after-hours access to care by phone or in person, and have either a physician or a nurse practitioner on the team that makes home visits. Twenty-three of the programs are based out of hospices or home health agencies, six out of health systems or health plans, and three are part of medical groups. Five of the 32 programs operate in more than one state. The average program size was 203 patients actively in care at a given time (median: 116; range: 7 to 1481).

Sample

Patients within each program were eligible for the survey if they were adults (age 18 or older at the time the sample was selected), received care at a private home or assisted living facility, and had been receiving care from the program for at least 3 months and no more than 24 months at the time the sample was selected. We identified all survey-eligible patients from participating programs, for a total sample of 6456 patients. Sampled patients who indicated on the survey that they had not received visits from the program in the past three months were not considered eligible.

Survey protocol

Within each program, eligible patients were randomly assigned to one of two modes of survey administration, mail-only or mail-telephone. The mail-only mode consisted of a prenotification letter, followed by a mail survey one week later, and an additional mail survey three weeks after that if the survey had not been returned. The mail-telephone mode consisted of a prenotification letter, followed by a mail survey one week later, and up to five calls to complete the survey by phone if the mail survey was not returned after three weeks. All cover letters and introduction scripts were addressed to the patient, but indicated that a family member or friend could assist with or complete the survey for the patient if needed. The survey was available in both English and Spanish; when the sample file indicated Spanish as the patient's preferred language, mailings and initial telephone calls were conducted in Spanish, and telephone interviewers also offered Spanish as an option. We field-tested the survey between October 2019 and January 2020. The study was approved by the RAND Corporation's Human Subjects Protection Committee, which serves as RAND's IRB.

Analyses

Like several other public reporting initiatives,[17,18] we used “top-box” scores to promote ease of understanding by consumers.[19] To calculate top-box scores, we classify the response indicating the best quality as 100 and all other responses as 0 (e.g., “always” = 100; all other responses = 0). For the overall rating item, responses of 9–10 are classified as 100 and 0–8 as 0.[20] Item scores are not calculated for respondents who indicate, via a screening question or a not-applicable response option, that the item is not relevant to them.
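The scoring rule above is simple enough to sketch in code. The following is an illustrative Python sketch, not the authors' implementation; the function names `top_box` and `top_box_rating` and the response labels are assumptions for illustration only.

```python
def top_box(response, best="always"):
    """Score one survey item on the 0/100 top-box scale: 100 if the
    respondent chose the most favorable category, 0 otherwise, and
    None when the item did not apply (screened out or a
    not-applicable response option was selected)."""
    if response is None:
        return None  # item score is not calculated
    return 100 if response == best else 0


def top_box_rating(rating):
    """Overall rating item (0-10 scale): 9 and 10 count as top-box."""
    return 100 if rating >= 9 else 0
```

For example, `top_box("always")` scores 100, `top_box("usually")` scores 0, and an item answered with a "does not apply to me" option is simply excluded from scoring.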

Composites

The project team and technical expert panel reviewed the 28 evaluative questions administered in the field test (Supplementary Appendix SA2) and retained 19 for composite development. We used confirmatory factor analysis (CFA) to evaluate the factor structure of these 19 items and used modification indices to drop one item (about trust) that loaded on more than one factor; the final CFA was performed on the remaining 18 evaluative items. We used weighted least squares means and variance adjusted (WLSMV) estimation to account for the dichotomous nature of top-box item scores.[21] We used a criterion of factor loadings ≥0.40 for inclusion within a composite,[22] and assessed overall model fit using the Comparative Fit Index (CFI), the root mean square error of approximation (RMSEA), and the weighted root mean square residual (WRMR). Prior research indicates that a well-fitting model typically has a CFI >0.95, RMSEA <0.05, and WRMR <1.0, with WRMR being the least critical of the three.[23-25] The model χ2 statistic and standard errors of model estimates were adjusted to account for the clustering of patients within programs.[26,27] For the CFA, we hypothesized five composite measures of quality of serious illness care: Communication, Care Coordination, Help for Symptoms, Planning for Care, and Support for Family and Friends. To evaluate the degree to which these measures capture distinct content domains, we calculated correlations between the composite scores, computed as the average of top-box-scored items, adjusting for clustering within programs. Correlations exceeding 0.80 may indicate that composites are measuring aspects of care that are insufficiently distinct.[22]
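The distinctness check described above can be sketched roughly in Python. This is not the authors' code: `composite_scores` and `flag_overlapping` are hypothetical names, and the adjustment for clustering of patients within programs is omitted for simplicity.

```python
import numpy as np


def composite_scores(topbox, groups):
    """Composite score per respondent: the mean of the member items'
    top-box scores (0/100), ignoring items scored as missing (NaN)."""
    return {name: np.nanmean(topbox[:, idx], axis=1)
            for name, idx in groups.items()}


def flag_overlapping(scores, cutoff=0.80):
    """Return composite pairs whose correlation exceeds the cutoff,
    suggesting they may not measure sufficiently distinct domains."""
    names = list(scores)
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a, b = scores[names[i]], scores[names[j]]
            keep = ~(np.isnan(a) | np.isnan(b))  # complete pairs only
            r = np.corrcoef(a[keep], b[keep])[0, 1]
            if r > cutoff:
                flagged.append((names[i], names[j], round(r, 3)))
    return flagged
```

In the paper's data, the highest observed inter-composite correlation (0.615, between Care Coordination and Communication) falls below the 0.80 cutoff, so no pair would be flagged.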

Case-mix adjustment

We adjusted for differences in case mix across SIPs,[28] including patient age, education, diagnosis, proxy assistance with survey completion, self-reported ability to get out of bed or house, self-reported physical and mental health, and response percentile (a within-program rank-based measure of the time between survey administration and survey response).[29]
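The logic of case-mix adjustment can be illustrated with a minimal sketch: fit scores on program indicators plus covariates, then predict each program's score for a population with average covariate values. This is a simplification under assumed OLS, not the authors' model (which adjusts for the full covariate list and survey mode); `casemix_adjusted_means` is a hypothetical name.

```python
import numpy as np


def casemix_adjusted_means(program, X, y):
    """Fit y on program indicators plus case-mix covariates (no
    intercept: one indicator per program), then report each program's
    predicted score at the population-average covariate values."""
    programs = np.unique(program)
    D = (program[:, None] == programs[None, :]).astype(float)
    Z = np.column_stack([D, X])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    prog_effects = beta[: len(programs)]
    cov_effects = beta[len(programs):]
    return programs, prog_effects + X.mean(axis=0) @ cov_effects
```

A program serving sicker patients would see its raw mean pulled down by case mix; the adjusted mean removes that disadvantage so that programs can be compared fairly.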

Reliability

To assess the reliability of the proposed quality measures, we calculated the interunit (i.e., program-level) reliability of each measure, a 0–1 index of how precisely measure scores distinguish between programs' performance. We calculated the program-level reliability for each measure using intraclass correlations (ICCs) of the case-mix- and survey mode-adjusted top-box scores, excluding programs with fewer than 10 respondents. We also calculated predicted program-level reliability with 100 respondents using the Spearman-Brown formula.[30] When programs are being compared, measure reliability of 0.70 or greater is commonly considered adequate.[31] We also calculated the internal consistency reliability of the composites using Cronbach's alpha, which increases with the number of items in a composite measure and their average correlation with one another; larger values indicate more precise measurement of the underlying construct. Cronbach's alphas of 0.70 or higher are considered adequate for group comparisons.[31]
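Both reliability quantities have closed forms that can be checked directly. The sketch below implements the standard formulas (not the authors' code; complete-case data are assumed for the alpha calculation).

```python
import statistics


def cronbach_alpha(items):
    """Cronbach's alpha for a composite. `items` holds one list of
    respondent scores per item (complete cases only)."""
    k = len(items)
    item_var = sum(statistics.pvariance(col) for col in items)
    total_var = statistics.pvariance([sum(row) for row in zip(*items)])
    return k / (k - 1) * (1 - item_var / total_var)


def spearman_brown(icc, k):
    """Projected program-level reliability with k completed surveys,
    given the intraclass correlation (single-response reliability)."""
    return k * icc / (1 + (k - 1) * icc)
```

Plugging in the ICCs the paper reports later (Table 4), `spearman_brown(0.029, 100)` gives about 0.75 for Communication and `spearman_brown(0.020, 100)` about 0.67 for the overall rating, matching the reported reliabilities at 100 completed surveys.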

Validity

To assess construct validity, we evaluated the associations of each composite measure's top-box score with the top-box scores of the two global measures, Overall Rating of the Program and Willingness to Recommend the Program. We estimated multivariate linear regression models with the global measures as dependent variables to highlight the unique association of each composite with those measures. All models were adjusted for the case-mix variables and mode of survey administration. We fit models that included only one composite at a time as a predictor, as well as a model that included all composites simultaneously as predictors. All models were estimated with WLSMV in Mplus[32] to correct for attenuation in regression coefficients with categorical outcomes.[33,34] We calculated the squared semipartial correlation, or unique r2, associated with each composite, which indicates the proportion of variance in the outcome uniquely associated with that composite. As with the CFA models, standard errors and significance tests of regression coefficients were adjusted for clustering of patients within programs.[26,27] CFA and validity testing were performed in Mplus 8; missing data were handled using full-information likelihood estimation.
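The "unique r2" idea can be illustrated with ordinary least squares: it is the drop in model R2 when one predictor is removed from the full model. This is a simplification (the paper's models use WLSMV estimation in Mplus with clustering adjustments), and the function names are hypothetical.

```python
import numpy as np


def r_squared(X, y):
    """R^2 from an OLS fit of y on X (with an intercept added)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    ss_res = float(resid @ resid)
    ss_tot = float(((y - y.mean()) ** 2).sum())
    return 1.0 - ss_res / ss_tot


def unique_r2(X, y, j):
    """Squared semipartial correlation of predictor j: the loss in
    R^2 when column j is dropped from the full model."""
    return r_squared(X, y) - r_squared(np.delete(X, j, axis=1), y)
```

Because the reduced model is nested in the full model, each unique r2 is nonnegative, and predictors that overlap heavily with the others contribute little unique variance.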

Results

The overall response rate to the survey was 36.4% (30.4% in the mail-only mode and 42.5% in the mail-telephone mode). The average age of respondents was 79; 75% were non-Hispanic white (Table 1). Forty-four percent received seven or more in-person visits from the program in three months. Fourteen percent reported that they were not able to leave the house, and 58% reported being in fair or poor health. Proxies completed the survey on behalf of the patient for 33% of respondents; an additional 14% reported some other form of proxy assistance. A comparison of the characteristics of respondents and nonrespondents is available in Supplementary Appendix Table SA3.
Table 1.

Characteristics of Patients Responding to Serious Illness Survey

Characteristic | Respondents (%)[a], n = 2263
Sex
 Female | 58.4%
 Male | 41.6%
Age (mean) | 78.7
 18–64 | 13.8%
 65–74 | 17.7%
 75–84 | 29.6%
 85–89 | 19.5%
 90 or older | 19.4%
Race/ethnicity
 Non-Hispanic white | 75.4%
 Non-Hispanic black or African American | 10.0%
 Hispanic | 10.4%
 Other | 4.2%
Education
 Less than high school | 21.2%
 High school graduate or GED | 36.9%
 Some college or two-year degree | 23.3%
 College degree or more | 18.6%
Language spoken at home
 English | 92.7%
 Spanish | 4.2%
 Other | 3.1%
Primary payer for care
 Medicare (including FFS and Medicare Advantage) | 47.9%
 Medicaid | 7.2%
 Private | 23.4%
 Other (including uninsured and no payer) or Unknown | 21.4%
Length of stay in program (through beginning of survey administration)
 6 weeks to 3 months | 13.3%
 4 to <6 months | 19.3%
 6 to <12 months | 34.0%
 12 to 24 months | 33.5%
Residential setting
 Home | 74.3%
 Assisted living facility | 5.1%
 Unknown | 20.6%
Primary diagnosis
 Cancer | 13.6%
 Alzheimer's or other dementia | 9.4%
 All other | 77.1%
No. of in-person visits in 3 months
 1 to 2 times | 12.7%
 3 to 4 times | 11.4%
 5 to 6 times | 10.4%
 7 or more times | 43.9%
 Unknown | 21.7%
Proxy assistance with survey response
 Proxy completed survey for patient | 33.3%
 Proxy assisted in some other way | 14.0%
 No proxy assistance | 52.8%
Self-reported functional status
 Able to leave house | 85.9%
 Able to get out of bed but not house | 7.4%
 Not able to get out of bed | 6.7%
Self-reported physical health status
 Excellent | 2.3%
 Very good | 10.0%
 Good | 29.5%
 Fair | 39.0%
 Poor | 19.2%
Self-reported mental health status
 Excellent | 11.2%
 Very good | 22.4%
 Good | 33.4%
 Fair | 24.3%
 Poor | 8.7%

[a] Means and percentages were calculated among nonmissing values, except where large unknown categories are noted (i.e., payer for care, residential setting, and No. of in-person visits).

FFS, fee-for-service.

The five-factor CFA model provides an excellent fit to the data: χ2(125) = 269.45; CFI = 0.992; RMSEA = 0.023; and WRMR = 1.463. Table 2 displays the factor loadings and corrected item-total correlations for the 18 evaluative items proposed for the five composite measures, along with Cronbach's alpha internal consistency estimates for each composite measure. The factor loadings range from 0.71 to 0.98 and the corrected item-total correlations range from 0.44 to 0.69, suggesting these items are strong indicators of their corresponding factors.
Table 2.

Psychometric Properties of Proposed Serious Illness Care Quality Measures and Component Items

Composite and global measures and component survey items | Response options[a] | Factor loading | Corrected item-total correlation | Adjusted mean program-level top-box score[b,c]
Communication (Cronbach's alpha = 0.85) |  |  |  | 78.5%
In the last three months, how often did people from this program spend enough time with you when they visited? | Never/Sometimes/Usually/Always | 0.84 | 0.61 | 75.8%
In the last three months, how often did people from this program explain things to you in a way you could understand? | Never/Sometimes/Usually/Always | 0.86 | 0.64 | 77.4%
In the last three months, how often did people from this program listen carefully to you? | Never/Sometimes/Usually/Always | 0.93 | 0.69 | 83.7%
In the last three months, how often did you feel that people from this program cared about you as a whole person? | Never/Sometimes/Usually/Always | 0.91 | 0.68 | 81.6%
In the last three months, how often did you feel heard and understood by people from this program? | Never/Sometimes/Usually/Always | 0.97 | 0.66 | 73.9%
Care Coordination (Cronbach's alpha = 0.74) |  |  |  | 68.1%
In the last three months, how often did people from this program seem to know the important information about your medical history? | Never/Sometimes/Usually/Always | 0.78 | 0.51 | 69.4%
In the last three months, did someone from this program talk with you about the care or treatment you get from your other doctors or health care providers? | Yes, definitely/Yes, somewhat/No | 0.71 | 0.53 | 61.2%
In the last three months, did someone from this program talk with you about all the medicines you are taking? | Yes, definitely/Yes, somewhat/No/I do not take any medicines | 0.76 | 0.54 | 78.0%
Everyday activities include things like getting ready in the morning, getting meals, or going places in your community. In the last three months, did someone from this program talk with you about how to get help with everyday activities? | Yes, definitely/Yes, somewhat/No/I did not want to talk with this program about getting help with everyday activities | 0.74 | 0.44 | 48.2%
In the last three months, when you contacted this program between visits, did you get the help you needed?[d] | Yes, definitely/Yes, somewhat/No | 0.80 | 0.50 | 83.7%
Help for Symptoms (Cronbach's alpha = 0.69) |  |  |  | 60.0%
In the last three months, did you get as much help as you wanted for your pain?[d] | Yes, definitely/Yes, somewhat/No/I did not want help for my pain | 0.83 | 0.53 | 60.4%
In the last three months, did you get as much help as you wanted for your breathing?[d] | Yes, definitely/Yes, somewhat/No/I did not want help for my breathing | 0.78 | 0.48 | 66.6%
In the last three months, did you get as much help as you wanted for your feelings of anxiety or sadness?[d] | Yes, definitely/Yes, somewhat/No/I did not want help for my anxiety or sadness | 0.83 | 0.52 | 53.1%
Planning for Care (Cronbach's alpha = 0.73) |  |  |  | 55.5%
Did someone from this program ever talk with you about what you should do during a health emergency? | Yes, definitely/Yes, somewhat/No | 0.85 | 0.53 | 65.3%
Did someone from this program ever talk with you about what is important in your life? | Yes, definitely/Yes, somewhat/No | 0.85 | 0.56 | 51.3%
Did someone from this program ever talk with you about what your health care options would be if you got sicker? | Yes, definitely/Yes, somewhat/No | 0.81 | 0.56 | 50.0%
Support for Family and Friends (Cronbach's alpha = 0.70) |  |  |  | 70.8%
In the last three months, did the people from the program involve your family members or friends in discussions about your health care as much as you wanted?[d] | Yes, definitely/Yes, somewhat/No | 0.82 | 0.54 | 74.1%
In the last three months, did your family members or friends get as much emotional support as they wanted from this program?[d] | Yes, definitely/Yes, somewhat/No/My family members or friends did not want emotional support from this program | 0.98 | 0.54 | 67.4%
Global Measure: Overall Rating of the Program
Using any number from 0 to 10, where 0 is the worst care possible and 10 is the best care possible, what number would you use to rate your care from this program? | 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 | N/A | N/A | 75.3%
Global Measure: Willingness to Recommend the Program
Would you recommend this program to your friends and family? | Definitely Yes/Probably Yes/Probably No/Definitely No | N/A | N/A | 73.7%

[a] Top-box responses are noted in bold.

[b] Program-level scores are adjusted for case mix (response percentile, education, age, diagnosis, proxy use, self-reported functional status, and physical and mental health) and mode of survey administration. Adjusted program-level scores are calculated for each item assuming each program had the population-average case mix and mode of survey administration. Adjusted program-level composite scores are then generated as the average of the adjusted program-level item scores for the items that compose the composite measure.

[c] Distributions of adjusted program-level scores are calculated after restricting to programs with 10 or more respondents (28 out of 32 programs).

[d] A screening question determines whether this evaluative survey question applies to the respondent.

Across field test programs, on average, programs perform best on the proportion of respondents indicating that the people from the program listened carefully to them (84%), gave them the help they needed between visits (84%), and cared about them as a whole person (82%). In contrast, programs show the greatest room for improvement in discussions about how to get help with everyday activities, what patients' health care options would be if they got sicker, and what is important in their lives, with just 48%, 50%, and 51% of respondents, respectively, indicating that someone from the program “definitely” talked with them about these topics. In addition, on average, only 53% of respondents reported that the program “definitely” gave them the help they wanted for feelings of anxiety or sadness.
There was considerable variation across programs for many aspects of care, with the widest interquartile ranges for survey items related to getting needed help between visits, getting desired help for symptoms, planning for care, and support for family and friends (data not shown). The five composites are moderately correlated (Table 3). Intercorrelations are highest between Care Coordination and other composites (r = 0.446 to r = 0.615), reflecting the core role that SIPs play in coordinating care through communication, planning, and assessing and managing symptoms.
Table 3.

Correlations among Proposed Serious Illness Care Composite Measures

 | Communication | Care coordination | Help for symptoms | Planning for care | Support for family and friends
Communication | 1
Care coordination | 0.615 | 1
Help for symptoms | 0.404 | 0.480 | 1
Planning for care | 0.392 | 0.579 | 0.433 | 1
Support for family and friends | 0.443 | 0.446 | 0.427 | 0.425 | 1

All correlations are significant at p < 0.001.

Across the 28 programs that had at least 10 respondents, there is adequate variation in measure scores, as indicated by the ICC and reliability estimates shown in Table 4. Six of the seven proposed measures exhibit acceptable program-level reliability of 0.70 or greater at 100 respondents; the remaining measure, Overall Rating, nears the threshold at reliability of 0.67 at 100 respondents.
Table 4.

Intraclass Correlation and Reliability of Proposed Serious Illness Care Quality Measures

Measure | ICC | Reliability at 100 measure respondents
Composite measures
 Communication (5 items) | 0.029 | 0.75
 Care coordination (5 items) | 0.034 | 0.78
 Help for symptoms (3 items) | 0.027 | 0.73
 Planning for care (3 items) | 0.036 | 0.79
 Support for family and friends (2 items) | 0.039 | 0.80
Global measures
 Rating of program (1 item) | 0.020 | 0.67
 Willingness to recommend (1 item) | 0.032 | 0.77

All calculations use top-box scoring. ICCs are adjusted for case mix and mode of survey administration. Mean percentages of survey respondents completing each measure are calculated as program-level averages (i.e., the average of each program's average percent of respondents completing the given measure) using all survey respondents within each included program. Reliabilities are calculated with the Spearman-Brown prophecy formula, reliability = (k × ICC)/[(k − 1) × ICC + 1], where k is the number of completed surveys per program. Values were calculated after restricting to programs with 10 or more respondents (28 out of 32 programs). The mean proportion of respondents responding to measures ranged from 95% to 97% for all measures, with the exception of Help for Symptoms and Support for Family and Friends, for which an average of 75% and 78% of respondents responded, respectively.

ICC, intraclass correlation.

Models including all composites account for 45% of the variance in overall rating and 45% of the variance in willingness to recommend the program, after adjusting for case-mix and survey mode. Among the models that consider each composite's association individually, the Care Coordination and Communication composites are the two strongest predictors of overall rating of care (β = 0.56 and β = 0.56, respectively) and willingness to recommend (β = 0.57 and β = 0.56, respectively) as shown in Table 5. These results indicate, for example, that compared to a respondent who selected “Always” (the most favorable response) to all questions in the Communication composite, a respondent who did not select “Always” in response to any of these questions would have a 56% lower chance of rating the program a 9 or 10 out of 10, or of definitely recommending the program to family and friends.
Table 5.

Using Serious Illness Survey Composites to Predict Global Measure of Experience with the Serious Illness Program

 | Models assessing composites one-by-one | Model including all composites
 | Estimate | Estimate
Overall rating
 Communication | 0.557*** | 0.292***
 Care coordination | 0.560*** | 0.154***
 Help for symptoms | 0.456*** | 0.139***
 Planning for care | 0.522*** | 0.232***
 Support for family and friends | 0.442*** | 0.085
Willingness to recommend
 Communication | 0.563*** | 0.295***
 Care coordination | 0.565*** | 0.171***
 Help for symptoms | 0.438*** | 0.105*
 Planning for care | 0.501*** | 0.190**
 Support for family and friends | 0.471*** | 0.137**

Models are adjusted for case mix and mode of survey administration.

*p < 0.05; **p < 0.01; ***p < 0.001.

In models containing all composites, Communication is the strongest predictor of these outcomes (β = 0.29 and β = 0.30, respectively, for overall rating and willingness to recommend), with Planning for Care the second strongest predictor (β = 0.23 and β = 0.19, respectively).

Discussion

Nearly one in seven Americans is seriously ill; their care accounts for more than half of all health care expenditures.[35] Standardized, rigorously tested measures of care quality are needed to promote high-quality, person- and family-centered care for this vulnerable group. Our field test provides evidence of the reliability and validity of the proposed survey-based measures to assess the quality of home-based serious illness care provided by SIPs from the perspective of patients and their families. These measures address a critical gap in measurement of the quality of serious illness care,[13] particularly with regard to assessment of patients' perspectives on how well their care providers understand their priorities and support their decision making.[36]

In keeping with findings from hospice care and other settings, Communication is one of the strongest predictors of overall ratings of SIP care,[17,37-39] with Care Coordination and Planning for Care the next most important. The strong relationship between these domains and patients' overall assessments of care reflects the key roles that SIPs play in helping seriously ill individuals understand the care options available to them, and in acting as a central hub of information regarding medications, medical history, and supports for activities of daily living.

A key challenge of providing high-quality home-based serious illness care is tailoring services to meet the specific needs and preferences of patients across a range of disease trajectories and functional statuses. For example, while most SIP patients look to their programs for help with activities of daily living, 21% of patients reported that they did not want help from the program for everyday activities. Of those respondents reporting that they had pain, trouble breathing, and anxiety or sadness, 6%, 7%, and 19%, respectively, reported that they did not want help from the program for those symptoms.
These findings speak to the value of quality assessments that examine a range of care services that may be important to different groups of SIP patients.

Nearly half of our sample relied on a family member or friend to complete the survey for them or to assist them with reading the questions or writing responses. This underscores the importance of proxy assistance in representing seriously ill patients whose cognitive or other impairments interfere with survey response tasks. Family caregivers' responses have moderate-to-high agreement with patient responses regarding observable aspects of care.[40]

Although we recruited SIPs for participation in the field test from an extensive list of programs, there is no complete directory of all SIPs in the United States, so it is possible that the programs participating in our field test were not representative of all SIPs. In particular, the smallest programs did not have sufficient sample sizes to participate.

Our overall survey response rate of 36.4% was similar to or better than that of other patient and family surveys in routine use.[41,42] Notably, the response rate for surveys administered by mail with telephone follow-up (42.5%) was substantially higher than the rate for surveys administered by mail only (30.4%). Mixed-mode administration also increased the likelihood that those with Medicaid insurance responded to the survey.[43] We reduced nonresponse by allowing proxies to respond on behalf of patients who were not able to respond for themselves, and we addressed nonresponse bias by adjusting for differences in case mix across programs,[28] which allows for fair comparisons between programs.[29,44] Future research and survey efforts should continue to investigate approaches for promoting response from hard-to-reach populations.

Conclusion

We developed and tested a Serious Illness Survey that assesses a broad range of care experiences that both seriously ill individuals and experts deem most important for high-quality serious illness care in the home. We evaluated the survey with patients receiving care in a diverse set of home-based SIPs across the United States. We found support for the reliability and validity of the Serious Illness Survey for measuring and comparing care experiences. Results from the survey can be used to inform quality improvement efforts, monitor care quality over time, compare quality between programs, and assess the effectiveness of new initiatives that provide access to home-based serious illness care. Additional work is underway to test a version of the Serious Illness Survey that assesses experiences with a broader range of providers caring for those with serious illness.