
Development of a specialty intensity score to estimate a patient's need for care coordination across physician specialties.

Ashley Hodgson1, Thomas Bernardin1, Benjamin Westermeyer1, Ella Hagopian1, Tyler Radtke1, Ahmed Noman1.   

Abstract

BACKGROUND AND AIMS: This article develops a Specialty Intensity Score, which uses patient diagnosis codes to estimate the number of specialist physicians a patient will need to access. Conceptually, the score can serve as a proxy for a patient's need for care coordination across doctors. Such a measure may be valuable to researchers studying care coordination practices for complex patients. In contrast with previous comorbidity scores, which focus primarily on mortality and utilization, this comorbidity score approximates the complexity of a patient's interaction with the health care system.
METHODS: We use 2015 inpatient claims data from the Centers for Medicare and Medicaid Services to model the relationship between a patient's diagnoses and physician specialty usage. We estimate usage of specialist doctors by using a least absolute shrinkage and selection operator Poisson model. The Specialty Intensity Score is then constructed using this predicted specialty usage. To validate our score, we test its power to predict the occurrence of patient safety incidents and compare that with the predictive power of the Charlson comorbidity index.
RESULTS: Our model uses 127 of the 279 International Classification of Disease, 10th Revision, Clinical Modification (ICD-10-CM) diagnosis subchapters to predict specialty usage, thus creating the Specialty Intensity Score. This score has significantly greater power in predicting patient safety complications than the widely used Charlson comorbidity index.
CONCLUSION: The Specialty Intensity Score developed in this article can be used by health services researchers and administrators to approximate a patient's need for care coordination across multiple specialist doctors. It, therefore, can help with evaluation of care coordination practices by allowing researchers to restrict their analysis of outcomes to the patients most impacted by those practices.
© 2021 The Authors. Health Science Reports published by Wiley Periodicals LLC.


Keywords:  care coordination; comorbidity score; inpatients; medical specialization; patient safety incidents

Year:  2021        PMID: 34084946      PMCID: PMC8142625          DOI: 10.1002/hsr2.303

Source DB:  PubMed          Journal:  Health Sci Rep        ISSN: 2398-8835


INTRODUCTION

Health services researchers and policy‐makers have increasingly acknowledged the importance of improving care coordination for patients with multiple chronic conditions. Patients with multiple chronic conditions account for 65% of all health care dollars and 95% of Medicare dollars. Yet, these patients often receive substandard care, frequently due to failures of the coordination process. One definition of care coordination is "the deliberate organization of patient care activities between two or more participants (including the patient) involved in a patient's care to facilitate the appropriate delivery of health care services." The complexity of these patients' interactions with the health care system may put their health at greater risk, particularly if the different doctors treating them provide conflicting care plans or fail to communicate well with each other or with the patient. For these reasons, care coordination has become a focus among policy‐makers and health care administrators alike. The majority of hospitals employ a variety of care management tools and techniques, such as case managers, predictive analytic tools, checklists, visit summaries, conversation prompts in the medical record, and more. Insurers incentivize good care coordination practices through bundled payment and pay‐for‐performance systems, although evidence is mixed about the effectiveness of these systems at improving care coordination. The Affordable Care Act included incentives for the development of Accountable Care Organizations in part as a way of promoting innovation and accountability in care coordination practices. Countries across the globe, not just the United States, have highlighted care coordination as a goal.
Recognizing the importance of developing measurements of care coordination, the Agency for Healthcare Research and Quality undertook a project called the Care Coordination Measures Atlas, which summarized the available literature and identified 64 existing instruments for measuring the quality of care coordination. This project was most recently updated in June 2014, when it introduced a section on care coordination measures that can be constructed from electronic health records (EHR). Measurements that rely on EHR have numerous advantages, including a limited data collection burden and ease of aggregation across broader populations. While the project uncovered 26 measures of care coordination developed from EHR data, all of them required information beyond the International Classification of Disease, 10th Revision, Clinical Modification (ICD‐10‐CM) codes that make up the bulk of many widely available data sets. Since the release of the Care Coordination Measures Atlas, a 2018 Veterans Affairs conference focused on care coordination and published a report identifying gaps in the available measures of care coordination. This report highlighted a continued need to identify which patients could benefit most from care coordination practices. Our project seeks to fill the gap identified by the Agency for Healthcare Research and Quality and the 2018 Veterans Affairs coordination conference by providing a measure of a patient's need for care coordination that can be used by researchers with access to widely available datasets containing ICD‐10‐CM patient diagnostic codes. The Specialty Intensity Score aims to allow researchers to identify which patients are most likely to require care from multiple medical specialists. The score serves as an estimate of a patient's need for care coordination and can be combined with coordination‐sensitive outcome measures to assess the quality of care coordination within and across health care facilities.
By separating out the patients likely to need the most care coordination, researchers can study coordination‐sensitive outcomes with greater precision. Coordination‐sensitive outcomes may include the death rate among patients with low‐mortality diagnoses, hospital‐acquired infections, and hospital readmissions. To check for advancements in the field since the 2014 Agency for Healthcare Research and Quality report on care coordination instruments, we conducted a structured literature review in search of more recent measures of care coordination, described in the appendix. The process uncovered a number of process‐oriented and survey‐based care coordination measures. Of these new measures, however, only one aims to estimate a patient's need for care coordination: the Care Coordination Tier Assessment Tool. This instrument requires individual assessment by a person reviewing a patient's case, and this person must be able to account for the duration of the patient's conditions and the care team available to them. That would not be possible for researchers who only have access to large databases of hospital discharges or insurer claims. Our Specialty Intensity Score is intended for use by health services researchers who work with widely available datasets, like the Agency for Healthcare Research and Quality National Inpatient Sample database or state‐level hospital discharge datasets. Because the Specialty Intensity Score is a version of a comorbidity index, we wanted to compare it with existing measures of that kind, particularly those constructed using similar data. We therefore conducted a structured literature review in search of existing comorbidity instruments. Our findings appear in Table 1, and the process for identifying these measures is described in an appendix. All measures in the table are instruments that aggregate patient comorbidities into scores that predict patient outcomes or usage patterns.
None of the existing comorbidity scores serves as a proxy for the amount of coordination patients will need, and none uses the number of unique physician specialties as the response variable of interest. Most of the scores were constructed using mortality or prognosis as the main response variable, with various patient bases and techniques for score design. The most commonly cited such score is the Charlson Comorbidity Index. A number of the scores were utilization based, but measured utilization by cost, hospitalization, readmission, or resource use, rather than by the number of medical specialties involved in a patient's care. No existing comorbidity instrument was explicitly designed to estimate the complexity of a patient's interaction with the health care system, like the Specialty Intensity Score that we develop in this article.
TABLE 1

Comorbidity scores in the literature

Comorbidity Score or Index | Year of Publication | Pharmacy‐Based Data? | Response Variable(s) of Interest | Target Patient Base
Cumulative Illness Rating Scale 25 | 1968 | No | Degree of impairment | General
Kaplan/Feinstein index 26 | 1974 | No | Mortality | Patients with diabetes mellitus
Charlson comorbidity index 27 | 1987 | No | Mortality | General
Diagnostic cost groups 28 | 1988 | No | Cost | General
Chronic disease score 29 | 1992 | Yes | Physician ratings of physical disease severity, patient‐rated health status, mortality and hospitalization |
Ambulatory care groups 30 | 1992 | No | Ambulatory care resource use | General
Number of prescribed medications 29 | 1992 | Yes | Mortality, hospitalization, physician‐rated disease severity, patient‐rated health status | High utilizers of ambulatory health care
Satariano index 31 | 1994 | No | Survival | Breast cancer patients
Index of co‐existent diseases 32 | 1995 | No | Postoperative complications and 1‐year health‐related quality of life | Patients undergoing total hip replacement
Total illness burden index 33 | 1995 | No | Functional status outcomes | General
Elixhauser comorbidity score 34 | 1998 | No | Mortality | General
Silliman score 35 | 1999 | No | Mortality, physical function | Breast cancer patients
Klabunde outpatient and inpatient indices 36 | 2000 | No | Mortality | Breast cancer and prostate cancer patients
Geriatric index of comorbidity 37 | 2002 | No | Cognitive status, depressive symptoms, functional status, somatic health | Elderly patients
Frailty index 38 | 2002 | No | Frailty, ability to react to stress | General
Washington university head and neck comorbidity index 39 | 2002 | No | Prognosis | Patients with head and neck squamous cell carcinoma
RxRisk score 40 | 2003 | Yes | Cost | Veterans health administration population
Medication regimen complexity index 41 | 2004 | Yes | Outcomes measures | Patients with moderate to severe chronic obstructive pulmonary disease
Multipurpose Australian comorbidity scoring system 42 | 2005 | No | Mortality, hospital readmission | General
Simplified comorbidity index 43 | 2005 | No | Prognosis | General
Rheumatic disease comorbidity index 44 | 2007 | No | Mortality, hospitalization | Patients with rheumatoid arthritis
Pharmacy‐based comorbidity index 45 | 2013 | Yes | Hospitalization | General
CirCom score 46 | 2014 | No | Mortality | Cirrhosis patients
Multimorbidity index 47 | 2015 | No | Health‐related quality of life | Patients with rheumatoid arthritis
Drug derived complexity index 48 | 2016 | Yes | Death, hospitalization, readmission | General
Medicines comorbidity index 49 | 2017 | Yes | Death, hospitalization | General
Specialty intensity score | (this article) | No | Number of doctors with unique specialties | General
In the sections that follow, we describe the process for developing the Specialty Intensity Score, using ICD‐10‐CM codes to estimate the number of doctors with unique specialties that a patient sees. We then test whether the Specialty Intensity Score is empirically different from the commonly used Charlson Comorbidity Index by quantifying the predictive value of each in estimating a patient's probability of experiencing a patient safety incident in the hospital. Our results show that the two scores are sufficiently distinct, and that the Specialty Intensity Score has higher predictive power for safety incidents.

METHODS

Data

The study utilized data from the Centers for Medicare and Medicaid Services (CMS) Limited Data Set of Standard Analytical Files. This included Medicare Part A and Part B medical claims for a 5% random sample of Medicare beneficiaries. The specific data used in the project come from claims in the Inpatient and Carrier files for 2015. The inpatient file contained ICD‐9‐CM and ICD‐10‐CM diagnosis codes, which indicate the health conditions associated with patients' inpatient visits. The inpatient file also contained National Provider Identifier (NPI) codes for the attending, operating, and other physicians who provided services to patients during their inpatient visits. The carrier file contained NPI codes for referring and performing physicians. Specialties of the referring, performing, attending, operating, and other physicians were determined by mapping their NPI codes to their provider taxonomy codes, using data from the National Plan and Provider Enumeration System. The overarching goal of this article is to use patient diagnoses to estimate the number of specialties that will be required during a hospital visit and then to use this specialty estimate as a score of the patient's need for medical coordination (the SI score). We aim to validate the score by testing its success at predicting the occurrence of health complications. This analysis requires three types of data: (a) a list of diagnosis categories, (b) a measure of physician medical specialty usage, and (c) a measure of inpatient medical complications. The following three sections explain how we used 2015 CMS inpatient data to create these three types of variables. All analyses were conducted using R statistical software.
Before undertaking work with this deidentified administrative data, all members of the research team underwent training in research ethics through the Collaborative Institutional Training Initiative and reviewed institutional policies for work with data, as required by the Institutional Review Board at our institution. We ensured that our research design and processes met the protocol for working with patient administrative data, which required protection of data but did not require direct consent from patients in the data set.

Clustering diagnosis categories

With 14 500 ICD‐9‐CM diagnosis codes and 70 000 ICD‐10‐CM codes, it is neither practical nor insightful to use individual diagnosis codes as the explanatory variables in a model of specialty usage. Many codes are nearly identical to each other, so treating each code as a distinct covariate would consume substantial computational power for very little explanatory gain. Therefore, it is useful to group the individual diagnosis codes into clinically meaningful categories and to use these categories as the explanatory variables. For this article, we experimented with three different diagnosis categorizations. We ultimately chose the World Health Organization subchapter categorization, since it was constructed based on medical classification rather than cost considerations. See the appendix for the other classifications we considered.
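The grouping step above can be sketched in a few lines. This is an illustrative Python sketch, not the authors' R code: the two subchapter ranges shown are real ICD‐10‐CM blocks, but the full lookup table used in the article lives in its appendix.

```python
# Collapse raw ICD-10-CM codes into subchapter indicator variables.
# The two blocks below are real ICD-10-CM subchapters; the full table
# used in the article is in its appendix.
SUBCHAPTERS = {
    "E08-E13": ("Diabetes mellitus", "E08", "E13"),
    "I20-I25": ("Ischemic heart diseases", "I20", "I25"),
}

def to_subchapter(icd10_code):
    """Map a raw ICD-10-CM code (e.g. 'E119') to its subchapter key, if any."""
    category = icd10_code[:3].upper()          # first three characters = category
    for key, (_, lo, hi) in SUBCHAPTERS.items():
        if lo <= category <= hi:               # block membership by range
            return key
    return None                                # outside this illustrative table

def indicator_row(diagnosis_codes):
    """One patient's diagnosis list -> 0/1 subchapter indicator variables."""
    present = {to_subchapter(c) for c in diagnosis_codes}
    return {key: int(key in present) for key in SUBCHAPTERS}

row = indicator_row(["E119", "I2510", "J189"])  # diabetes, CAD, pneumonia
```

The resulting 0/1 indicators, one per subchapter, are the explanatory variables fed into the specialty-usage model; codes from the same block collapse into a single covariate rather than hundreds of near-duplicates.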

Specialty usage

The Centers for Medicare and Medicaid Services inpatient claims data include information on patients' utilization of referring, performing, attending, operating, and other physicians. A patient's usage of a physician is recorded using the physician's unique National Provider Identifier (NPI) code. Because each physician is allowed to list more than one specialty, and we do not have a way of knowing which specialty a patient saw that physician for, we devised several different methods for estimating the number of physician specialties that each patient utilized. We used an upper‐bound and lower‐bound approach and ran robustness checks across the different methods, which are described in more detail in an appendix. Table 3 gives descriptive statistics about the upper and lower bound of doctor specialties that a patient utilized. Table 4 compares results under both the lower‐bound and upper‐bound estimates of physician specialties utilized.
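The bounding approach just described can be sketched as follows. This is a hedged Python illustration: the NPI-to-specialty pairs are made up, whereas in the article they come from the National Plan and Provider Enumeration System taxonomy data.

```python
# Lower vs. upper bound on the number of unique specialties a patient used.
# Lower bound: count only each physician's primary specialty.
# Upper bound: count every specialty any physician lists.
def specialty_counts(npis, taxonomy):
    """taxonomy maps NPI -> (primary_specialty, secondary_specialty or None)."""
    primary = {taxonomy[npi][0] for npi in npis}
    all_listed = set()
    for npi in npis:
        all_listed.update(s for s in taxonomy[npi] if s is not None)
    return len(primary), len(all_listed)

# Hypothetical NPI records for one inpatient visit (illustration only).
taxonomy = {
    "111": ("Cardiology", None),
    "222": ("Internal Medicine", "Nephrology"),
    "333": ("Cardiology", "Critical Care"),
}
lower, upper = specialty_counts(["111", "222", "333"], taxonomy)
```

Because two physicians here share the primary specialty Cardiology, the lower bound deduplicates them, while the upper bound adds each secondary specialty as well.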
TABLE 3

Descriptive statistics

Statistic | Lower Bound Specialty Count | Upper Bound Specialty Count | Patient Safety Event Indicator | Charlson Comorbidity Index
Mean | 4.04 | 4.37 | 0.002398 | 2.55
Median | 4 | 4 | 0 | 2
Variance | 3.10 | 3.78 | 0.002393 | 4.60
Minimum | 0 | 0 | 0 | 0
Maximum | 15 | 16 | 1 | 19

Note: N = 277 264.

TABLE 4

LASSO‐penalized Poisson model summaries

Model summary | Lower Bound Specialty Count | Upper Bound Specialty Count
Number of predictors | 127 | 95
Lambda | 0.00992 | 0.01982
Root mean squared error | 1.548 | 1.696
Root mean squared error/standard deviation | 0.879 | 0.872

Inpatient complications

We utilized the Agency for Healthcare Research and Quality's Patient Safety Indicators to estimate the number of medical complications each patient experienced while in the hospital. Table 2 gives the list of Patient Safety Indicators that we included in the study. Table 3 provides descriptive statistics on the prevalence of patient safety events, as observed in our data.
TABLE 2

Patient safety indicators

PSI 02: Death in low‐mortality diagnosis‐related groups
PSI 03: Pressure ulcer
PSI 06: Iatrogenic pneumothorax
PSI 07: Central venous catheter‐related blood stream infection
PSI 08: In hospital fall with hip fracture
PSI 09: Perioperative hemorrhage or hematoma
PSI 10: Postoperative acute kidney injury requiring dialysis
PSI 11: Postoperative respiratory failure
PSI 12: Perioperative pulmonary embolism or deep vein thrombosis
PSI 13: Postoperative sepsis
PSI 14: Postoperative wound dehiscence

Methodological approach

To predict the average number of specialist doctors utilized, using the diagnosis subchapter indicator variables as predictors, the first stage of our analysis employed a least absolute shrinkage and selection operator (LASSO) penalized Poisson model (Table 4). This approach is appropriate and valuable in our context for several reasons. First, the Poisson model assumes that the response variable is count data, as opposed to binary or continuous. Our response variable, the number of specialties seen in an inpatient visit, is a count variable. The Poisson model also assumes that the mean and variance of the response are roughly equal. Therefore, the Poisson generalized linear model is appropriate for our specialty count variables. Second, using a shrinkage method is appropriate because our data are very wide: there are 279 predictors. Generalized linear models tend to have problems with multicollinearity when there are so many predictors. LASSO combats this problem by both shrinking the size of predictor coefficients and performing subset selection.

In the second stage of our analysis, we evaluate the validity of the SI score by testing its power in predicting the occurrence of patient safety events in a logistic regression. We use residual deviance and the Akaike Information Criterion as metrics of the explanatory power of our model. After estimating the simple logistic regression, we introduce the Charlson index as another covariate in the logistic regression, for comparison against the SI score. We use likelihood ratio tests to determine if the Charlson explains any additional variation in patient safety events and variance inflation factor (VIF) tests to determine if the Charlson and SI score are significantly collinear. We carry out this entire process twice: once for the SI score created using the lower bound specialty count variable and once for the SI score created using the upper bound.
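The first-stage model can be illustrated with a minimal proximal-gradient (ISTA) sketch of an L1-penalized Poisson regression. This is a stand-in for the glmnet-style LASSO Poisson fit the authors ran in R, on synthetic data: only the first two of five invented "subchapter" indicators actually drive the specialty counts, so the soft-thresholding step should shrink the irrelevant coefficients toward zero.

```python
import numpy as np

# Synthetic data: binary diagnosis indicators and Poisson specialty counts.
rng = np.random.default_rng(0)
n, p = 2000, 5
X = rng.integers(0, 2, size=(n, p)).astype(float)   # diagnosis indicators
true_w = np.array([0.5, 0.3, 0.0, 0.0, 0.0])
y = rng.poisson(np.exp(0.8 + X @ true_w))           # specialty counts

# ISTA: gradient step on the Poisson negative log-likelihood, then an
# L1 soft-threshold, which both shrinks coefficients and zeroes some out.
w, b, lam, lr = np.zeros(p), 0.0, 0.02, 0.05
for _ in range(3000):
    mu = np.exp(b + X @ w)                          # predicted mean counts
    grad_w = X.T @ (mu - y) / n                     # NLL gradient wrt w
    b -= lr * np.mean(mu - y)                       # intercept (unpenalized)
    w -= lr * grad_w
    w = np.sign(w) * np.maximum(np.abs(w) - lr * lam, 0.0)  # soft-threshold

si_scores = np.exp(b + X @ w)   # predicted specialty usage = the SI score
```

The exponentiated linear predictor at the end is exactly how the article turns fitted coefficients into an SI score for each patient.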

RESULTS

Our analysis yielded two results. The first is the generation of the Specialty Intensity Score, which is a measure that we designed for health services researchers to utilize in their own studies, particularly studies that involve multiple chronic condition patients. The second result of our study estimates the power of the Specialty Intensity Score for predicting the occurrence of a negative patient safety event during a hospital stay, and the comparison of that score to the existing Charlson Comorbidity Score.

Predicting specialty usage with diagnosis subchapters

Researchers can create SI Scores for patients in their own data using Table C1 from our appendix. To calculate the SI Score for a given patient, we exponentiate the sum of the intercept and the coefficients of the patient's diagnosis subchapters. For example, for a patient with diagnoses in subchapters 59, 111, and 116, we exponentiate the sum of 1.0826 (the intercept), 0.0422 (the Sub59 coefficient), 0.0043 (the Sub111 coefficient), and 0.0390 (the Sub116 coefficient). As a result of this calculation, we estimate that this type of patient will use an average of 3.216 specialties. This is the patient's SI score using the lower bound specialty count variable. The lower bound specialty count model achieves a root mean squared error of 1.548 on the holdout data. This means that if we only count physicians' primary specialty in our calculation of patient specialty usage, our model's predictions of specialty usage are off by an average of 1.548 from the patient's actual usage. For reference, the SD of the SpecCount1 variable in the holdout data is 1.761. Because our model's root mean squared error is 0.879 times the SD, we can conclude that our model outperforms the naive estimator that predicts the mean for every patient.
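The worked example above translates directly into code. The intercept and three subchapter coefficients below are the values quoted in the text; the full coefficient table is in the article's appendix (Table C1), and this Python sketch simply reproduces the exponentiation step.

```python
from math import exp

# Intercept and subchapter coefficients quoted in the text (lower bound model).
INTERCEPT = 1.0826
COEF = {"Sub59": 0.0422, "Sub111": 0.0043, "Sub116": 0.0390}

def si_score(subchapters, intercept=INTERCEPT, coef=COEF):
    """SI score = exp(intercept + sum of the patient's subchapter coefficients)."""
    return exp(intercept + sum(coef[s] for s in subchapters))

score = si_score(["Sub59", "Sub111", "Sub116"])   # about 3.216 specialties
```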

Predicting patient safety event occurrence with predicted specialty usage

Although both the SI score and Charlson index are significant predictors of patient safety events, the models show that the SI score is the more important one, both in terms of coefficient size and deviance reduction (Table 5). This was expected; the Patient Safety Indicators measure iatrogenic hospital complications, which are problems with the hospital's care, not problems with the patient's body. It is intuitive that these complications would be more strongly associated with a score of care coordination need, such as the SI score, than a score of biological frailty, such as the Charlson.
TABLE 5

Logistic regression model summaries

Term/metric | Patient Safety Event ~ SI Score 1 | Patient Safety Event ~ SI Score 1 + Charlson | Patient Safety Event ~ SI Score 2 | Patient Safety Event ~ SI Score 2 + Charlson
Intercept | −9.85*** | −9.76*** | −9.75*** | −9.66***
SI score | 0.87*** | 0.92*** | 0.79*** | 0.83***
Charlson | | −0.11*** | | −0.11***
Residual deviance | 4567.9 | 4548.3 | 4582.7 | 4562.9
AIC | 4571.9 | 4554.3 | 4586.7 | 4568.9
VIF | | 1.09625 | | 1.10069

Note: *P < .05, **P < .01, ***P < .001.

Whether we used the lower or upper bound of specialty usage, the logistic regressions demonstrated that the SI score and Charlson index are significant, unique covariates of patient safety event occurrence. This shows that our analysis is robust to the inclusion of physicians' secondary specialty in the specialty count calculation. That being said, we recommend that researchers use SI Score 1 over SI Score 2. This choice makes intuitive and empirical sense for several reasons. First, patients are more likely to use just a physician's primary specialty than they are to use both the primary and secondary specialties during a visit. Thus, the likelihood that we are overcounting by including both physician specialties in a calculation of total patient specialty usage is greater than the likelihood that we are undercounting by disregarding secondary specialties. Second, because we treat the SI score's ability to explain patient safety events as the measure of its validity as a proxy for care coordination need, we want to choose the version of the score that has the strongest relationship with patient safety event occurrence. The logistic models with SI Score 1 achieve lower deviances and Akaike Information Criterion values than the models with SI Score 2, both with and without the inclusion of the Charlson index. Thus, SI Score 1 is empirically the better proxy for a patient's need for care coordination. For these reasons, we recommend using SI Score 1, and we include the coefficient table for the SI Score 1 model in the appendix. To determine if the Charlson explains a significant amount of new variation in the probability of patient safety events when introduced into these logistic regressions, we use variance inflation factors (VIFs) and likelihood ratio tests (LRTs). The VIF helps us understand if there is an excessive amount of multicollinearity between the SI score and Charlson variables.
In other words, it indicates whether the SI score and Charlson index are distinct scores. If the VIF is very high, then there is no need for researchers to account for both scores, because they are very similar. However, if the VIF is low, then the two indexes are unique. In the logistic regressions with the Charlson, the VIF ranges from 1.096 with SI Score 1 to 1.101 with SI Score 2, both well below the rule-of-thumb threshold of 5. We conclude that the SI score and Charlson index are distinct scores, each offering unique information about inpatient visits. VIFs can show whether multicollinearity is an issue, but they cannot show the usefulness of the SI score and Charlson index in explaining patient safety event occurrence. For this, we use LRTs. These tests compare the residual deviance of the models without the Charlson to the models including it. If adding the Charlson to the logistic regression significantly reduces the residual deviance, then we can conclude that the Charlson explains a significant amount of additional variation in the probability of a patient safety event. For the SI Score 1 model, the difference in deviance is 19.59 (P < 0.001). For the SI Score 2 model, the deviance difference is 19.83 (P < 0.001). Thus, we conclude that the Charlson contributes significant additional explanatory power to models predicting patient safety events.
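The two diagnostics can be sketched on synthetic stand-ins. This Python illustration (not the authors' R code) computes a VIF as 1/(1 − R²) from regressing one predictor on the other, and an LRT statistic as the difference in residual deviances taken from Table 5, compared with the chi-square critical value for one added parameter.

```python
import numpy as np

# Synthetic, mildly correlated stand-ins for the SI score and Charlson index.
rng = np.random.default_rng(1)
n = 5000
si = rng.normal(2.0, 1.0, n)
charlson = 0.3 * si + rng.normal(0, 1.0, n)

# VIF of one predictor against the other: 1 / (1 - R^2).
A = np.column_stack([np.ones(n), charlson])
coef, *_ = np.linalg.lstsq(A, si, rcond=None)
resid = si - A @ coef
r2 = 1 - resid.var() / si.var()
vif = 1.0 / (1.0 - r2)          # mild collinearity -> VIF well below 5

# Likelihood-ratio statistic: deviance(reduced) - deviance(full), here
# using the SI Score 1 deviances reported in Table 5. One added parameter
# means a chi-square critical value of about 3.84 at the 5% level.
dev_without_charlson, dev_with_charlson = 4567.9, 4548.3
lrt_stat = dev_without_charlson - dev_with_charlson   # about 19.6 > 3.84
```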

LIMITATIONS

Our use of the Centers for Medicare and Medicaid Services claims dataset limits the study in several ways. All patients in our dataset are Medicare patients, who are older and sicker than the general population. Optimal care coordination for Medicare patients may be systematically different from optimal care coordination for younger patients. Older adults may be less likely to see a specialist for additional conditions than younger people, particularly if some of their diagnoses are long‐standing problems that they have dealt with separately in the past. On the other hand, this population may be more likely to see a specialist than younger patients because they are more likely to be insured. Therefore, our estimates could be biased in either direction relative to the U.S. population as a whole. The fact that our data came from 2015 may also make the score less valid as time goes on. As medical guidelines change over time, doctors may develop different practices for recommending when a patient should consult a new specialist. Also, changes in telemedicine and coordination practices resulting from the COVID‐19 pandemic may dramatically alter the number of specialists that patients are typically expected to see, rendering our scoring system outdated. There are several assumptions that we had to make when constructing our measure. First, we assume that patients saw a doctor from every medical specialty whose expertise they needed. This assumption could be violated by good care coordination practices that some organizations engage in. For example, if a patient's primary care doctor made phone calls to specialists to get advice for that patient, then that patient would not appear to have consulted those doctors in our dataset, and we would underestimate the number of areas of medical expertise they needed.
Conversely, we assumed that patients needed special medical expertise from every doctor that they saw. It is possible that some of these doctors were checking in on the patients for routine medical check‐ups, unrelated to their specialties. Finally, we were unable to observe which specialty the patient was utilizing. Doctors could list up to two specialties and we had no way of knowing which of the two specialties was relevant to the patient. To account for this, we devised a maximum estimate of doctors seen and a minimum estimate and compared these approaches, as described in previous sections.

DISCUSSION

In this article, we have developed the Specialty Intensity Score, which uses ICD‐10‐CM codes to score a patient's need for medical expertise drawn from different, unique medical specialties. We have done this by using clusters of ICD‐10‐CM codes to predict the number of specialist doctors from unique specialties that a patient sees during a hospital visit. To test the validity of our Specialty Intensity Score, we checked whether the score had power in predicting a patient's probability of experiencing a patient safety event, and whether that predictive power was independent of the Charlson comorbidity index. Our measure has significantly more predictive power in estimating a patient's probability of a patient safety event than the existing Charlson comorbidity index. This finding validates the need for our score as an independent source of information for researchers exploring issues relating to complex hospital patients with many comorbidities. Our Specialty Intensity Score adds something unique relative to the existing comorbidity measures listed in Table 1. Scores developed using mortality, prognosis, functionality, or patient health theoretically capture a patient's biological vulnerability. Scores using utilization, such as resource use, cost, and hospitalization, capture the intensity of investment, but not the complexity of care. Our score, on the other hand, captures an important feature of the nature of the patient's interaction with the health care system: how many doctors are likely to be needed for their care. Therefore, our measure is uniquely positioned for studying care coordination when combined with coordination‐sensitive outcomes. Patients needing to draw medical knowledge from a broader range of medical specialties will either need to see many different specialist doctors or else find some alternative way of getting personalized medical information from those specialties.
For example, one alternative approach would be for the patient to see a primary care doctor or hospitalist and for that hospitalist to consult separately with specialist doctors as needed. Another model would be for a team of specialists to each meet with the patient and also meet separately as a team with one another, as has sometimes been practiced at the Mayo Clinic. It might also be possible for electronic medical records to facilitate coordination across a team of practitioners, or for a non‐MD patient advocate to assist in the coordination process. Some researchers have proposed the use of systems engineering to facilitate and design optimal transfer of medical information across multiple disciplines of expertise. A systems engineer, for example, might put together checklists for transfer of information between patients or might determine which specialist doctor should be seen first or which specialist's advice should override another's when there is a conflicting recommendation. The Specialty Intensity Score is designed to be used in other studies of both the efficiency and the clinical effectiveness of different care coordination practices. By identifying which patients are the most likely to need knowledge from a broader spectrum of medical specialties, our score can allow researchers to identify the patients most likely to be impacted by good or bad coordination practices and to look at the effects of particular practices on those patients separately from patients with less need for coordination.

FUNDING

This work was supported by St. Olaf College, by the Collaborative Undergraduate Research and Inquiry program at St. Olaf College and by the Frank Gary Endowment of St. Olaf College. The funding sources had no involvement in the study design; collection, analysis, and interpretation of data; writing of the report; or the decision to submit the report for publication.

AUTHOR CONTRIBUTIONS

Conceptualization and Supervision: Ashley Hodgson and Thomas Bernardin. Funding acquisition and Resources: Thomas Bernardin. Formal Analysis and Investigation: Ashley Hodgson, Thomas Bernardin, Benjamin Westermeyer, Ahmed Noman, Ella Hagopian, Tyler Radtke. Methodology and Visualization: Thomas Bernardin, Benjamin Westermeyer, Ahmed Noman, Ella Hagopian, Tyler Radtke. Data curation and Software: Benjamin Westermeyer, Ahmed Noman, Ella Hagopian, Tyler Radtke. Project administration: Ashley Hodgson and Thomas Bernardin. Writing–Original Draft Preparation: Ashley Hodgson and Benjamin Westermeyer. Writing–Review and Editing: Ashley Hodgson, Benjamin Westermeyer, and Tyler Radtke. All authors have read and approved the final version of the manuscript. Ashley Hodgson had full access to all of the data in this study and takes complete responsibility for the integrity of the data and the accuracy of the data analysis.

TRANSPARENCY STATEMENT

Ashley Hodgson affirms that this manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

CONFLICT OF INTEREST STATEMENT

The authors declare no conflicts of interest.

SUPPORTING INFORMATION

Appendix S1: Supporting Information.

1.  Early results from adoption of bundled payment for diabetes care in the Netherlands show improvement in care coordination.

Authors:  Dinny H de Bakker; Jeroen N Struijs; Caroline B Baan; Joop Raams; Jan-Erik de Wildt; Hubertus J M Vrijhoef; Frederik T Schut
Journal:  Health Aff (Millwood)       Date:  2012-02       Impact factor: 6.301

2.  Chronic care improvement in primary care: evaluation of an integrated pay-for-performance and practice-based care coordination program among elderly patients with diabetes.

Authors:  Peter J Fagan; Alyson B Schuster; Cynthia Boyd; Jill A Marsteller; Michael Griswold; Shannon M E Murphy; Linda Dunbar; Christopher B Forrest
Journal:  Health Serv Res       Date:  2010-09-17       Impact factor: 3.402

3.  A multipurpose comorbidity scoring system performed better than the Charlson index.

Authors:  C D'Arcy J Holman; David B Preen; Natalya J Baynham; Judith C Finn; James B Semmens
Journal:  J Clin Epidemiol       Date:  2005-10       Impact factor: 6.437

4.  Perceptions of Health Care Transition Care Coordination in Patients With Chronic Illness.

Authors:  Monika Lemke; Rachel Kappel; Robert McCarter; Lawrence D'Angelo; Lisa K Tuchman
Journal:  Pediatrics       Date:  2018-04-12       Impact factor: 7.124

5.  Comorbidity measures for use with administrative data.

Authors:  A Elixhauser; C Steiner; D R Harris; R M Coffey
Journal:  Med Care       Date:  1998-01       Impact factor: 2.983

6.  Engineering safer care coordination from hospital to home: lessons from the USA.

Authors:  Partha Das; James Benneyan; Linda Powers; Matthew Carmody; Joanne Kerwin; Sara Singer
Journal:  Future Healthc J       Date:  2018-10

7.  Development and validation of a pharmacy-based comorbidity measure in a population-based automated health care database.

Authors:  Yaa-Hui Dong; Chia-Hsuin Chang; Wen-Yi Shau; Raymond N Kuo; Mei-Shu Lai; K Arnold Chan
Journal:  Pharmacotherapy       Date:  2013-02       Impact factor: 4.705

8.  Development and validation of a Medicines Comorbidity Index for older people.

Authors:  Sujita W Narayan; Prasad S Nishtala
Journal:  Eur J Clin Pharmacol       Date:  2017-09-11       Impact factor: 2.953

9.  Development and validation of the medication regimen complexity index.

Authors:  Johnson George; Yee-Teng Phun; Michael J Bailey; David C M Kong; Kay Stewart
Journal:  Ann Pharmacother       Date:  2004-07-20       Impact factor: 3.154

10.  Coordination versus competition in health care reform.

Authors:  Katherine Baicker; Helen Levy
Journal:  N Engl J Med       Date:  2013-08-14       Impact factor: 91.245

