Literature DB >> 28708848

Predicting all-cause risk of 30-day hospital readmission using artificial neural networks.

Mehdi Jamei, Aleksandr Nisnevich, Everett Wetchler, Sylvia Sudat, Eric Liu.

Abstract

Avoidable hospital readmissions not only contribute to the high costs of healthcare in the US, but also have an impact on the quality of care for patients. Large scale adoption of Electronic Health Records (EHR) has created the opportunity to proactively identify patients with high risk of hospital readmission, and apply effective interventions to mitigate that risk. To that end, in the past, numerous machine-learning models have been employed to predict the risk of 30-day hospital readmission. However, the need for an accurate and real-time predictive model, suitable for hospital-setting applications, still exists. Here, using data from more than 300,000 hospital stays in California from Sutter Health's EHR system, we built and tested an artificial neural network (NN) model based on Google's TensorFlow library. Through comparison with other traditional and non-traditional models, we demonstrated that neural networks are strong candidates to capture the complexity and interdependency of various data fields in EHRs. LACE, the current industry standard, showed a precision (PPV) of 0.20 in identifying high-risk patients in our database. In contrast, our NN model yielded a PPV of 0.24, a 20% improvement over LACE. Additionally, we discussed the predictive power of Social Determinants of Health (SDoH) data, and presented a simple cost analysis to assist hospitalists in implementing helpful and cost-effective post-discharge interventions.


Year:  2017        PMID: 28708848      PMCID: PMC5510858          DOI: 10.1371/journal.pone.0181173

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


Introduction

Since the Affordable Care Act (ACA) was signed into law in 2010, hospital readmission rates have received increasing attention as both a metric for the quality of care and a savings opportunity for the American healthcare system [1]. Per the American Hospital Association, the national readmission rate finally fell to 17.5% in 2013 after holding at approximately 19% for several years [2]. Hospital readmissions cost more than $17 billion annually [3]. According to the Medicare Payment Advisory Committee (MedPAC), 76% of hospital readmissions are potentially avoidable [4]. In response, the ACA has required the Centers for Medicare and Medicaid Services (CMS) to reduce payments to hospitals with excess readmissions [5]. These penalties should be put in the context of a larger shift in healthcare from the current fee-for-service payment model to a more patient-centered, value-based payment model. The formation of Accountable Care Organizations (ACOs) and CMS’ Quality Payment Program are examples of this trend, which has created financial incentives for hospitals and care providers to address the readmission problem more systematically. Before establishing targeted intervention programs, it is important to first identify those patients with a high risk of readmission. Fortunately, the widespread adoption of EHR systems has produced a vast amount of data that could help predict patients’ risk of future readmissions. Numerous attempts to build such predictive models have been made [6-12].
However, the majority of them suffer from at least one of the following shortcomings: (1) the model is not predictive enough compared to LACE [11], the industry-standard scoring model [13], (2) the model uses insurance claim data, which would not be available in a real-time clinical setting [6,7], (3) the model does not consider social determinants of health (SDoH) [13,8], which have proven to be predictive [14], (4) the model is limited to a particular medical condition, and thus, limited in scope [9,10]. To address these shortcomings, we built a model to predict all-cause 30-day readmission risk, and added block-level census data as proxies for social determinants of health. Additionally, instead of using insurance claims data, which could take up to a month to process, we built our model on the data available during the inpatient stay or at the time of discharge. Generally, using real-time EHR data allows models to be employed in hospital setting applications. Particularly, the authors are interested in applications of this predictive model in supporting data-driven post-discharge interventions to mitigate the risk of hospital readmission.

Methods

Ethics

This study was conducted using health record data (without patient names) taken from 20 hospitals across Sutter Health, a large nonprofit hospital network serving Northern California. The Institutional Review Board (IRB) of Sutter Health (SH IRB # 2015.084EXP RDD) approved the study.

Data preparation

Electronic health records corresponding to 323,813 inpatient stays were extracted from Sutter Health’s Epic electronic health record system. Table 1 shows a summary of the population under study. We had access to all Sutter EHR data from 2009 through the end of 2015. Since many hospitals only recently completed their EHR integration, some 80% of the data comes from 2013–2015 (Fig 1). To ensure data consistency, we limited the hospitals under study to those with over 3,000 inpatient records and excluded skilled nursing and other specialty facilities. Fig 2 shows the total number of records for each hospital and their respective readmission rates.
Table 1

Summary of the population under study.

Variable | All hospital visits (n = 335,815) | Visits resulting in 30-day readmission (n = 32,718) | Visits not resulting in 30-day readmission (n = 303,097)
Admission source (%)
  Home | 93.0 | 91.8 | 93.1
  Outpatient | 0.1 | 0.1 | 0.1
  Transfer | 5.1 | 5.9 | 5.0
  Other | 1.8 | 2.1 | 1.7
Admission type (%)
  Elective | 27.3 | 11.7 | 29.0
  Emergency | 43.2 | 58.2 | 41.6
  Urgent | 28.2 | 29.5 | 28.0
  Other | 1.4 | 0.6 | 1.4
Age (%)
  0–44 | 29.6 | 15.1 | 31.1
  45–64 | 27.2 | 30.0 | 26.9
  65–84 | 31.5 | 39.0 | 30.7
  85+ | 11.7 | 15.9 | 11.3
Alcohol users (%) | 28.3 | 25.3 | 28.6
Charlson Comorbidity Index, median (IQR) | 1.0 (4.0) | 4.0 (4.0) | 1.0 (3.0)
Discharge location (%)
  Home or self care (routine) | 70.4 | 56.2 | 71.9
  Home under care of home health service organization | 15.0 | 22.3 | 14.2
  SNF | 14.6 | 21.5 | 13.9
Discharge time (%)
  Morning (8:00 AM–12:59 PM) | 25.9 | 19.1 | 26.7
  Afternoon (1:00 PM–5:59 PM) | 61.4 | 65.8 | 60.9
  Evening (6:00 PM–7:59 AM) | 12.6 | 15.1 | 12.4
Drug users (%) | 6.5 | 8.3 | 6.3
Female (%) | 61.9 | 54.6 | 62.7
Hispanic of any race (%) | 17.5 | 13.8 | 17.9
Insurance payer (%)
  Commercial | 46.1 | 36.6 | 47.1
  Medicare | 51.5 | 62.2 | 50.3
  Self-pay | 2.2 | 1.0 | 2.3
  Other | 0.2 | 0.2 | 0.2
Interpreter needed (%) | 9.4 | 8.8 | 9.5
LACE Score, median (IQR) | 6.0 (5.0) | 10.0 (5.0) | 6.0 (6.0)
Length of stay in days, median (IQR) | 3.0 (3.0) | 4.0 (5.0) | 3.0 (3.0)
Marital status (%)
  Single | 27.2 | 28.6 | 27.0
  Married/partner | 48.2 | 39.6 | 49.1
  Divorced/separated | 8.9 | 11.4 | 8.6
  Widowed | 14.8 | 19.9 | 14.3
  Other/unknown | 0.9 | 0.4 | 0.9
Previous emergency visits, mean (SD)
  In the past 3 months | 0.3 (1.0) | 0.7 (1.6) | 0.3 (0.9)
  In the past 6 months | 0.5 (1.5) | 1.1 (2.4) | 0.5 (1.4)
  In the past 12 months | 0.8 (2.4) | 1.7 (3.9) | 0.7 (2.1)
Previous inpatient visits, mean (SD)
  In the past 3 months | 0.3 (0.7) | 0.8 (1.3) | 0.2 (0.6)
  In the past 6 months | 0.4 (1.1) | 1.1 (2.0) | 0.3 (0.9)
  In the past 12 months | 0.6 (1.5) | 1.6 (3.0) | 0.5 (1.2)
Race (%)
  White | 61.9 | 61.9 | 61.9
  Black | 11.2 | 16.3 | 10.7
  Other | 25.9 | 21.2 | 26.4
Tabak Mortality Score, median (IQR) | 25.5 (15.6) | 32.0 (15.8) | 24.7 (14.9)
Tobacco users (%) | 12.4 | 15.2 | 12.1
Fig 1

Data breakdown by hospital admission year.

Fig 2

Total number of records for each hospital under study, and their respective readmission rates.

We studied all inpatient visits to all Sutter hospitals. Hospital transfers and elective admissions were excluded. With this method, a 30-day boolean readmission label was created for each hospital admission. In the current version of their EHR system, Sutter Health captures a few SDoH data fields, such as history of alcohol and tobacco use. We supplemented those data with block-level 2010 census data [15] by matching patients’ addresses. The Google Geocoding API was used to determine the coordinates of each patient’s home address, and a spatial join was performed with the open-source QGIS platform [16] to find respective census tract and block IDs. The data was transferred from Sutter to a HIPAA-compliant cloud service, where it was stored in a PostgreSQL database. An open-source framework [17], written in Python, was built to systematically extract features from the dataset. In total, 335,815 patient records with 1667 distinct features, comprising 15 feature sets, were extracted from the database, as summarized in Table 2.
Table 2

Summary of extracted feature categories, and two sample features per category.

Category | Count | Sample features
Encounter Reason | 604 | abscess, kidney_stone
Hospital Problems | 287 | hcup_category_cystic_fibro, hospital_problems_count
Procedures | 232 | px_blood_transf, px_c_section
Medications | 202 | inp_num_unique_meds, outp_med_antidotes
Provider | 119 | specialty_orthopedic_surgery, specialty_hospitalist_medical
Discharge | 46 | length_of_stay, disch_location_home_no_service
Socioeconomic | 44 | pct_married, median_household_income
Admission | 39 | admission_source_transfer, admission_type_elective
Lab Results | 26 | num_abnormal_results, tabak_very_low_albumin
Comorbidities | 19 | charlson_index, comor_chf
Basic Demographics | 16 | age, if_female
Health History | 11 | alcohol_no, tobacco_quit
Utilization | 10 | pre_12_month_inpatient, pre_6_month_inpatient
Vitals | 8 | bmi, pulse
Payer | 4 | insurance_type_medicare, insurance_type_self-pay
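The address-to-census-block matching described above can be sketched as a point-in-polygon membership test. The block ID and square polygon below are invented for illustration; the study actually used the Google Geocoding API and a QGIS spatial join against 2010 census geometries.

```python
# Toy sketch of the spatial-join step: assign a geocoded home address to a
# census block. Block ID and polygon are hypothetical.

def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the polygon of (x, y) vertices?"""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

# hypothetical census blocks: block ID -> polygon of (lon, lat) vertices
blocks = {
    "060816021001": [(-122.0, 37.0), (-122.0, 37.1), (-121.9, 37.1), (-121.9, 37.0)],
}

def census_block_for(lon, lat):
    """Return the ID of the census block containing the point, if any."""
    for block_id, poly in blocks.items():
        if point_in_polygon(lon, lat, poly):
            return block_id
    return None
```

In production, an R-tree spatial index (as used internally by QGIS) replaces the linear scan over blocks.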
Each type of feature (age, length of stay, etc.) was independently studied using Jupyter Notebook, an interactive Python tool for data exploration and analysis. Using the pandas [18] library, we explored the quality and completeness of the data for each feature, identified quirks, and came to a holistic understanding of the feature before using it in our models. Each feature-study notebook provided a readable document mixing code and results, allowing the research team to share findings with one another in a clear and technically reproducible way.
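The per-feature quality checks done in those notebooks can be sketched with pandas; the column names and values below are invented stand-ins for the 1,667-column extract.

```python
import pandas as pd

# Hypothetical mini-extract of the feature table (column names invented).
df = pd.DataFrame({
    "age": [54, 71, None, 63],
    "length_of_stay": [3, 5, 2, 1],
    "charlson_index": [1, 4, 0, 2],
})

# Fraction of non-missing values per feature (completeness check).
completeness = df.notna().mean()

# Basic distribution per feature (mean, spread, quartiles) to spot quirks.
summary = df.describe()
```

A notebook would pair such tables with histograms before the feature was admitted to the models.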

Model training and evaluation

Initially, we experimented with several classic and modern classifiers, including logistic regression, random forests [19], and neural networks. In each case, 5-fold cross-validation was performed, with 20% of the data held out from the model in each fold. We found that the neural network models heavily outperformed the other models in precision and recall, with the neural network model also being about 10 times faster to train than the random forest model, the second-best-performing model. Therefore, we focused on optimizing the neural network model. After evaluating a variety of neural network architectures, we found the best-performing model to be a two-layer neural network containing one dense hidden layer half the size of the input layer, with dropout nodes between all layers to prevent overfitting. Our model architecture can be seen in Fig 3. To train the neural network, we used the keras framework [20] on top of Google’s TensorFlow [21] library. We trained in batches of 64 samples using the Adam optimizer [22], limiting training to 5 epochs because further training tended to result in overfitting, as indicated by validation accuracy decreasing with each epoch while training loss continued to improve.
Fig 3

Neural network model architecture (note: layer sizes assume all features are used).
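As a rough illustration of this architecture, here is a minimal NumPy sketch of the forward pass: one dense hidden layer half the input size, dropout, and a sigmoid output. The weights are random and untrained; the actual model was built and trained with keras/TensorFlow as described above, and the layer sizes here are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative untrained weights; the hidden layer is half the input size,
# as in the architecture above (1667 inputs -> ~833 hidden with all features).
n_in, n_hidden = 100, 50
W1 = rng.normal(0.0, 0.1, (n_in, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 0.1, (n_hidden, 1));    b2 = np.zeros(1)

def predict(x, dropout_rate=0.5, training=False):
    h = relu(x @ W1 + b1)
    if training:
        # inverted dropout: zero random units, rescale the survivors
        mask = rng.random(h.shape) > dropout_rate
        h = h * mask / (1.0 - dropout_rate)
    return sigmoid(h @ W2 + b2)

risk = predict(rng.normal(size=(1, n_in)))  # one risk probability in (0, 1)
```

At inference time dropout is disabled, so the predicted risk is deterministic given the weights.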

Initially, we trained the model on 1667 features extracted from the dataset. We then retrained the model using the top N features most correlated with 30-day readmission, for different values of N. As shown in Fig 4, the model achieved over 95% of the optimal precision when limited to the top 100 features, suggesting that 100 features is a reasonable cutoff for achieving near-optimal performance at a fraction of the training time and model size required for the full model. Table 3 summarizes the features most correlated with readmission risk.
Fig 4

Comparison of NN model performance (with retrospective validation) vs number of features.

Table 3

Top most correlated features with 30-day readmission.

Category: Feature | Linear Correlation
Utilization: # of inpatient visits in the past 12 months | 0.226
Utilization: # of inpatient visits in the past 6 months | 0.224
Comorbidities: Charlson Comorbidity Index (CCI) | 0.215
Utilization: # of inpatient visits in the past 3 months | 0.210
Lab Results: # of lab results marked as ‘low’, ‘high’, or ‘abnormal’ | 0.197
Lab Results: total # of lab results conducted | 0.160
Lab Results: lab results component of Tabak Mortality Score | 0.157
Comorbidities: mild liver or renal disease | 0.149
Utilization: emergency visits component (“E”) of LACE score | 0.143
Comorbidities: congestive heart failure | 0.143
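The correlation-based feature ranking can be sketched with pandas on synthetic data. The column names and data below are invented; the study ranked all 1,667 features this way and kept the top N.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Synthetic stand-in feature table (column names hypothetical).
X = pd.DataFrame(
    rng.normal(size=(500, 5)),
    columns=["prior_inpatient", "charlson", "age", "bmi", "pulse"],
)
# Synthetic readmission label driven only by prior_inpatient.
y = (0.8 * X["prior_inpatient"] + rng.normal(size=500) > 0.5).astype(int)

# Rank features by |linear correlation| with the label; keep the top N.
corr = X.apply(lambda col: col.corr(y)).abs()
top_n = corr.sort_values(ascending=False).head(2).index.tolist()
```

Retraining on only the top-ranked columns trades a small amount of precision for much smaller models and training times, as Fig 4 shows for N = 100.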
Measuring a model’s performance cannot be completely separated from its intended use. While one metric, AUC, is designed to measure model behavior across the full range of possible operating points, in practice risk models are only ever used to flag a minority of the patient population, so the statistic is not fully relevant. Metrics like precision and recall require a yes/no intervention threshold before they can be computed, which we lack because this model is not slated for a specific clinical program. For simplicity, we assumed the model would be used to intervene on the 25% of patients with the highest predicted risk. We chose 25% because this is the fraction of patients that LACE naturally flags as high-risk, so we conservatively compare to LACE on its best terms. Additionally, we wanted to understand the predictive power of each feature set. To do so, we removed individual feature sets, one at a time, and compared the performance (in terms of AUC) with that of the best-performing model. Providers often want to focus their interventions on a specific patient population based on age, geography, or medical condition, so it is important to measure how well the model performs in each of those subpopulations. In addition, CMS has so far penalized hospitals for excessive readmissions of patients with heart failure (HF), chronic obstructive pulmonary disease (COPD), acute myocardial infarction (AMI), or pneumonia [5]. We compared the performance of our model against LACE in each of those subpopulations.
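The fixed-rate flagging evaluation described above can be sketched as follows: rank patients by predicted risk, flag the top fraction, and compute precision and recall for that flagged set. The scores and labels below are toy values.

```python
import numpy as np

def precision_recall_at_rate(scores, labels, rate=0.25):
    """Flag the top `rate` fraction of patients by predicted risk, then
    report precision (PPV) and recall (sensitivity) for that flagged set."""
    scores = np.asarray(scores)
    labels = np.asarray(labels, dtype=float)
    k = max(1, int(len(scores) * rate))
    flagged = np.argsort(scores)[::-1][:k]   # indices of the riskiest k
    tp = labels[flagged].sum()               # flagged patients readmitted
    return tp / k, tp / labels.sum()

# toy example: 4 discharges, flag the riskiest 25% (1 patient)
p, r = precision_recall_at_rate([0.9, 0.8, 0.3, 0.1], [1, 0, 1, 0])
```

AUC, by contrast, averages over every possible value of `rate`, which is why it can overstate performance at the single operating point a program actually uses.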

Cost savings analysis

The main objective of this research study is to build and pilot a predictive model that accurately identifies high-risk patients and supports the implementation of valuable and cost-effective post-discharge interventions. A cost-savings analysis can therefore assist decision makers in planning and optimizing hospital resources. The optimal intervention threshold for maximizing cost savings depends on (1) the average cost of a readmission, (2) the expected cost of intervention(s), and (3) the expected effectiveness of intervention(s). The expected savings from a given intervention strategy can then be calculated as follows:

Expected savings = (patients flagged × precision × intervention success rate × readmission cost) − (patients flagged × intervention cost)

Results

Table 4 compares the performance (assuming a 25% intervention rate) of our models and that of LACE when run on all data with 5-fold validation, using the metrics of precision (PPV), recall (sensitivity), and AUC (c-statistic).
Table 4

Comparison of the performance of our models with that of LACE, assuming a 25% intervention rate.

Model* | # Features | Precision | Recall | AUC | Training time** (sec) | Evaluation time** (sec)
2-layer neural network | 1667 | 24% | 60% | 0.78 | 2650 | 154
2-layer neural network | 500 | 22% | 61% | 0.77 | 396 | 31
2-layer neural network | 100 | 22% | 58% | 0.76 | 169 | 14
Random forest | 100 | 23% | 57% | 0.77 | 669 | 43
Logistic regression | 1667 | 17% | 41% | 0.66 | 60 | 4
Logistic regression | 100 | 21% | 52% | 0.72 | 17 | 0.1
LACE | 4 | 21% | 49% | 0.72*** | 0 | 0.2

*—Model parameters: neural network (as described in Methods section), random forest (1000 trees of max depth 8, with 30% of features in each tree), logistic regression (default parameters in scikit-learn package)

**—Per-fold training time was measured on a 2014 Macbook Pro with a 4-core 2.2 GHz processor and 16GB RAM. The neural network model ran on four cores, while the other models could only be run on a single core. Training was performed on 259,050 records and evaluation was performed on 64,763 records.

***—We computed the AUC for LACE by comparing the performance of LACE models at every possible threshold. However, LACE is normally used with a fixed threshold, so the given AUC overstates the performance of LACE in practice.

Any model trained on present data will always perform slightly worse on future data, as the world changes and the model’s assumptions become less accurate. To evaluate performance on future data, we trained our best-performing model, the two-layer neural network, on all patients’ data with a hospitalization event prior to 2015, and measured the performance of the model in predicting 30-day readmissions in 2015. As seen in Table 5, a slight reduction in precision (from 24% to 23%), relative to the model’s performance on all data, is observed.
Table 5

Performance of our model versus LACE on 2015 data when trained on data through 2014.

Model* | # Features | Precision | Recall | AUC | Training time** (sec)
2-layer neural network | all | 23% | 59% | 0.78 | 1040
LACE | 4 | 19% | 50% | 0.71*** | 0

*—Model parameters: neural network (as described in Methods section)

**—Per-fold training time was measured on a 2014 Macbook Pro with a 4-core 2.2 GHz processor and 16GB RAM. The neural network model ran on four cores, while the other models could only be run on a single core. Training was performed on 259,050 records and evaluation was performed on 64,763 records.

***—We computed the AUC for LACE by comparing the performance of LACE models at every possible threshold. However, LACE is normally used with a fixed threshold, so the given AUC overstates the performance of LACE in practice.

Fig 5 compares our model with LACE in four different age brackets. As the graph shows, the discriminatory power of the model decreases in older patients; however, it still outperforms LACE (+0.02 precision, +0.11 recall). Fig 6 compares the performance of the model in the top five Sutter Health hospitals by number of inpatient records. As seen in this graph, performance varies depending on the hospital location and the population it serves. Lastly, Fig 7 compares our model’s performance among subgroups with varying medical conditions. While the results suggest that the model performs slightly worse for those conditions, it is still superior to LACE (+0.03–0.05 precision, +0.02–0.12 recall).
Fig 5

Comparison of artificial neural network model with LACE in 4 different age brackets.

Fig 6

Comparison of the model performance among top five Sutter Health hospitals by the number of inpatient records.

Fig 7

Comparison of the neural network model’s performance among subgroups with varying medical conditions.

Due to the nonlinear relationships among different feature sets, it is virtually impossible to calculate the absolute contribution of individual feature sets to the model. However, we can approximate their effect by measuring model performance using all feature sets except one. The result of this experiment is shown in Table 6. As seen in the table, removing any single feature set, except Medications, Utilization, or Vitals, does not have a significant effect on model performance.
Table 6

Comparison of performance of each feature group on the neural network model, tested by withholding one feature group at a time and measuring the impact on model AUC.

Feature Group | Effect on AUC
Medications | +0.010
Utilization | +0.007
Vitals | +0.006
Lab Results | +0.003
Discharge | +0.003
Hospital Problems | +0.002
Provider | +0.001
Comorbidities | +0.001
Basic Demographics | +0.001
Payer | +0.000
Health History | +0.000
Admission | +0.000
Encounter Reason | −0.001
Socioeconomic | −0.002
Procedures | −0.005
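The leave-one-group-out measurement behind Table 6 can be sketched on synthetic data. The group names are borrowed from Table 6, but the data, the "model" (a simple sum of features), and the resulting effect sizes are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 2000

def auc(scores, labels):
    """Rank-based AUC: probability that a positive case outranks a negative."""
    ranks = np.empty(len(scores))
    ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n_pos, n_neg = pos.sum(), len(scores) - pos.sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

# Toy feature "groups": the synthetic outcome depends strongly on
# Utilization, weakly on Vitals, and not at all on Payer.
groups = {
    "Utilization": rng.normal(size=n),
    "Vitals": rng.normal(size=n),
    "Payer": rng.normal(size=n),
}
y = (1.2 * groups["Utilization"] + 0.3 * groups["Vitals"]
     + rng.normal(size=n) > 0).astype(int)

score_all = sum(groups.values())  # crude stand-in for the trained model
effect = {g: auc(score_all, y)
             - auc(sum(v for k, v in groups.items() if k != g), y)
          for g in groups}
# Withholding the informative group costs the most AUC.
```

In the study, each withholding required retraining the neural network; here the "retrained" model is just the sum of the remaining feature columns.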
For the cost-savings analysis, while the actual values may be difficult (or, in some cases, even impossible) to predict, we use the following values as an example: readmission cost: $5,000; intervention cost: $250; intervention success rate: 20%. Fig 8 shows the projected savings as a function of the intervention rate (the percentage of patients subjected to readmission-prevention interventions).
Fig 8

Projected savings as a function of the intervention rate, with the example parameters given for the cost-savings analysis in the Results section.
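With the example parameters above, the savings curve can be sketched as follows. The annual discharge volume and the precision values assumed at each intervention rate are illustrative, not the paper's figures.

```python
# Example parameters from the text: $5,000 per readmission, $250 per
# intervention, 20% intervention success rate. Discharge volume and the
# precision-at-rate values are hypothetical.
READMIT_COST, INTERV_COST, SUCCESS_RATE = 5000, 250, 0.20
N_DISCHARGES = 100_000  # assumed annual discharges

def projected_savings(intervention_rate, precision_at_rate):
    flagged = N_DISCHARGES * intervention_rate
    prevented = flagged * precision_at_rate * SUCCESS_RATE  # readmissions averted
    return prevented * READMIT_COST - flagged * INTERV_COST

# precision typically falls as we flag deeper into the risk-ranked list
for rate, ppv in [(0.10, 0.35), (0.25, 0.24), (0.50, 0.15)]:
    print(f"{rate:.0%}: ${projected_savings(rate, ppv):,.0f}")
```

Because precision decays with the intervention rate, savings peak at some intermediate rate and can turn negative when too many low-risk patients are flagged, which is the optimization Fig 8 illustrates.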

Discussion

The factors behind hospital readmission are numerous, complex, and interdependent. Although some factors, such as prior utilization, comorbidities, and age, are very predictive by themselves, improving predictive power beyond LACE requires models that capture the interdependencies and non-linearity of those factors more efficiently. Artificial neural networks (ANNs), by modeling nonlinear interactions between factors, provide an opportunity to capture those complexities. This nonlinear nature of ANNs enables us to harness more predictive power from the additional extracted EHR data fields beyond LACE’s four parameters. Furthermore, neural networks are compact and can be incrementally retrained on new data to avoid the “model drift” that occurs when a model trained on data too far in the past performs progressively worse on future data that follows a different pattern.

The TensorFlow framework provides several added benefits for training a readmission model. First, TensorFlow can run in a variety of environments, whether on CPUs, GPUs, or distributed clusters, so the same kind of model can be trained in a variety of hospital IT architectures and achieve optimal performance in each. Second, with the aid of high-level interfaces such as keras, TensorFlow can express neural network architectures in a very natural way, which enabled us to quickly experiment with different setups to find the ideal configuration for the problem. Finally, TensorFlow is an actively maintained open-source project, and its performance improves continually through contributions from the open-source machine-learning community.

A fair comparison of our model with results in the existing literature is not feasible, because the performance of readmission risk models varies tremendously between patient populations, and no previous readmission prediction work has been done on the Sutter Health patient population. Even the LACE model’s performance varies in the literature from 0.596 AUC [10] to 0.684 AUC [11], which illustrates the impact of patient population on the accuracy of readmission prediction. The performance of our model (as measured by precision, recall, and AUC) within patient subgroups tends to be worse than the performance of the same model within the whole patient population. Some of this drop can be explained by the fact that each subgroup presents a reduced feature set to our model; for example, age is no longer as predictive a feature when every patient in a subgroup has a similar age. Furthermore, our model tends to perform worst on the subgroups on which LACE also performs worst, such as patients aged 85+ (Fig 5) or patients with heart failure (Fig 7), suggesting that certain patient subpopulations have significantly less predictable readmission patterns than the general patient population.

We used two sources of SDoH features: health history questions (regarding tobacco, alcohol, and drug use) and block-level census data based on patient address. The health history features had some predictive value, with two of them (“no alcohol use” and “quit smoking”) among the top 100 features most linearly correlated with readmission risk. The census features were less predictive, with none in the top 100 and only a few (such as poverty rate and household income) in the top 200. Both sources suffered from drawbacks: the health surveys were brief and were incomplete for ~25% of patients, while the block-level census data describe only a patient’s neighborhood, not the patient themselves. For SDoH features to provide significant predictive value, they would have to be both comprehensive and individualized.

Since this study was conducted on EHR data from the Sutter Health network of hospitals in California, it does not capture potential out-of-network hospital readmissions. To address this limitation, the dataset could be supplemented with state- or national-level records of index hospital admissions to build a more comprehensive dataset.

Conclusions

In this study, we successfully trained and tested a neural network model to predict the risk of patients’ rehospitalization within 30 days of discharge. This model has several advantages over LACE, the current industry standard, and other models proposed in the literature, including (1) significantly better performance in predicting readmission risk, (2) being based on real-time EHR data, and thus applicable at the time of discharge from the hospital, and (3) being compact and resistant to model drift. Furthermore, to determine the classifier’s labeling threshold, we suggested a simple cost-savings optimization analysis. Further research is required to study the effect of more granular and structured social determinants of health data on the model’s predictive power. Some studies [23] have shown that natural language processing (NLP) techniques can be used to extract SDoH data from patients’ case notes; however, the most systematic method is to gather such data from SDoH screeners. Multiple initiatives [24] are currently underway to standardize SDoH screeners and integrate them into EHR systems. The importance of reducing hospital readmissions, and therefore of readmission risk assessment, is likely only to grow in the years to come. We believe that predictive analytics in general, and modern machine-learning techniques in particular, are powerful tools that should be fully exploited in this field.

Software release

The neural network model described in the paper, as well as the code to run it on EMR data, is available (under the Apache license) at https://github.com/bayesimpact/readmission-risk.
References (10 in total)

1.  An automated model to identify heart failure patients at risk for 30-day readmission or death using electronic medical record data.

Authors:  Ruben Amarasingham; Billy J Moore; Ying P Tabak; Mark H Drazner; Christopher A Clark; Song Zhang; W Gary Reed; Timothy S Swanson; Ying Ma; Ethan A Halm
Journal:  Med Care       Date:  2010-11       Impact factor: 2.983

2.  Risk prediction models for hospital readmission: a systematic review.

Authors:  Devan Kansagara; Honora Englander; Amanda Salanitro; David Kagen; Cecelia Theobald; Michele Freeman; Sunil Kripalani
Journal:  JAMA       Date:  2011-10-19       Impact factor: 56.272

3.  Introduction: CDC Health Disparities and Inequalities Report - United States, 2013.

Authors:  Pamela A Meyer; Paula W Yoon; Rachel B Kaufmann
Journal:  MMWR Suppl       Date:  2013-11-22

4.  A comparison of models for predicting early hospital readmissions.

Authors:  Joseph Futoma; Jonathan Morris; Joseph Lucas
Journal:  J Biomed Inform       Date:  2015-06-01       Impact factor: 6.317

5.  A predictive analytics approach to reducing 30-day avoidable readmissions among patients with heart failure, acute myocardial infarction, pneumonia, or COPD.

Authors:  Issac Shams; Saeede Ajorlou; Kai Yang
Journal:  Health Care Manag Sci       Date:  2014-05-03

6.  Mining high-dimensional administrative claims data to predict early hospital readmissions.

Authors:  Danning He; Simon C Mathews; Anthony N Kalloo; Susan Hutfless
Journal:  J Am Med Inform Assoc       Date:  2013-09-27       Impact factor: 4.497

7.  Derivation and validation of an index to predict early death or unplanned readmission after discharge from hospital to the community.

Authors:  Carl van Walraven; Irfan A Dhalla; Chaim Bell; Edward Etchells; Ian G Stiell; Kelly Zarnke; Peter C Austin; Alan J Forster
Journal:  CMAJ       Date:  2010-03-01       Impact factor: 8.262

8.  Hospital readmissions reduction program.

Authors:  Colleen K McIlvennan; Zubin J Eapen; Larry A Allen
Journal:  Circulation       Date:  2015-05-19       Impact factor: 29.690

9.  Data-driven decisions for reducing readmissions for heart failure: general methodology and case study.

Authors:  Mohsen Bayati; Mark Braverman; Michael Gillam; Karen M Mack; George Ruiz; Mark S Smith; Eric Horvitz
Journal:  PLoS One       Date:  2014-10-08       Impact factor: 3.240

10.  Development, Validation and Deployment of a Real Time 30 Day Hospital Readmission Risk Assessment Tool in the Maine Healthcare Information Exchange.

Authors:  Shiying Hao; Yue Wang; Bo Jin; Andrew Young Shin; Chunqing Zhu; Min Huang; Le Zheng; Jin Luo; Zhongkai Hu; Changlin Fu; Dorothy Dai; Yicheng Wang; Devore S Culver; Shaun T Alfreds; Todd Rogow; Frank Stearns; Karl G Sylvester; Eric Widen; Xuefeng B Ling
Journal:  PLoS One       Date:  2015-10-08       Impact factor: 3.240
