
Using Machine Learning Techniques to Predict Hospital Admission at the Emergency Department.

Georgios Feretzakis1,2, George Karlis3, Evangelos Loupelis1, Dimitris Kalles2, Rea Chatzikyriakou1, Nikolaos Trakas1, Eugenia Karakou1, Aikaterini Sakagianni1, Lazaros Tzelves1, Stavroula Petropoulou1, Aikaterini Tika1, Ilias Dalainas1, Vasileios Kaldis1.   

Abstract

Introduction: One of the most important tasks in the Emergency Department (ED) is to promptly identify the patients who will benefit from hospital admission. Machine Learning (ML) techniques show promise as diagnostic aids in healthcare. Aim of the study: Our objective was to find an algorithm using ML techniques to assist clinical decision-making in the emergency setting. Material and methods: We assessed the following features seeking to investigate their performance in predicting hospital admission: serum levels of Urea, Creatinine, Lactate Dehydrogenase, Creatine Kinase, C-Reactive Protein, Complete Blood Count with differential, Activated Partial Thromboplastin Time, D-Dimer, International Normalized Ratio, age, gender, triage disposition to ED unit and ambulance utilization. A total of 3,204 ED visits were analyzed.
Results: The proposed algorithms generated models which demonstrated acceptable performance in predicting hospital admission of ED patients. The range of F-measure and ROC Area values of all eight evaluated algorithms were [0.679-0.708] and [0.734-0.774], respectively. The main advantages of this tool include easy access, availability, yes/no result, and low cost. The clinical implications of our approach might facilitate a shift from traditional clinical decision-making to a more sophisticated model. Conclusions: Developing robust prognostic models with the utilization of common biomarkers is a project that might shape the future of emergency medicine. Our findings warrant confirmation with implementation in pragmatic ED trials.
© 2022 Georgios Feretzakis, George Karlis, Evangelos Loupelis, Dimitris Kalles, Rea Chatzikyriakou, Nikolaos Trakas, Eugenia Karakou, Aikaterini Sakagianni, Lazaros Tzelves, Stavroula Petropoulou, Aikaterini Tika, Ilias Dalainas, Vasileios Kaldis, published by Sciendo.

Keywords:  artificial intelligence; biomarkers; emergency department; emergency medicine; machine learning techniques

Year:  2022        PMID: 35950158      PMCID: PMC9097643          DOI: 10.2478/jccm-2022-0003

Source DB:  PubMed          Journal:  J Crit Care Med (Targu Mures)        ISSN: 2393-1817


Introduction

The Emergency Department (ED) represents a key element of any given healthcare facility and retains a high public profile. ED staff manage patients with a huge variety of medical problems and deal with all sorts of emergencies. ED congestion resulting in delays in care remains a frequent issue that prompts the development of tools for rapid triage of high-risk patients [1]. Moreover, it is well documented that timely interventions are critical for several acute diseases [2, 3]. One of the most commonly encountered ED priorities is to quickly identify those who will need hospital admission. Traditionally, this decision relies on clinical judgment aided by the results of laboratory tests. Human factors leading to diagnostic errors occur frequently and are associated with increased morbidity and mortality [4]. Machine Learning (ML) techniques show promise as diagnostic aids in healthcare and have sparked the discussion for their wider application in the ED [5]. Developing robust prognostic models with the utilization of common biomarkers to facilitate rapid and reliable decision-making regarding hospital admission of ED patients is a project that might shape the future of emergency medicine. However, relevant data from the ED is scarce. Recent studies have focused on clinical outcome and mortality prediction [6, 7]. We assessed biochemical markers and coagulation tests that are routinely checked in patients visiting the ED, seeking to investigate their performance in predicting whether the patients will be admitted to the hospital. Our aim is to find an algorithm using ML techniques to assist clinical decision-making in the emergency setting.

Materials and methods

This research is a retrospective observational study conducted in the ED of a public tertiary care hospital in Greece that has been approved by the Institutional Review Board of Sismanogleio General Hospital (Ref. No 15177/2020, 5969/2021). This study examines the performance of eight machine learning models based on data of the Biochemistry and Hematology Departments from ED patients. Blood samples were obtained for the measurement of biochemical and hematological parameters. The serum levels of Urea (UREA) [Normal Range (NR)=10-50 mg/dL-test principle: kinetic test with urease and glutamate dehydrogenase], Creatinine (CREA) (NR=0.5-1.5 mg/dL-kinetic colorimetric assay based on the Jaffé method), Lactate Dehydrogenase (LDH) (NR=135-225 U/L-UV assay), Creatine Kinase (CPK) (NR=25-190 U/L-UV assay), C-Reactive Protein (CRP) (NR < 6 mg/L-particle‑enhanced immunoturbidimetric assay) were measured using the Cobas 6000 c501 Analyzer (Roche Diagnostics, Mannheim, Germany). Complete blood count (CBC) samples were collected, and parameters such as White Blood Cell (WBC) (NR=4-11 K/μl-flow cytometry analysis), Neutrophil (NEUT) (NR=40-75 %-flow cytometry), Lymphocyte (LYM) (NR=20-40%-flow cytometry) and Platelet (PLT) (NR=150-400 K/μl-hydrodynamic focusing-flow cytometry) counts and Hemoglobin (HGB) (NR=12-17.5 g/dL-SLS method) were analyzed using the Sysmex XE 2100 Automated Hematology Analyzer (Sysmex Corporation, Kobe, Japan). Routine hemostasis parameters such as activated partial thromboplastin time (aPTT) (NR=24-39 sec-clotting method), DDimer (DD) (NR <500 μg/L-immunoturbidimetric assay), and International Normalized Ratio (INR) (NR=0.86-1.20-calculated) were determined in plasma using the BCS XP Automated Hemostasis Analyzer (Siemens Healthcare Diagnostics, Marburg, Germany). All raw data was retrieved from a standard Hospital Information System (HIS) and a Laboratory Information System (LIS). 
The analysis was performed using the Waikato Environment for Knowledge Analysis (WEKA) [8], a Java-based data mining workbench. The flow diagram of the study is depicted in Figure 1. A total of 3,204 ED visits were analyzed during the study period (14 March – 4 May 2019). The anonymized data set under investigation contains the eighteen features presented in Table 1.
Fig. 1

Patient flow diagram

Table 1

Features

Feature    Type                   Mean      Standard Deviation
CPK        numerical              179.155   1183.877
CREA       numerical              1.06      0.827
CRP        numerical              39.094    71.48
LDH        numerical              222.327   156.343
UREA       numerical              45.651    33.616
aPTT       numerical              34.227    11.443
DDIMER     numerical              1422.899  2522.921
INR        numerical              1.131     0.571
HGB        numerical              12.87     2.13
LYM        numerical              22.085    11.672
NEUT       numerical              69.478    13.083
PLT        numerical              252.467   87.814
WBC        numerical              9.617     5.153
Age        numerical; integer*    61.175    20.822
Gender     categorical {Male, Female}
ED Unit    categorical {Urology, Pulmonology, Internal Medicine, Otolaryngology, Triage, Cardiology, General Surgery, Ophthalmology, Vascular Surgery, Thoracic Surgery}
Ambulance  categorical {Yes, No}
Admission  categorical {Yes, No}

*Patients’ age has been rounded to the nearest whole number

To assess the performance of the models in WEKA (Smith and Frank 2016), we used a 10-fold cross-validation approach to avoid overfitting; cross-validation is widely regarded as a reliable way to assess the quality of results from machine learning techniques. WEKA [9, 10, 11] provides detailed results for the classifiers under investigation regarding the following evaluation measures:
a. TP Rate (or Recall), calculated as TP / (TP + FN)
b. FP Rate, calculated as FP / (FP + TN)
c. Precision, calculated as TP / (TP + FP)
d. F-Measure, calculated as 2 x Precision x Recall / (Precision + Recall)
e. MCC (Matthews Correlation Coefficient), calculated as (TP x TN - FP x FN) / sqrt((TP + FP)(TP + FN)(TN + FP)(TN + FN))
f. The area under the Receiver Operating Characteristics (ROC) curve (AUC)
g. The PRC area, where the precision-recall curve plots the relationship between precision and sensitivity.
Among the many algorithms evaluated for our research purposes, we present in this article only the eight best-performing ones, mainly in terms of ROC Area and F-Measure. During our experiments, we retained the default settings of all classification algorithms' original implementations provided by WEKA. Each algorithm was evaluated on two data sets: the original data set, including the missing values, and the data set in which the missing values were identified and replaced with appropriate values using WEKA's ReplaceMissingValues filter. Furthermore, since the proportion of patients in our data set who met clinical criteria for hospital admission (36.7%) is smaller than the proportion of those who did not (63.3%), we applied WEKA's ClassBalancer technique [8] to counter this class imbalance by reweighting the instances in the data set so that each class had the same total weight during model training.
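The evaluation measures listed above are standard confusion-matrix quantities. As a minimal pure-Python sketch (illustrative only, not WEKA code), they can be computed for one class of a binary problem as follows:

```python
import math

def binary_metrics(tp, fp, tn, fn):
    """Compute the per-class evaluation measures reported by WEKA
    from the four cells of a binary confusion matrix."""
    tp_rate = tp / (tp + fn)                      # Recall / sensitivity
    fp_rate = fp / (fp + tn)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * tp_rate / (precision + tp_rate)
    mcc_denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / mcc_denom if mcc_denom else 0.0
    return {"tp_rate": tp_rate, "fp_rate": fp_rate,
            "precision": precision, "f_measure": f_measure, "mcc": mcc}
```

WEKA reports these per class ("Yes", "No") and as a weighted average, which is how Tables A1-A18 are laid out.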
In our investigation, we evaluated a Naive Bayes classifier [12, 13], a multinomial logistic regression model with a ridge estimator [14], two boosting techniques, AdaBoost [15] and LogitBoost [16], Classification via Regression [17], a random forest [18], a bagging method [19], and a multilayer perceptron (MLP), a neural network trained with error backpropagation [8, 20].
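The 10-fold cross-validation procedure used to evaluate these classifiers can be sketched in plain Python (a simplified, unstratified illustration; the study relied on WEKA's built-in implementation):

```python
def k_fold_indices(n, k=10):
    """Split indices 0..n-1 into k contiguous folds of near-equal size."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)  # spread the remainder over the first folds
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cross_validate(records, labels, fit, predict, k=10):
    """Hold out each fold in turn, train on the rest, and pool the
    (prediction, truth) pairs so every record is scored exactly once
    by a model that never saw it during training."""
    pooled = []
    for test_idx in k_fold_indices(len(records), k):
        held_out = set(test_idx)
        train = [(records[i], labels[i])
                 for i in range(len(records)) if i not in held_out]
        model = fit(train)
        pooled.extend((predict(model, records[i]), labels[i]) for i in test_idx)
    return pooled
```

In practice WEKA also stratifies the folds so that each preserves the class ratio; this sketch omits that detail.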

Results

The performance of each algorithm was evaluated on its ability to predict whether a patient seen in the emergency department would subsequently be admitted to the hospital, taking into consideration only the features presented in Table 1. All algorithms were evaluated on both data sets (the original one with missing values and the one modified using the ReplaceMissingValues filter), and the detailed results are presented in the Appendix (Tables A1-A16). The classification results on the original data set, in terms of the F-Measure and ROC Area of each algorithm, are summarized in Table 2 and Figure 2.
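WEKA's ReplaceMissingValues filter substitutes each missing numeric value with the attribute's mean and each missing nominal value with its mode. A minimal pure-Python equivalent, assuming missing entries are encoded as None, could look like this:

```python
def replace_missing(column):
    """Impute None entries in one attribute column: attribute mean for
    numeric columns, attribute mode for nominal ones (mirroring the
    behaviour of WEKA's ReplaceMissingValues filter)."""
    present = [v for v in column if v is not None]
    if not present:
        return column[:]  # nothing to learn the replacement value from
    if all(isinstance(v, (int, float)) for v in present):
        fill = sum(present) / len(present)           # attribute mean
    else:
        fill = max(set(present), key=present.count)  # attribute mode
    return [fill if v is None else v for v in column]
```

Applied column by column, this yields the second data set on which the algorithms were evaluated.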
Table 2

Weighted Average values of F-Measure and ROC Area for all methods (10-fold cross-validation)

Method                       F-Measure  ROC Area
NaiveBayes                   0.679      0.734
Logistic Regression          0.697      0.762
AdaBoost                     0.685      0.753
LogitBoost                   0.708      0.774
ClassificationViaRegression  0.691      0.760
Random Forest                0.689      0.757
Bagging                      0.703      0.764
Multilayer perceptron        0.707      0.742
Fig. 2

Weighted Average values of F-Measure and ROC Area for all methods (10-fold cross-validation)

According to Table 2, considering the weighted average values, LogitBoost slightly outperformed the other models with respect to both F-Measure and ROC Area, with values of 0.708 and 0.774, respectively. The ranges of the F-Measure and ROC Area values of all eight evaluated algorithms were [0.679-0.708] and [0.734-0.774], respectively, and can be considered acceptable [21]. The F-Measure and ROC Area results on the data set in which the missing values were replaced using WEKA's ReplaceMissingValues filter are summarized in Table 3 and Figure 3.
Table 3

Weighted Average values of F-Measure and ROC Area for all methods -ReplaceMissingValues filters (10-fold cross-validation)

Method                       F-Measure  ROC Area
NaiveBayes                   0.663      0.741
Logistic Regression          0.696      0.765
AdaBoost                     0.674      0.731
LogitBoost                   0.704      0.757
ClassificationViaRegression  0.691      0.758
Random Forest                0.723      0.789
Bagging                      0.712      0.775
Multilayer perceptron        0.697      0.740
Fig. 3

Weighted Average values of F-Measure and ROC Area for all methods - ReplaceMissingValues filters (10-fold cross-validation)

According to Table 3, considering the weighted average values, the Random Forest slightly outperformed the other models with respect to both F-Measure and ROC Area, with values of 0.723 and 0.789, respectively. The ranges of the F-Measure and ROC Area values of all eight algorithms were [0.663-0.723] and [0.731-0.789], respectively, and, as previously noted, they can also be considered acceptable. We were positively surprised to see that the impact of missing values on the classifiers' performance was less pronounced than we had initially expected. Furthermore, since 1,175 patients were admitted versus 2,029 who were not, we applied WEKA's ClassBalancer technique to both data sets and re-evaluated the performance of the two best classifiers (LogitBoost and Random Forest). After the application of the ClassBalancer filter to the original data set, the performance of LogitBoost (F-Measure: 0.693; ROC Area: 0.773) (Table A17) is quite similar to that on the imbalanced data set (F-Measure: 0.708; ROC Area: 0.774). We observe similar behavior for the Random Forest before (F-Measure: 0.723; ROC Area: 0.789) and after (F-Measure: 0.704; ROC Area: 0.784) (Table A18) the application of the ClassBalancer filter to the data set in which the missing values had been replaced using the ReplaceMissingValues filter.
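WEKA's ClassBalancer reweights instances so that each class carries the same total weight while the overall total stays equal to the number of instances. Under that definition, a small sketch of the reweighting is:

```python
from collections import Counter

def class_balancer_weights(labels):
    """Assign each instance a weight so that every class sums to the
    same total weight and the overall total equals len(labels),
    mirroring WEKA's ClassBalancer filter."""
    counts = Counter(labels)
    per_class_total = len(labels) / len(counts)  # equal share per class
    return [per_class_total / counts[y] for y in labels]
```

With 1,175 admitted and 2,029 non-admitted visits, each admitted instance would receive a weight of about 1.36 and each non-admitted one about 0.79, so both classes contribute equally during training.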

Discussion

Based on the data from 3,204 adult ED visits, using common laboratory tests and basic demographics, we evaluated eight ML algorithms that generated models able to reliably predict the hospital admission of patients seen in the ED. Our study utilized pre-existing patient data from a standard HIS and LIS. Therefore, the methods proposed here can serve as a valuable tool for the clinician deciding whether or not to admit an ED patient. The main advantages of this tool include easy access, availability, a yes/no result, and low cost. The clinical implications of our approach might be significant and might facilitate a shift from traditional clinical decision-making to a more sophisticated model. The application of machine learning techniques in the ED is not entirely new, yet it is not considered the standard of care. Current efforts aim to develop and integrate clinical decision support systems able to provide objective criteria to healthcare professionals. Our study is consistent with previous research showing that logistic regression is the most frequently used technique for model design and that the area under the receiver operating characteristic curve (AUC) is the most frequently used performance measure [22]. Moreover, the major goal of such predictive tools is to accurately identify high-risk patients, differentiate them from stable, low-risk patients who can be safely discharged from the ED [23], and communicate this identification to the medical expert, who can take this information into account while deciding on admission or discharge. The hectic pace of work and the stressful setting of the ED have negative consequences for patient safety [24, 25]. It is well established that human factors play an important role in the efficiency of healthcare systems, and different error types have different underlying mechanisms requiring specific methods of risk management [26]. A dreaded error for the emergency physician is failing to admit a seriously ill patient.
Our methods might be useful to reduce these errors while explicitly acknowledging that they are meant to aid and not substitute clinical judgment. In summary, we present an inexpensive clinical decision support tool derived from readily available patient data. This tool is intended to aid the emergency physician regarding hospital admission decisions, as the development of machine learning models represents a rapidly evolving field in healthcare.

Limitations

This study is not without limitations. In our analysis, we did not include clinical parameters such as vital signs and the Emergency Severity Index (ESI) [27], because we aimed to investigate whether our model could identify hospital admissions without taking clinical data into account. Thus, we included a limited set of input variables in order to present a low-cost decision support tool using the minimum data available from our HIS. There were also missing values in the data we collected and analyzed; for example, not all of the analyzed ED visits had all the laboratory investigations available. Furthermore, our preliminary findings have not yet been followed up by an implementation phase, and the proposed algorithms have not been validated in a pragmatic ED trial. Therefore, future research is warranted to demonstrate whether they can actually improve care.

Conclusions

In this study, we evaluated a collection of very popular ML classifiers on data from an ED. The proposed algorithms generated models which demonstrated acceptable performance in predicting hospital admission of ED patients based on common biochemical markers, coagulation tests, basic demographics, ambulance utilization, and triage disposition to the ED unit. Our research confirms the prevalent current notion that the utilization of artificial intelligence may have a favorable impact on the future of emergency medicine.
Table A1

Performance results by class of NaiveBayes (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.360    0.091    0.696      0.360   0.474      0.330  0.734     0.602
No             0.909    0.640    0.710      0.909   0.797      0.330  0.734     0.811
Weighted Avg.  0.708    0.439    0.705      0.708   0.679      0.330  0.734     0.734
Table A2

Performance results by class of NaiveBayes –ReplaceMissingValues filters (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.329    0.091    0.677      0.329   0.442      0.300  0.741     0.603
No             0.909    0.671    0.700      0.909   0.791      0.300  0.741     0.822
Weighted Avg.  0.696    0.458    0.692      0.696   0.663      0.300  0.741     0.742
Table A3

Performance results by class of Logistic Regression (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.456    0.142    0.650      0.456   0.536      0.346  0.762     0.641
No             0.858    0.544    0.732      0.858   0.790      0.346  0.762     0.838
Weighted Avg.  0.711    0.396    0.702      0.711   0.697      0.346  0.762     0.766
Table A4

Performance results by class of Logistic Regression–ReplaceMissingValues filters (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.456    0.143    0.649      0.456   0.536      0.345  0.765     0.643
No             0.857    0.544    0.731      0.857   0.789      0.345  0.765     0.841
Weighted Avg.  0.710    0.397    0.701      0.710   0.696      0.345  0.765     0.768
Table A5

Performance results by class of AdaBoost (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.423    0.137    0.642      0.423   0.510      0.323  0.753     0.620
No             0.863    0.577    0.721      0.863   0.786      0.323  0.753     0.836
Weighted Avg.  0.702    0.415    0.692      0.702   0.685      0.323  0.753     0.757
Table A6

Performance results by class of AdaBoost–ReplaceMissingValues filters (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.396    0.133    0.634      0.396   0.487      0.302  0.731     0.604
No             0.867    0.604    0.713      0.867   0.782      0.302  0.731     0.815
Weighted Avg.  0.694    0.431    0.684      0.694   0.674      0.302  0.731     0.738
Table A7

Performance results by class of LogitBoost (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.489    0.147    0.658      0.489   0.561      0.370  0.774     0.657
No             0.853    0.511    0.742      0.853   0.794      0.370  0.774     0.854
Weighted Avg.  0.719    0.377    0.711      0.719   0.708      0.370  0.774     0.782
Table A8

Performance results by class of LogitBoost –ReplaceMissingValues filters (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.464    0.135    0.666      0.464   0.547      0.364  0.757     0.641
No             0.865    0.536    0.736      0.865   0.795      0.364  0.757     0.837
Weighted Avg.  0.718    0.389    0.710      0.718   0.704      0.364  0.757     0.765
Table A9

Performance results by class of ClassificationViaRegression (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.447    0.145    0.641      0.447   0.527      0.334  0.760     0.639
No             0.855    0.553    0.727      0.855   0.786      0.334  0.760     0.839
Weighted Avg.  0.705    0.403    0.696      0.705   0.691      0.334  0.760     0.766
Table A10

Performance results by class of ClassificationViaRegression–ReplaceMissingValues filters (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.447    0.145    0.641      0.447   0.527      0.334  0.758     0.638
No             0.855    0.553    0.727      0.855   0.786      0.334  0.758     0.837
Weighted Avg.  0.705    0.403    0.696      0.705   0.691      0.334  0.758     0.764
Table A11

Performance results by class of Random Forest (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.394    0.103    0.689      0.394   0.501      0.345  0.757     0.650
No             0.897    0.606    0.719      0.897   0.798      0.345  0.757     0.832
Weighted Avg.  0.713    0.422    0.708      0.713   0.689      0.345  0.757     0.765
Table A12

Performance results by class of Random Forest–ReplaceMissingValues filters (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.540    0.161    0.660      0.540   0.594      0.399  0.789     0.676
No             0.839    0.460    0.759      0.839   0.797      0.399  0.789     0.858
Weighted Avg.  0.729    0.350    0.723      0.729   0.723      0.399  0.789     0.791
Table A13

Performance results by class of Bagging (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.471    0.142    0.657      0.471   0.549      0.360  0.764     0.654
No             0.858    0.529    0.737      0.858   0.793      0.360  0.764     0.840
Weighted Avg.  0.716    0.387    0.708      0.716   0.703      0.360  0.764     0.772
Table A14

Performance results by class of Bagging–ReplaceMissingValues filters (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.515    0.161    0.649      0.515   0.574      0.375  0.775     0.654
No             0.839    0.485    0.749      0.839   0.791      0.375  0.775     0.852
Weighted Avg.  0.720    0.366    0.712      0.720   0.712      0.375  0.775     0.779
Table A15

Performance results by class of Multilayer perceptron (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.542    0.190    0.623      0.542   0.580      0.364  0.742     0.622
No             0.810    0.458    0.753      0.810   0.781      0.364  0.742     0.815
Weighted Avg.  0.712    0.360    0.705      0.712   0.707      0.364  0.742     0.744
Table A16

Performance results by class of Multilayer perceptron–ReplaceMissingValues filters (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.480    0.161    0.633      0.480   0.546      0.343  0.740     0.617
No             0.839    0.520    0.736      0.839   0.784      0.343  0.740     0.808
Weighted Avg.  0.707    0.388    0.698      0.707   0.697      0.343  0.740     0.738
Table A17

Performance results by class of Logit Boost– ClassBalancer filter (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.685    0.300    0.696      0.685   0.690      0.385  0.773     0.758
No             0.700    0.315    0.690      0.700   0.695      0.385  0.773     0.779
Weighted Avg.  0.693    0.307    0.693      0.693   0.693      0.385  0.773     0.769
Table A18

Performance results by class of Random Forest– ReplaceMissingValues and ClassBalancer filters (10-fold cross-validation)

Admission      TP Rate  FP Rate  Precision  Recall  F-Measure  MCC    ROC Area  PRC Area
Yes            0.653    0.243    0.729      0.653   0.689      0.412  0.784     0.767
No             0.757    0.347    0.686      0.757   0.720      0.412  0.784     0.783
Weighted Avg.  0.705    0.295    0.707      0.705   0.704      0.412  0.784     0.775

1.  Receiver operating characteristic curve in diagnostic test assessment.

Authors:  Jayawant N Mandrekar
Journal:  J Thorac Oncol       Date:  2010-09       Impact factor: 15.609

2.  Clinical Decision Support Systems for Triage in the Emergency Department using Intelligent Systems: a Review.

Authors:  Marta Fernandes; Susana M Vieira; Francisca Leite; Carlos Palos; Stan Finkelstein; João M C Sousa
Journal:  Artif Intell Med       Date:  2019-11-17       Impact factor: 5.326

3.  Introducing Machine Learning Concepts with WEKA.

Authors:  Tony C Smith; Eibe Frank
Journal:  Methods Mol Biol       Date:  2016

4.  Understanding adverse events: human factors.

Authors:  J Reason
Journal:  Qual Health Care       Date:  1995-06

5.  Effect of emergency department crowding on outcomes of admitted patients.

Authors:  Benjamin C Sun; Renee Y Hsia; Robert E Weiss; David Zingmond; Li-Jung Liang; Weijuan Han; Heather McCreath; Steven M Asch
Journal:  Ann Emerg Med       Date:  2012-12-06       Impact factor: 5.721

6.  Machine-Learning-Based Electronic Triage More Accurately Differentiates Patients With Respect to Clinical Outcomes Compared With the Emergency Severity Index.

Authors:  Scott Levin; Matthew Toerper; Eric Hamrock; Jeremiah S Hinson; Sean Barnes; Heather Gardner; Andrea Dugas; Bob Linton; Tom Kirsch; Gabor Kelen
Journal:  Ann Emerg Med       Date:  2017-09-06       Impact factor: 5.721

7.  Using Machine Learning Techniques to Aid Empirical Antibiotic Therapy Decisions in the Intensive Care Unit of a General Hospital in Greece.

Authors:  Georgios Feretzakis; Evangelos Loupelis; Aikaterini Sakagianni; Dimitris Kalles; Maria Martsoukou; Malvina Lada; Nikoletta Skarmoutsou; Constantinos Christopoulos; Konstantinos Valakis; Aikaterini Velentza; Stavroula Petropoulou; Sophia Michelidou; Konstantinos Alexiou
Journal:  Antibiotics (Basel)       Date:  2020-01-31

8.  Association of door-to-balloon time and mortality in patients admitted to hospital with ST elevation myocardial infarction: national cohort study.

Authors:  Saif S Rathore; Jeptha P Curtis; Jersey Chen; Yongfei Wang; Brahmajee K Nallamothu; Andrew J Epstein; Harlan M Krumholz
Journal:  BMJ       Date:  2009-05-19

9.  Strategies for reducing medication errors in the emergency department.

Authors:  Kyle A Weant; Abby M Bailey; Stephanie N Baker
Journal:  Open Access Emerg Med       Date:  2014-07-23

10.  Emergency department crowding: A systematic review of causes, consequences and solutions.

Authors:  Claire Morley; Maria Unwin; Gregory M Peterson; Jim Stankovich; Leigh Kinsman
Journal:  PLoS One       Date:  2018-08-30       Impact factor: 3.240

