
Development of a low-dimensional model to predict admissions from triage at a pediatric emergency department.

Fiona Leonard1, John Gilligan2, Michael J Barrett3,4.   

Abstract

Objectives: This study aims to develop and internally validate a low-dimensional model to predict outcomes (admission or discharge) using commonly entered data up to the post-triage process to improve patient flow in the pediatric emergency department (ED). In hospital settings where electronic data are limited, a low-dimensional model with fewer variables may be easier to implement.
Methods: This prognostic study included ED attendances in 2017 and 2018. The Cross Industry Standard Process for Data Mining methodology was followed. Eligibility criteria were applied to the data set, which was then split into 70% training and 30% test sets. Sampling techniques were compared. Gradient boosting machine (GBM), logistic regression, and naïve Bayes models were created. Variables of importance were obtained from the model with the highest area under the curve (AUC) and used to create a low-dimensional model.
Results: Eligible attendances totaled 72,229 (15% admission rate). The AUC was 0.853 (95% confidence interval [CI], 0.846-0.859) for GBM, 0.845 (95% CI, 0.838-0.852) for logistic regression, and 0.813 (95% CI, 0.806-0.821) for naïve Bayes. Important predictors in the GBM model used to create a low-dimensional model were presenting complaint, triage category, referral source, registration month, location type (resuscitation/other), distance traveled, admission history, and weekday (AUC 0.835 [95% CI, 0.829-0.842]).
Conclusions: Admission and discharge probability can be predicted early in a pediatric ED using 8 variables. Future work could analyze the false positives and false negatives to gain an understanding of the implementation of these predictions.
© 2022 The Authors. JACEP Open published by Wiley Periodicals LLC on behalf of American College of Emergency Physicians.


Year:  2022        PMID: 35859857      PMCID: PMC9286530          DOI: 10.1002/emp2.12779

Source DB:  PubMed          Journal:  J Am Coll Emerg Physicians Open        ISSN: 2688-1152


INTRODUCTION

Background and importance

Emergency department (ED) overcrowding is a global health issue. Rising incidences of hospital overcrowding have led to an increase in studies attempting to tackle the problem by early prediction of ED admissions using routinely collected data. Sinclair proposes that the most influential factor contributing to overcrowding in the pediatric ED is the presence of hospital-admitted patients (boarders). Inpatient boarders in the ED treatment area increase the wait time for other ED patients and can also cause less acute patients to leave. The implementation of a predictive analytics tool centered on potential admission predictions from the ED, rapidly forecasting patient disposition post triage, may alleviate ED overcrowding by directly improving ED patient flow. Early studies on predicting admissions from the ED focused on clinical judgment-based prediction and on comparing this judgment with machine learning methods. More recent research has suggested that a combined approach using both machine learning and clinical judgment would achieve the best results. Other studies centered on specific patient cohorts, such as acute bronchiolitis; some used progressive modeling approaches, and others used natural language processing to create predictors. Later studies derived scores from logistic regression (LR) models to determine risk of admission. The implementation of models to predict admissions and discharges from the ED using electronic health record data will depend on the level of information technology maturity specific to the technology, people, and processes within each hospital environment. A low-dimensional model with fewer variables may be simpler to implement in settings that have limited data recorded in electronic format.

Goals of this investigation

The primary objective of this study is to develop and internally validate a low‐dimensional predictive model from a pediatric ED based on a limited data set of input variables. We define a low‐dimensional model as including as few variables as possible while maintaining a good (comparable to previous studies) discrimination measure (area under the curve [AUC]). A low‐dimensional model will be developed, composed of the most important predictors from the initial models (using all variables in the data set) to further demonstrate what can be achieved in countries without a robust electronic health care record system. This would highlight the important variables to focus on for the data collection process. A model composed of data generated at triage will provide early notice of ED patients requiring hospital admission and those to be fast tracked for discharge. The initial models will be used to compare different sampling strategies and machine learning algorithms, with the best performing model being selected to develop the final low‐dimensional model.

METHODS

Study setting and design

ED data were analyzed from 1 acute tertiary pediatric multiuniversity-affiliated teaching hospital in the Republic of Ireland, which also serves as a secondary care facility for the regional pediatric population, with a yearly census of approximately 39,000 visits. This retrospective study was approved by the Hospital Research and Ethics Committee of Children's Health Ireland at Crumlin (formerly Our Lady's Children's Hospital, Crumlin) (GEN/693/18). The data mining methodology, Cross Industry Standard Process for Data Mining (CRISP-DM), guided the study, consisting of business understanding, data understanding, data preparation, modeling, evaluation, and deployment. All data extraction and transformation were carried out in Microsoft SQL Server Management Studio. Data understanding, model development, and evaluation were performed in RStudio Version 1.1.456. The reporting guidelines for this prognostic study, as set out by Transparent Reporting of a Multivariable Prediction Model for Individual Prognosis or Diagnosis (TRIPOD), were followed. A protocol was developed and published for this study. The performance of models developed using 3 different machine learning algorithms and various sampling strategies was compared. The top variables of importance were selected from the model with the highest AUC and used to create an additional low-dimensional model, with a greater focus on generalizability.

Data collection and transformation

Attendances to the ED from 2017 to 2018, comprising routinely captured data entered up to post-triage (the end of the triage process, where the patient is assigned to the next physical space [ED location] and the type of clinician who will see the patient [clinician type]), were included. Two years of data were selected to provide a robust representation of variables and incorporate seasonal changes. Based on previous studies, we excluded patients over 18 years of age; visits with a "did not wait" outcome (including left before being seen and left before completion of treatment), as their outcome was non-deterministic had the patient stayed; and patients with missing data (sex, triage category, and health care record number). As outlined in our protocol, missing data were analyzed to assess the most appropriate method to address them; listwise deletion was selected due to the small percentage and the assumption that data were missing at random. Patients returning for specimen collection or day-case management were also excluded, as their inclusion could be a potential source of bias. The data set creation, outcome variable (admission or discharge), and initial variable selection are outlined in our protocol and consisted of 34 variables extracted from 3 separate databases. The outcome of "admission" included attendances with a discharge outcome of "admission," "transferred to another hospital for admission," and "died in department." The data set represents demographics, registration details, triage assessment (triage category was allocated using the Irish Children's Triage System), hospital usage, past medical history, first ED location allocated post-triage (grouped into patients going to resuscitation and "other" for all other ED spaces), and first clinician type seen. The raw data set was analyzed using location, dispersion, and shape statistics for continuous variables and stacked bar charts for categorical variables.
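The eligibility rules above can be sketched as a simple record filter. This is an illustrative Python sketch (the study's pipeline used SQL Server and R), and the field names (`age_years`, `outcome`, `sex`, `triage_category`, `hcr_number`) are hypothetical, not the study's actual schema:

```python
def is_eligible(visit: dict) -> bool:
    """Apply the exclusion rules described above: over-18s, 'did not wait'
    outcomes, and listwise deletion of records missing key fields."""
    if visit.get("age_years") is None or visit["age_years"] > 18:
        return False
    if visit.get("outcome") in {"did not wait",
                                "left before being seen",
                                "left before completion of treatment"}:
        return False
    # Listwise deletion: drop rows missing sex, triage category, or HCR number
    for field in ("sex", "triage_category", "hcr_number"):
        if not visit.get(field):
            return False
    return True

visits = [
    {"age_years": 4, "outcome": "admission", "sex": "F",
     "triage_category": 3, "hcr_number": "A1"},
    {"age_years": 19, "outcome": "discharge", "sex": "M",
     "triage_category": 2, "hcr_number": "A2"},   # excluded: over 18
    {"age_years": 7, "outcome": "did not wait", "sex": "F",
     "triage_category": 4, "hcr_number": "A3"},   # excluded: did not wait
    {"age_years": 2, "outcome": "discharge", "sex": None,
     "triage_category": 3, "hcr_number": "A4"},   # excluded: missing sex
]
eligible = [v for v in visits if is_eligible(v)]
```

In the study the same filtering reduced 75,676 attendances to 72,229 eligible records.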
After the data understanding process, the final data set was formalized through feature engineering tasks (Table 1). All continuous variables were converted to categorical after an initial evaluation through a LR model using their original state, with the decision to transform based on model performance.
TABLE 1

Feature engineering tasks performed to transform variables

Variable: transformation description

Disposition (outcome variable): Transformed to 0 or 1; assigned a value of 1 for a discharge outcome of "admission," "transferred to another hospital for admission" [22, 28, 29], or "died in department" [3]. All other discharge outcomes defaulted to 0.
Age: Split into 5 groups: neonate (0–28 days), infant (28 days–1 year), preschool (2–5 years), school age (6–12 years), and adolescent (13–18 years).
Registration hour: Grouped into 4-hour intervals [25, 30]: 0–4, 4–8, 8–12, 12–16, 16–20, and 20–24.
Arrival mode: Recoded to "Ambulance," "Private Transport," and "Other."
Referral source: Grouped into "Self," "Other Hospitals," "General Practitioner," "Clinic," and "Other."
Triage category: To mitigate possible quasi-complete separation [31] for triage 1 (most acute) and 5, triage was grouped into 3 categories: 1–2, 3, and 4–5.
Presenting complaint: To lower the cardinality that may impact model performance [32], presenting complaint categories were reduced, based on clinician expertise, from 177 separate categories down to 58.
Complex chronic conditions: Eleven new binary variables based on pediatric complex chronic conditions [33], derived using the ICD-10-AM diagnoses from all previous admissions.
Diagnosis related groups: Based on specific cohorts of patients frequenting this facility, the diagnosis related groups from admissions in the last 3 years were included as binary variables. Blood immunology and digestive system groups were created.
Distance travelled: Calculated using the GPS coordinates from the patient's address to the hospital site and grouped into bands of 0–2, 2–4, 4–6, 6–10, 10–20, 20–40, 40–60, 60–100, and 100+ kilometers.
Emergency department location: First location the patient was assigned to at the end of triage, grouped into "Resuscitation" and "Other."
Clinician type: Type of clinician the patient was assigned to post-triage, grouped into "Advanced Nurse Practitioner" and "Other."
Infection control alert: Encoded to 1 if a value is present and 0 if absent.
Number of emergency department attendances in the last year: Recoded into groups of 0, 1, 2, 3–4, and 5+ attendances.
Number of admissions in the last year: Recoded into groups of 0, 1, 2, 3, 4, and 5+ admissions.
Admitted in last 7 days: Encoded to 1 if a value is present and 0 if absent.
Admitted in last 30 days: Encoded to 1 if a value is present and 0 if absent.
Admitted in last 3 years: Encoded to 1 if a value is present and 0 if absent.
Any previous admission: Encoded to 1 if a value is present and 0 if absent.

Abbreviation: ICD‐10‐AM, International Statistical Classification of Diseases, Tenth Revision, Australian Modification
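The binning steps in Table 1 can be sketched as small lookup functions. This is a hedged Python illustration (the study used R); exact boundary handling at group edges (e.g., whether exactly 2 km falls in the 0–2 or 2–4 band) is an assumption, as the paper lists only the band labels:

```python
def age_group(age_days: int) -> str:
    """Bin age into the 5 groups from Table 1 (edge handling assumed)."""
    if age_days <= 28:
        return "neonate"
    years = age_days / 365.25
    if years < 2:
        return "infant"
    if years <= 5:
        return "preschool"
    if years <= 12:
        return "school age"
    return "adolescent"

def distance_band(km: float) -> str:
    """Bin distance travelled into the kilometre groupings from Table 1."""
    edges = [(2, "0-2"), (4, "2-4"), (6, "4-6"), (10, "6-10"), (20, "10-20"),
             (40, "20-40"), (60, "40-60"), (100, "60-100")]
    for upper, label in edges:
        if km < upper:
            return label
    return "100+"
```

Categorical encodings like these replaced the continuous variables after an initial logistic regression comparison showed no loss of performance.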


Statistical analyses

Descriptive statistics consisting of the number of visits and percentage were used to analyze the data. χ2 statistics were used to assess independence of each variable with respect to the outcome. Following previous research, the data were partitioned into approximately 70% training and 30% testing using random sampling, while maintaining the relative ratios of the values in the variables. The training set was used in model creation, and the test set was used for internal validation, providing an unbiased evaluation of the resulting models. The 3 machine learning algorithms, LR, naïve Bayes (NB), and gradient boosting machine (GBM), were selected based on their use in previous research. Sampling approaches and detailed configuration for the machine learning algorithms can be found in Table SE1 (Appendix E1). Evaluation of model performance using the test set was measured primarily by AUC, with 95% confidence intervals (CIs) computed using the DeLong method. Sensitivity, specificity, accuracy, positive predictive value (PPV), and negative predictive value (NPV) were used as secondary measurements with 95% CIs. To compare the secondary measurements across the different machine learning algorithms, threshold-moving by fixing the specificity at 90% was applied. The variables of importance were obtained from the model with the highest AUC; for the GBM model, this was achieved using relative influence, based on the associated average decrease in mean squared error. The top variables of importance from the GBM reference model were used to create an additional low-dimensional model that could potentially be deployed as a decision support tool.
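The threshold-moving step can be illustrated in a few lines: scan candidate thresholds and take the lowest one whose specificity reaches the 90% target, then report the secondary measures at that operating point. A pure-Python sketch (the study computed these in R, with DeLong CIs, which are omitted here):

```python
def metrics_at_fixed_specificity(y_true, scores, target_spec=0.90):
    """Return the operating point with maximal sensitivity subject to
    specificity >= target_spec (the lowest qualifying threshold)."""
    for t in sorted(set(scores)):  # ascending: specificity rises with t
        pred = [1 if s >= t else 0 for s in scores]
        tp = sum(p and y for p, y in zip(pred, y_true))
        fp = sum(p and not y for p, y in zip(pred, y_true))
        tn = sum((not p) and (not y) for p, y in zip(pred, y_true))
        fn = sum((not p) and y for p, y in zip(pred, y_true))
        spec = tn / (tn + fp)
        if spec >= target_spec:
            return {"threshold": t,
                    "sensitivity": tp / (tp + fn),
                    "specificity": spec,
                    "ppv": tp / (tp + fp) if tp + fp else None,
                    "npv": tn / (tn + fn) if tn + fn else None}
    return None  # no threshold reaches the target specificity
```

Fixing specificity like this makes the sensitivity, PPV, and NPV of different algorithms directly comparable, as in Table 4.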

RESULTS

Characteristics of study dataset

The combined ED census for 2017 and 2018 was 75,676. A total of 3447 attendances were excluded from this study. Most patients (66%) who did not wait had left before triage or were low acuity (Category 4–5). Attendances excluded solely due to missing data were 357 (less than 0.5%). The final data set had 72,229 attendances related to 44,944 unique patients. The admission rate at approximately 15% was distributed between the training and test sets, emphasizing the class imbalance between “admission” and “discharged” (Figure 1).
FIGURE 1

Total visits to the emergency department in 2017 and 2018, summarizing exclusions and data partitioning between training and test set

Neonates had the highest rate of admission at 34.4% but the lowest rate of attendance to the ED (1.9%). Most admissions were infants, at 35.5%. The admission rate increased with triage acuity, from 39.5% for triage 1–2 down to 5% for triage 4–5. Patients allocated at triage to one of the resuscitation bays showed a high admission rate of 55.2%. The greater the distance a patient traveled to the ED, the more likely the visit was to result in admission, with admission rates ranging from 9.9% for 0–2 km up to 29.3% for patients traveling 100 km or more. Most patients arrived by private transport (94.6%), but those arriving by ambulance had a higher admission rate (32.4%). Patients referred from other hospitals had a greater rate of admission, at 47.5%. Interestingly, 58.4% of patient visits had no previous attendance to the ED and 84.5% had no previous admissions, both within 1 year of the patient's attendance; 63.2% of ED visits had no previous admissions at all. The presenting complaint of "injury" had the highest number of attendances, making up 17.1% of the total, with a 5% admission rate. Although psychiatric presentations accounted for just 0.5% of attendances, their admission rate was the highest at 44.1% (Table 2).
TABLE 2

Descriptive statistics for each variable with respect to the outcome

Variable / Value                      Admission            Discharged           P value
                                      No. visits (%)       No. visits (%)
Age                                                                             <0.001
  0–28 days                           479 (4.5%)           915 (1.5%)
  28 days–23 months                   3818 (35.5%)         16,469 (26.8%)
  2–5 years                           2567 (23.8%)         17,830 (29.0%)
  6–12 years                          2564 (23.8%)         18,878 (30.7%)
  13–18 years                         1336 (12.4%)         7373 (12.0%)
Sex                                                                             0.036
  Male                                5926 (55.1%)         34,509 (56.1%)
  Female                              4838 (44.9%)         26,956 (43.9%)
Arrival mode                                                                    <0.001
  Private transport                   9515 (88.4%)         58,823 (95.7%)
  Ambulance                           1232 (11.4%)         2566 (4.2%)
  Other                               17 (0.2%)            76 (0.1%)
Referral source                                                                 <0.001
  Self                                6771 (62.9%)         42,872 (69.8%)
  Other hospitals                     1072 (10.0%)         1184 (1.9%)
  Swift/other clinic                  119 (1.1%)           1073 (1.7%)
  General practitioner                2701 (25.1%)         16,019 (26.1%)
  Other                               101 (0.9%)           317 (0.5%)
Distance traveled (km)                                                          <0.001
  0–2                                 723 (6.7%)           6566 (10.7%)
  2–4                                 1430 (13.3%)         11,852 (19.3%)
  4–6                                 1732 (16.1%)         9885 (16.1%)
  6–10                                1638 (15.2%)         9480 (15.4%)
  10–20                               1904 (17.7%)         10,223 (16.6%)
  20–40                               1489 (13.8%)         7039 (11.5%)
  40–60                               634 (5.9%)           2696 (4.4%)
  60–100                              795 (7.4%)           2712 (4.4%)
  100+                                419 (3.9%)           1012 (1.6%)
Triage category                                                                 <0.001
  1–2                                 5128 (47.6%)         7869 (12.8%)
  3                                   4093 (38.0%)         24,495 (39.8%)
  4–5                                 1543 (14.3%)         29,120 (47.4%)
Emergency department location                                                   <0.001
  Resuscitation                       2420 (22.5%)         1968 (3.2%)
  Other                               8344 (77.5%)         59,497 (96.8%)
Number of attendances in last year                                              <0.001
  0                                   6142 (57.1%)         36,063 (58.7%)
  1                                   2093 (19.4%)         13,243 (21.5%)
  2                                   1034 (9.6%)          5708 (9.3%)
  3–4                                 925 (8.6%)           4206 (6.8%)
  5+                                  570 (5.3%)           2245 (3.7%)
Number of admissions in last year                                               <0.001
  0                                   8046 (74.7%)         53,022 (86.3%)
  1                                   1353 (12.6%)         5595 (9.1%)
  2                                   553 (5.1%)           1502 (2.4%)
  3                                   289 (2.7%)           560 (0.9%)
  4                                   171 (1.6%)           294 (0.5%)
  5+                                  352 (3.3%)           492 (0.8%)
Admission history                                                               <0.001
  0 (none)                            5682 (52.8%)         39,978 (65.0%)
  1 (any)                             5082 (47.2%)         21,487 (35.0%)
Presenting complaint (top 5)                                                    <0.001
  Injury                              611 (5.7%)           11,723 (19.1%)
  Vomiting                            1079 (10.0%)         3831 (6.2%)
  Difficulty breathing                1101 (10.2%)         3572 (5.8%)
  Abdominal pain                      613 (5.7%)           3599 (5.9%)
  Fever                               644 (6.0%)           2916 (4.7%)
The χ2 test for each variable produced P < 0.001, except for sex, weekday, month, and the complex chronic condition of "Miscellaneous Other." Because a stepwise approach was taken to model creation, including these variables had no negative impact on performance, with the Akaike information criterion decreasing with the addition of each variable (some of these steps are presented in Table 3). Further descriptive statistics and results of the χ2 tests for significance are presented in Table SE2 (Appendix E1).
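The χ2 independence tests above can be illustrated with the Pearson statistic computed directly from a contingency table. A minimal Python sketch using the ED location counts from Table 2 (a 2x2 table has 1 degree of freedom, so a statistic above the critical value 10.83 corresponds to P < 0.001):

```python
def chi2_statistic(table):
    """Pearson chi-square statistic for a 2D contingency table:
    sum over cells of (observed - expected)^2 / expected."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand_total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand_total
            stat += (observed - expected) ** 2 / expected
    return stat

# ED location vs. outcome, counts from Table 2:
# rows = admission, discharged; columns = resuscitation, other
stat = chi2_statistic([[2420, 8344], [1968, 59497]])
# df = 1; stat far exceeds 10.83, hence P < 0.001 for this variable
```

The same computation applied to each variable in Table 2 reproduces the pattern of P values reported above.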
TABLE 3

Stepwise approach to variable inclusion using logistic regression

Variables                                              AIC        Delta AIC   AUC (95% CI)
Triage only (1, 2, 3, 4, 5)                            –          –           0.745 (0.738–0.752)
Triage only (grouped 1–2, 3, 4–5)                      36,207.7   4895.0      0.740 (0.734–0.747)
+ Age group                                            35,986.3   4673.6      0.748 (0.739–0.757)
+ Arrival mode                                         35,871.6   4558.9      0.757 (0.750–0.764)
+ Referral source                                      33,931.4   2618.7      0.793 (0.785–0.799)
+ Distance in kilometers                               33,719.1   2406.4      0.800 (0.793–0.808)
+ Admission history                                    33,363.2   2050.5      0.805 (0.799–0.812)
+ Reattender (within 7 days)                           33,115.3   1802.6      0.813 (0.806–0.820)
+ Presenting complaint                                 32,223.3   910.6       0.830 (0.823–0.837)
+ Emergency department location (resuscitation/other)  31,694.2   381.5       0.829 (0.822–0.837)
+ Admitted in last 3 years                             31,691.0   378.3       0.827 (0.820–0.834)
+ Number of visits in last year                        31,667.0   354.3       0.832 (0.826–0.839)
+ Blood immunology group                               31,595.2   282.5       0.837 (0.831–0.844)
+ Digestive system group                               31,579.5   266.8       0.835 (0.828–0.841)
+ Admitted in last 7 days                              31,547.4   234.7       0.834 (0.827–0.841)
+ Admitted in last 30 days                             31,529.5   216.8       0.835 (0.828–0.842)
+ Number of admissions in last year                    31,475.1   162.4       0.834 (0.827–0.840)
+ Clinician type (ANP or other)                        31,413.4   100.7       0.834 (0.828–0.841)
+ Infection control alert                              31,409.8   97.1        0.835 (0.828–0.842)
+ Complex chronic conditions                           31,379.7   67.0        0.832 (0.825–0.839)
+ Registration hour                                    31,347.6   34.9        0.838 (0.832–0.845)
+ Sex                                                  31,326.2   13.5        0.834 (0.827–0.841)
+ Weekday                                              31,320.3   7.6         0.834 (0.827–0.841)
+ Registration month (all variables)                   31,312.7   0           0.845 (0.838–0.852)

Note: Delta AIC shows the difference in AIC between the model with the best fit (lowest AIC) and the comparison model.

Abbreviations: AIC, Akaike information criterion; ANP, advanced nurse practitioner; AUC, area under the curve; CI, confidence interval.


Model generation and evaluation

We observed no significant improvement in AUC by applying the additional sampling methods. The GBM model achieved the highest AUC at 0.853 (95% CI, 0.846–0.859) and the highest sensitivity at 56.39 (95% CI, 54.71–58.06), accuracy 85.07 (95% CI, 84.61–85.53), PPV 49.21 (95% CI, 47.63–50.79), and NPV 92.32 (95% CI, 91.93–92.69), making it the best performing model based on AUC alone (Table 4). Results of the application of additional sampling methods can be found in Table SE3 (Appendix E1).
TABLE 4

Performance of machine learning algorithm at a fixed specificity of 90%, evaluated using the test set

Machine learning algorithm   AUC (95% CI)           % Sensitivity (95% CI)   % Accuracy (95% CI)    % PPV (95% CI)         % NPV (95% CI)
Naïve Bayes                  0.812 (0.805–0.819)    46.00 (44.32–47.69)      83.55 (83.07–84.02)    44.15 (42.51–45.79)    90.66 (90.24–91.06)
Logistic regression          0.845 (0.838–0.852)    55.28 (53.60–56.96)      84.91 (84.45–85.37)    48.71 (47.13–50.30)    92.14 (91.75–92.51)
Gradient boosting machine    0.853 (0.846–0.859)    56.39 (54.71–58.06)      85.07 (84.61–85.53)    49.21 (47.63–50.79)    92.32 (91.93–92.69)

Abbreviations: AUC, area under the curve; CI, confidence interval; NPV, negative predictive value; PPV, positive predictive value.

Incidentally, we also looked at the impact of removing patients presenting with a psychiatric complaint from the LR model, due to their high admission rate. This resulted in an AUC of 0.842 (95% CI, 0.835–0.849), compared to 0.845 (95% CI, 0.838–0.852) for the model that included psychiatric presentations. Variables representing presenting complaint, triage category, referral source, registration month, first ED location after triage (resuscitation or other), distance traveled, history of any admission, and weekday were the top predictors from our GBM model (Figure 2). The rationale for selecting the top 8 variables was based on the resulting AUC: the top 7 variables, which excluded weekday, resulted in an AUC of 0.832 (95% CI, 0.825–0.838), and the top 9 variables (including registration hour) produced an AUC of 0.833 (95% CI, 0.826–0.840). From the top 8 variables, a low-dimensional GBM model generated an AUC of 0.835 (95% CI, 0.829–0.842) and, with a fixed specificity of 90%, produced a sensitivity of 52.61 (95% CI, 50.93–54.27), accuracy of 84.58 (95% CI, 84.12–85.03), PPV of 47.16 (95% CI, 45.59–48.75), and NPV of 91.80 (95% CI, 91.41–92.17).
FIGURE 2

Variable importance according to the reference model for gradient boosting machine. Importance measured by the average decrease in mean squared error. Abbreviations: CCC, complex chronic condition; DRG, diagnosis related group; ED, emergency department


LIMITATIONS

Although this research was limited to data from 1 ED of a tertiary standalone pediatric hospital in Ireland, our study shows that the variables of importance consist of data that are routinely collected in most hospitals, which increases model generalizability. Generalizability was central to our approach to developing a low-dimensional model. We acknowledge that data such as serial age-specific vitals, prehospital interventions, pain scores, medications, anthropometrics, and tests ordered could potentially improve model performance, but the objective of this study was to develop a model based on limited electronic data availability. As outlined in our protocol, predictor-based limitations include the lack of standardization of presenting complaints, triage categories, and the grouping of ED location into "resuscitation" or "other," whose inclusion proved significant. Although a strong predictor in our model, admission history may contain past admissions unrelated to the reason for presentation to the ED.

DISCUSSION

This study created prediction models that are pediatric focused, concentrating on low-dimensional early prediction, comparing the performance of different machine learning algorithms to identify the optimal model, and proposing utility for certain stakeholders (clinicians and bed flow managers). Machine learning algorithms can predict the probability of admission and discharge in the pediatric ED using routinely collected data captured up to the post-triage process. We identified 8 key predictors to create a further low-dimensional prediction model, demonstrating what can be achieved with a minimum data set, whose predictive ability differed little from the model that included 33 variables. Our GBM reference model with all predictors produced an AUC of 0.853, which outperforms many other studies. Compared to other pediatric studies, Goto et al focused on predicting hospitalization at triage and achieved an AUC of 0.80. Barak-Corren et al used a progressive time approach and achieved an AUC of 0.868 after 10 minutes and 0.913 after 60 minutes; however, their study included data volumes (6009 variables) that may not be available in many hospitals. A lower number of variables may be more readily accessible, and to generalize further, we followed the approach of Hong et al of identifying the top variables of importance from the best performing model to create a further low-dimensional model. The low-dimensional model was developed to demonstrate what could be achieved with even fewer variables; this GBM model achieved an AUC of 0.835. Apparent from this model is the exclusion of patient age, which has been described as highly significant in many studies, indicating a difference between general (adult and pediatric) and exclusively pediatric settings. Age is a central component of the Irish Children's Triage System, and therefore it is still reflected in the model.
Also evident is the position of "admission history" as a top predictor; pediatric patient histories are shorter, and including an indication of any previous admission proved significant. Other hospitals with limited data may adopt a similar approach of discovering useful variables, which may differ from the variables and characteristics used in this and previous research, to develop a prediction model. Although the output of this prediction model could be used as a binary score, the output as a probability would have far more use in practice. For ED clinician use, instead of limiting the predictive capabilities to admission or discharge, the probability output could be used for streaming into "assessment for admission," "fast track for discharge," and "senior review." Compared to a non-streamed approach, evidence has shown that grouping ED patients into work streams results in a reduction in wait times and a shorter length of stay in the ED, with limited evidence that grouping patients solely based on whether or not they are going to be hospitalized improves ED patient flow. Sun et al proposed that the model threshold should be adjusted based on the intended use in practice (clinician or bed manager). Because our model produces a lower sensitivity, the 8-variable model with the threshold set at 0.5 achieves a specificity of 97.27 and could be used by clinicians to identify patients to "fast track for discharge" or "assessment for admission." Patients with an intermediate probability may be more appropriate for the "senior review" stream, as their decision to admit or discharge may be more difficult to assess. In the ED, senior clinician review improves patient safety and departmental flow by increasing the accuracy of disposition decisions. For bed management use, a low threshold of 0.28 on our model produces a sensitivity of 52.61. This would assist bed managers in identifying patients to be admitted, thereby speeding up the advance bed planning process.
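The streaming logic discussed above can be sketched as a simple threshold rule using the two operating points reported for the 8-variable model (0.5 for clinician use, 0.28 for bed management); treating the band between them as "senior review" is an assumption for illustration:

```python
# Operating points from the 8-variable GBM model described above.
CLINICIAN_THRESHOLD = 0.5     # yields ~97% specificity (clinician use)
BED_MANAGER_THRESHOLD = 0.28  # yields ~53% sensitivity (bed management)

def stream(admission_probability: float) -> str:
    """Map a predicted admission probability to a hypothetical work stream."""
    if admission_probability >= CLINICIAN_THRESHOLD:
        return "assessment for admission"
    if admission_probability >= BED_MANAGER_THRESHOLD:
        return "senior review"
    return "fast track for discharge"
```

Adjusting the two constants lets each stakeholder trade sensitivity against specificity for their own workflow, as Sun et al propose.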
Summing the individual admission probabilities would provide a total number of beds required, without over-inflating the estimate. Bed managers might also use this total bed demand to make decisions such as expediting inpatient discharges to create capacity. There are many different approaches to deploying the output of the model. Some researchers suggest embedding the prediction results within the electronic medical record, which would be useful as additional decision support information in ED patient tracking systems. System alerts could be used to prompt triage nurses to trigger a bed request for patients with a high probability of admission while awaiting clinical decision making. A simpler approach would be for the model to power a dynamic dashboard; the output of our prediction models could be displayed on a real-time dashboard for clinician or bed management use. Many groups have highlighted the benefit of using dashboards in the ED to visualize current status and overcrowding; adding an indication of potential admissions and patients to be fast tracked for discharge at an early stage could significantly improve situational awareness, expediting patient care. Prototypes demonstrating the intended use are shown in Figures SE1 and SE2 (Appendix E1). This study has developed a low-dimensional model that predicts admissions and discharges from a pediatric ED using data collected early in the patient's journey. Future work could analyze both the false positives and false negatives to gain an understanding of the implementation of these prediction models. Further research could expand this study beyond 1 hospital site to a network of hospitals. This would further the development of a robust and reliable model capable of positively affecting the patient's journey.
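The bed-demand aggregation can be sketched directly: the expected number of admissions is the sum of the individual probabilities (the expectation of a sum of Bernoulli variables). The optional planning buffer below, a normal-approximation percentile of the Poisson-binomial total, is an assumption for illustration, not part of the study:

```python
import math

def expected_beds(probs):
    """Expected bed demand: sum of per-patient admission probabilities."""
    return sum(probs)

def beds_with_buffer(probs, z=1.28):
    """Planning estimate with a safety margin of z standard deviations of
    the Poisson-binomial total (z=1.28 ~ 90th percentile; an assumption)."""
    mean = sum(probs)
    variance = sum(p * (1 - p) for p in probs)
    return math.ceil(mean + z * math.sqrt(variance))
```

For example, 100 waiting patients each with a 0.5 admission probability imply an expected demand of 50 beds, with the buffered estimate adding a few more for variability.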

CONFLICTS OF INTEREST

All authors have no conflicts of interest to disclose.

AUTHOR CONTRIBUTIONS

Fiona Leonard, John Gilligan, and Michael J. Barrett conceived the study. Fiona Leonard extracted the data, carried out the analyses (with guidance from John Gilligan and Michael J. Barrett), and prepared and finalized the manuscript. Fiona Leonard, John Gilligan, and Michael J. Barrett have made a substantial contribution to this manuscript, give their final approval of the submitted version, and agree to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.
REFERENCES (38 in total)

1.  Predicting hospital admission in a pediatric Emergency Department using an Artificial Neural Network.

Authors:  Jeffrey Leegon; Ian Jones; Kevin Lanaghan; Dominik Aronsky
Journal:  AMIA Annu Symp Proc       Date:  2006

2.  Progressive prediction of hospitalisation in the emergency department: uncovering hidden patterns to improve patient flow.

Authors:  Yuval Barak-Corren; Shlomo Hanan Israelit; Ben Y Reis
Journal:  Emerg Med J       Date:  2017-02-10       Impact factor: 2.740

3.  Early prediction of hospital admission for emergency department patients: a comparison between patients younger or older than 70 years.

Authors:  Jacinta A Lucke; Jelle de Gelder; Fleur Clarijs; Christian Heringhaus; Anton J M de Craen; Anne J Fogteloo; Gerard J Blauw; Bas de Groot; Simon P Mooijaart
Journal:  Emerg Med J       Date:  2017-08-16       Impact factor: 2.740

4.  Predicting emergency department inpatient admissions to improve same-day patient flow.

Authors:  Jordan S Peck; James C Benneyan; Deborah J Nightingale; Stephan A Gaehde
Journal:  Acad Emerg Med       Date:  2012-09       Impact factor: 3.451

5.  Can emergency department nurses performing triage predict the need for admission?

Authors:  Iain Beardsell; Sarah Robinson
Journal:  Emerg Med J       Date:  2010-10-20       Impact factor: 2.740

6.  Emergency department overcrowding - implications for paediatric emergency medicine.

Authors:  Douglas Sinclair
Journal:  Paediatr Child Health       Date:  2007-07       Impact factor: 2.253

7.  Machine-Learning-Based Electronic Triage More Accurately Differentiates Patients With Respect to Clinical Outcomes Compared With the Emergency Severity Index.

Authors:  Scott Levin; Matthew Toerper; Eric Hamrock; Jeremiah S Hinson; Sean Barnes; Heather Gardner; Andrea Dugas; Bob Linton; Tom Kirsch; Gabor Kelen
Journal:  Ann Emerg Med       Date:  2017-09-06       Impact factor: 5.721

8.  Transparent Reporting of a multivariable prediction model for Individual Prognosis or Diagnosis (TRIPOD): the TRIPOD statement.

Authors:  Gary S Collins; Johannes B Reitsma; Douglas G Altman; Karel G M Moons
Journal:  Ann Intern Med       Date:  2015-01-06       Impact factor: 25.391

9.  A simple tool to predict admission at the time of triage.

Authors:  Allan Cameron; Kenneth Rodgers; Alastair Ireland; Ravi Jamdar; Gerard A McKay
Journal:  Emerg Med J       Date:  2014-01-13       Impact factor: 2.740

10.  Machine Learning-Based Prediction of Clinical Outcomes for Children During Emergency Department Triage.

Authors:  Tadahiro Goto; Carlos A Camargo; Mohammad Kamal Faridi; Robert J Freishtat; Kohei Hasegawa
Journal:  JAMA Netw Open       Date:  2019-01-04
