| Literature DB >> 34958305 |
Li-Hung Yao1, Ka-Chun Leung1, Chu-Lin Tsai2, Chien-Hua Huang2, Li-Chen Fu1.
Abstract
BACKGROUND: Emergency department (ED) crowding has resulted in delayed patient treatment and has become a universal health care problem. Although a triage system, such as the 5-level emergency severity index, somewhat improves the process of ED treatment, it still heavily relies on the nurse's subjective judgment and triages too many patients to emergency severity index level 3 in current practice. Hence, a system that can help clinicians accurately triage a patient's condition is imperative.Entities:
Keywords: data to text; deep learning; electronic health record; emergency department; hospital admission; triage system
Year: 2021 PMID: 34958305 PMCID: PMC8749584 DOI: 10.2196/27008
Source DB: PubMed Journal: J Med Internet Res ISSN: 1438-8871 Impact factor: 5.428
Figure 1. System overview. EMR: electronic medical record; ED: emergency department.
Figure 2. The method of data transformation.
Figure 3. Network architecture of the triage engine. BiGRU: bidirectional gated recurrent unit; CNN: convolutional neural network; GRU: gated recurrent unit; RNN: recurrent neural network.
Figure 4. Recurrent neural network–type part of the triage engine. BiGRU: bidirectional gated recurrent unit.
Figure 5. Convolutional neural network–type part of the triage engine. CNN: convolutional neural network.
Figure 6. Architecture of the pyramid convolutional neural network for text.
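The figure captions above describe a triage engine that combines a BiGRU branch with a pyramid-CNN branch and attention layers. The paper does not reproduce the implementation here, but the attention-pooling step common to such architectures can be sketched in NumPy; all weights, dimensions, and names below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D score vector."""
    e = np.exp(x - x.max())
    return e / e.sum()

def attention_pool(hidden_states, w, b, u):
    """Additive attention pooling over recurrent hidden states.

    hidden_states: (seq_len, dim) outputs of a BiGRU-style encoder.
    w, b, u: learned projection parameters (random placeholders here).
    Returns a (dim,) context vector -- the attention-weighted sum of states.
    """
    # Score each time step: u^T tanh(W h_t + b)
    scores = np.tanh(hidden_states @ w + b) @ u   # (seq_len,)
    alphas = softmax(scores)                      # attention weights, sum to 1
    return alphas @ hidden_states                 # weighted sum -> (dim,)

rng = np.random.default_rng(0)
H = rng.normal(size=(12, 64))                 # 12 tokens, 64-dim hidden states
w = rng.normal(size=(64, 64))
b = np.zeros(64)
u = rng.normal(size=64)
context = attention_pool(H, w, b, u)
print(context.shape)  # (64,)
```

In the full model, a vector pooled this way from each branch would be concatenated and passed to a classification head.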
Performance on the National Hospital Ambulatory Medical Care Survey data set using different methods.
| Model | Sensitivity | Specificity | Accuracy | AUROCa |
| BiLSTMb only | 0.756 | 0.768 | 0.767 | 0.850 |
| BiLSTM+Attc | 0.711 | 0.822 | 0.809 | 0.854 |
| BiLSTM+2×Att | 0.745 | 0.802 | 0.796 | 0.856 |
| BiGRUd only | 0.744 | 0.780 | 0.776 | 0.854 |
| BiGRU+Att | 0.757 | 0.804 | 0.798 | 0.863 |
| BiGRU+2×Att | 0.764 | 0.809 | 0.801 | 0.866 |
| CNNse (with 3 kernels) | 0.756 | 0.768 | 0.767 | 0.850 |
| Pyramid CNN (3 kernels) | 0.727 | 0.813 | 0.804 | 0.855 |
| Pyramid CNN (3 kernels) with attention layer | 0.731 | 0.825 | 0.819 | 0.862 |
| Our model | 0.755 | | | |
aAUROC: area under the receiver operating characteristic curve.
bBiLSTM: bidirectional Long Short-Term Memory.
cAtt: attention layer.
dBiGRU: bidirectional gated recurrent unit.
eCNN: convolutional neural network.
fItalics indicate metrics in which our model achieved the best performance among the compared models.
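The sensitivity, specificity, and accuracy columns in the table above follow the standard binary-classification definitions (admission as the positive class). A minimal sketch with made-up labels, not the study's data:

```python
def binary_metrics(y_true, y_pred):
    """Sensitivity, specificity, and accuracy from binary labels (1 = positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)        # recall on positives (e.g. admissions)
    specificity = tn / (tn + fp)        # recall on negatives (e.g. discharges)
    accuracy = (tp + tn) / len(y_true)  # overall fraction correct
    return sensitivity, specificity, accuracy

sens, spec, acc = binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0])
print(round(sens, 3), round(spec, 3), round(acc, 3))  # 0.667 0.667 0.667
```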
Comparison with baseline algorithms in the National Hospital Ambulatory Medical Care Survey data set.
| Model | Sensitivity | Specificity | Precision | F1 score | Accuracy | AUROCa |
| Logistic regression | 0.747 | 0.741 | 0.745 | 0.745 | 0.744 | 0.825 |
| XGBoostb | 0.761 | 0.736 | 0.749 | 0.748 | 0.748 | 0.834 |
| Random forest | 0.781 | 0.715 | 0.748 | 0.747 | 0.747 | 0.828 |
| BERTc | 0.789 | 0.768 | 0.773 | 0.781 | 0.779 | 0.852 |
| Our model | 0.755 | | 0.759 | | | |
aAUROC: area under the receiver operating characteristic curve.
bXGBoost: extreme gradient boosting.
cBERT: Bidirectional Encoder Representations From Transformers.
dItalics indicate metrics in which our model achieved the best performance among the compared models.
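The AUROC reported throughout these tables equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the Mann-Whitney interpretation). A small pairwise-comparison sketch with illustrative scores only:

```python
def auroc(y_true, scores):
    """AUROC as the Mann-Whitney probability P(score_pos > score_neg),
    counting ties as one half."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

print(auroc([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))  # 0.75
```

Unlike sensitivity and specificity, this value does not depend on any single decision threshold, which is why it is the headline comparison metric here.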
Performance on the National Taiwan University Hospital data set using different methods.
| Method | Sensitivity | Specificity | Accuracy | AUROCa |
| BiLSTMb only | 0.748 | 0.792 | 0.770 | 0.848 |
| BiLSTM+Attc | 0.740 | 0.822 | 0.781 | 0.862 |
| BiLSTM+2×Att | 0.774 | 0.800 | 0.785 | 0.867 |
| BiGRUd only | 0.768 | 0.780 | 0.774 | 0.855 |
| BiGRU+Att | 0.805 | 0.767 | 0.786 | 0.866 |
| BiGRU+2×Att | 0.800 | 0.785 | 0.808 | 0.872 |
| CNNse (with 3 kernels) | 0.780 | 0.803 | 0.791 | 0.868 |
| Pyramid CNN (3 kernels) | 0.784 | 0.793 | 0.798 | 0.868 |
| Pyramid CNN (3 kernels) with attention layer | 0.754 | 0.823 | 0.788 | 0.871 |
| Our model | 0.768 | 0.819 | | |
aAUROC: area under the receiver operating characteristic curve.
bBiLSTM: bidirectional Long Short-Term Memory.
cAtt: attention layer.
dBiGRU: bidirectional gated recurrent unit.
eCNN: convolutional neural network.
fItalics indicate metrics in which our model achieved the best performance among the compared models.
Comparison with baseline algorithms in the National Taiwan University Hospital data set.
| Model | Sensitivity | Specificity | Precision | F1 score | Accuracy | AUROCa |
| Logistic regression | 0.705 | 0.805 | 0.758 | 0.755 | 0.756 | 0.830 |
| XGBoostb | 0.745 | 0.785 | 0.766 | 0.765 | 0.765 | 0.840 |
| Random forest | 0.739 | 0.784 | 0.762 | 0.761 | 0.762 | 0.840 |
| DNNc+BiGRUd | 0.744 | 0.775 | 0.771 | 0.766 | 0.771 | 0.858 |
| BERTe | 0.736 | 0.789 | 0.777 | 0.756 | 0.763 | 0.844 |
| Our model | | | | | | |
aAUROC: area under the receiver operating characteristic curve.
bXGBoost: extreme gradient boosting.
cDNN: deep neural network.
dBiGRU: bidirectional gated recurrent unit.
eBERT: Bidirectional Encoder Representations From Transformers.
fItalics indicate metrics in which our model achieved the best performance among the compared models.
Performance of different research studies.
| Study | Methods | Data set | Sensitivity | Specificity | Accuracy | AUROCa |
| Raita et al | DNNb | NHAMCSc | 0.790 | 0.710 | —d | 0.820 |
| Zhang et al | NLPe+PCAf+LRg | NHAMCS | — | — | — | 0.846 |
| Yan Sun et al | LR | Private | — | — | — | 0.849 |
| Graham et al | GBMh | Private | 0.535 | 0.899 | 0.800 | 0.859 |
| Our model | BiGRUi+Attj+PyCNNk | NHAMCS | 0.654 | | | |
| Our model | BiGRU+Att+PyCNN | NTUHm | 0.606 | 0.852 | 0.806 | |
aAUROC: area under the receiver operating characteristic curve.
bDNN: deep neural network.
cNHAMCS: National Hospital Ambulatory Medical Care Survey.
dNot available.
eNLP: natural language processing.
fPCA: principal component analysis.
gLR: logistic regression.
hGBM: gradient boosted machines.
iBiGRU: bidirectional gated recurrent unit.
jAtt: attention layer.
kPyCNN: pyramid convolutional neural network.
lItalics indicate metrics in which our model achieved the best performance among the compared models.
mNTUH: National Taiwan University Hospital.
Performance of mortality rate prediction on the National Taiwan University Hospital data set.
| Model | Sensitivity | Specificity | Precision | F1 score | Accuracy | AUROCa |
| Logistic regression | 0.903 | 0.887 | 0.895 | 0.895 | 0.896 | 0.954 |
| XGBoostb | 0.926 | 0.913 | 0.909 | 0.919 | 0.919 | 0.962 |
| Random forest | 0.933 | 0.898 | 0.915 | 0.915 | 0.916 | 0.958 |
| Our model | 0.917 | | | | | |
aAUROC: area under the receiver operating characteristic curve.
bXGBoost: extreme gradient boosting.
cItalics indicate metrics in which our model achieved the best performance among the compared models.
Performance of prediction of intensive care unit admission on the National Hospital Ambulatory Medical Care Survey data set.
| Model | Sensitivity | Specificity | Precision | F1 score | Accuracy | AUROCa |
| Logistic regression | 0.787 | 0.734 | 0.761 | 0.760 | 0.761 | 0.845 |
| XGBoostb | 0.823 | 0.708 | 0.769 | 0.764 | 0.765 | 0.849 |
| Random forest | 0.876 | 0.707 | 0.800 | 0.790 | 0.792 | 0.861 |
| Our model | 0.805 | | | | | |
aAUROC: area under the receiver operating characteristic curve.
bXGBoost: extreme gradient boosting.
cItalics indicate metrics in which our model achieved the best performance among the compared models.
Performance of prediction of intensive care unit admission on the National Taiwan University Hospital data set.
| Model | Sensitivity | Specificity | Precision | F1 score | Accuracy | AUROCa |
| Logistic regression | 0.811 | 0.846 | 0.829 | 0.828 | 0.828 | 0.905 |
| XGBoostb | 0.831 | 0.831 | 0.829 | 0.832 | 0.830 | 0.917 |
| Random forest | 0.833 | 0.828 | 0.831 | 0.830 | 0.831 | 0.911 |
| Our model | 0.823 | | | | | |
aAUROC: area under the receiver operating characteristic curve.
bXGBoost: extreme gradient boosting.
cItalics indicate metrics in which our model achieved the best performance among the compared models.