Md Mohaimenul Islam, Guo-Hung Li, Tahmina Nasrin Poly, Yu-Chuan Jack Li.
Abstract
Nowadays, the use of diagnosis-related groups (DRGs) to claim reimbursement for inpatient care has increased. The overall benefit of using DRGs depends on the accuracy of clinical coding to obtain reasonable reimbursement. However, selecting appropriate codes is always challenging and requires professional expertise. The rate of incorrect DRGs remains high due to heavy workloads, poor documentation quality, and a lack of computer assistance. We therefore developed deep learning (DL) models to predict the primary diagnosis, in order to support appropriate reimbursement and improve hospital performance. A dataset of 81,486 patients with 128,105 episodes was used for model training and testing. Patients' age, sex, drugs, diseases, laboratory tests, procedures, and operation history were used as inputs to our multiclass prediction model. Gated recurrent unit (GRU) and artificial neural network (ANN) models were developed to predict 200 primary diagnoses. The performance of the DL models was measured by the area under the receiver operating characteristic curve (AUROC), precision, recall, and F1 score. Of the two DL models, the GRU model performed best in predicting the primary diagnosis (AUC: 0.99, precision: 83.2%, recall: 66.0%), while the ANN model achieved an AUC of 0.99 with a precision of 82% and a recall of 57%. Our findings show that DL algorithms, especially GRU, can be used to develop DRG prediction models that identify the primary diagnosis accurately. DeepDRGs would help hospitals claim appropriate financial incentives, enable proper utilization of medical resources, and improve hospital performance.
Keywords: artificial intelligence; deep learning; diagnosis-related groups; hospital expenditure
Year: 2021 PMID: 34946357 PMCID: PMC8701302 DOI: 10.3390/healthcare9121632
Source DB: PubMed Journal: Healthcare (Basel) ISSN: 2227-9032
Figure 1. Simple structure of ANN.
Figure 2. Architecture of gated recurrent unit.
Figure 3. The methodological framework used in this study.
Figure 4. Architecture of GRU model.
Figure 5. Testing loss of the GRU model.
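The GRU architecture (Figure 2) combines a reset gate and an update gate that control how much of the previous hidden state is carried forward at each step of an input sequence. A minimal NumPy sketch of a single GRU cell follows; the weight shapes, random initialization, and sequence length here are illustrative assumptions, not the paper's actual configuration:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: gates decide how much of h to keep vs. rewrite."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # interpolated new state

# Illustrative sizes: 10 input features, hidden state of 8 units.
rng = np.random.default_rng(0)
n_in, n_hid = 10, 8
params = (
    rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_hid, n_hid)), np.zeros(n_hid),
    rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_hid, n_hid)), np.zeros(n_hid),
    rng.normal(size=(n_hid, n_in)), rng.normal(size=(n_hid, n_hid)), np.zeros(n_hid),
)
h = np.zeros(n_hid)
for x in rng.normal(size=(5, n_in)):   # a 5-step input sequence
    h = gru_cell(x, h, params)
```

Because the new state is a convex combination of the old state and a tanh-bounded candidate, the hidden activations stay within (-1, 1), which is what makes GRUs stable over long episode sequences.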
Basic characteristics of the included patients.
| Variable | Number/Percentage |
|---|---|
| Total number of episodes | 128,105 |
| Total number of patients | 81,486 |
| Age group | |
| 0~20 | 4.51% |
| 20~40 | 54.76% |
| 40~60 | 42.79% |
| >60 | 0.02% |
| Gender | |
| Male | 74.65% |
| Female | 25.35% |
| Operation | |
| Yes | 87.78% |
| No | 12.22% |
| Additional diagnosis | |
| Yes | 70.98% |
| No | 29.02% |
| Procedure | |
| Yes | 98.82% |
| No | 1.18% |
| Drug | |
| Yes | 99.58% |
| No | 0.42% |
| Number of drugs input | 461 |
| Number of diseases input | 200 |
| Number of procedures input | 1636 |
| Number of operations input | 927 |
| Number of outputs (primary diagnoses) | 200 |
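The input counts above (461 drugs, 1,636 procedures, 927 operations, 200 diagnosis codes) imply that each episode can be represented as a fixed-length multi-hot vector with demographic features appended. A small sketch of such an encoding follows; the vocabularies and index maps are made-up placeholders, since the paper does not publish its exact feature layout:

```python
# Hypothetical vocabularies; the real study uses 461 drugs, 1,636 procedures, etc.
DRUG_INDEX = {"metformin": 0, "aspirin": 1, "ceftriaxone": 2}
PROC_INDEX = {"ureteroscopy": 0, "cystoscopy": 1}

def multi_hot(items, index):
    """Fixed-length 0/1 vector: 1 at each position whose code is present."""
    vec = [0] * len(index)
    for item in items:
        if item in index:
            vec[index[item]] = 1
    return vec

def encode_episode(age_group, sex, drugs, procedures):
    # Demographics as simple scalars, clinical codes as multi-hot blocks.
    return [age_group, 1 if sex == "male" else 0] \
        + multi_hot(drugs, DRUG_INDEX) \
        + multi_hot(procedures, PROC_INDEX)

x = encode_episode(2, "male", ["aspirin"], ["cystoscopy", "ureteroscopy"])
# → [2, 1, 0, 1, 0, 1, 1]
```

With the study's full vocabularies, the same scheme yields a vector of a few thousand dimensions per episode, suitable as input to either the ANN or, per time step, the GRU.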
Performance of deep learning models.
| Model | Precision | Recall | F1-Score | Accuracy | AUROC | Ranking Loss |
|---|---|---|---|---|---|---|
| GRU | 0.83 | 0.66 | 0.73 | 0.72 | 0.99 | 0.01 |
| ANN | 0.82 | 0.57 | 0.67 | 0.68 | 0.99 | 0.01 |
Note: AUROC is micro-averaged.
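The reported precision (0.83), recall (0.66), and accuracy (0.72) all differ, which is consistent with per-class (macro-style) averaging over the 200 diagnosis labels rather than micro averaging, where precision, recall, and accuracy coincide for single-label multiclass prediction. A small sketch of per-class precision/recall/F1 with macro averaging, on toy labels rather than the study's data:

```python
from collections import Counter

def macro_prf(y_true, y_pred):
    """Per-class precision/recall/F1, then unweighted (macro) average."""
    classes = sorted(set(y_true) | set(y_pred))
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1        # correct prediction for class t
        else:
            fp[p] += 1        # p was predicted but wrong
            fn[t] += 1        # t was the truth but missed
    precs, recs, f1s = [], [], []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        precs.append(prec); recs.append(rec); f1s.append(f1)
    n = len(classes)
    return sum(precs) / n, sum(recs) / n, sum(f1s) / n

# Toy 3-class example (diagnosis codes stand in for the 200 real labels).
p, r, f = macro_prf([0, 0, 1, 1, 2, 2], [0, 1, 1, 1, 2, 2])
```

Here class 0 has precision 1.0 but recall 0.5, class 1 precision 2/3 but recall 1.0, and class 2 is perfect, so the macro averages land between them, mirroring how rare diagnosis codes can pull recall below precision in the tables above.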
Sensitivity analysis.
| Basic Info | Drug | Procedure | Operation | Additional ICD | Precision | Recall | F1-Score | Accuracy | Micro-AUC | Label Ranking Loss |
|---|---|---|---|---|---|---|---|---|---|---|
| V | V | V | V | V | 0.83 | 0.65 | 0.73 | 0.726 | 0.99 | 0.01 |
| V | V | V | V | 0.76 | 0.60 | 0.67 | 0.671 | 0.99 | 0.01 | |
| V | V | V | V | 0.70 | 0.31 | 0.43 | 0.481 | 0.97 | 0.03 | |
| V | V | V | V | 0.55 | 0.42 | 0.47 | 0.465 | 0.92 | 0.06 | |
| V | V | V | V | 0.81 | 0.56 | 0.66 | 0.632 | 0.98 | 0.03 | |
| V | V | 0.08 | 0.02 | 0.04 | 0.059 | 0.79 | 0.16 | |||
| V | V | 0.26 | 0.04 | 0.07 | 0.211 | 0.92 | 0.08 | |||
| V | V | 0.52 | 0.33 | 0.41 | 0.373 | 0.88 | 0.09 | |||
| V | V | 0.01 | 0.005 | 0.006 | 0.026 | 0.75 | 0.19 | |||
| V | 0.001 | 0 | 0.001 | 0.006 | 0.73 | 0.21 |
Evaluation of the performance of GRU for predicting primary diagnosis.
| Example | Age | Sex | Original Primary Diagnosis | Predicted Primary Diagnosis | Top 5 Primary Diagnoses |
|---|---|---|---|---|---|
| Patient #1 | 20–40 | Male | Calculus of ureter | Calculus of ureter | 1. Calculus of ureter |
| Patient #2 | 20–40 | Female | Calculus of kidney | Calculus of kidney | 1. Calculus of kidney |
| Patient #3 | 20–40 | Male | Malignant bladder neoplasm, part unspecified | Malignant bladder neoplasm, part unspecified | 1. Malignant bladder neoplasm, part unspecified |
| Patient #4 | 20–40 | Male | Malignant bladder neoplasm, other specified sites | Malignant bladder neoplasm, part unspecified | 1. Malignant bladder neoplasm, part unspecified |
| Patient #5 | 20–40 | Male | Acute pyelonephritis without lesion of renal medullary necrosis | Urinary tract infection, site not specified | 1. Urinary tract infection, site not specified |