Loan R van Hoeven1,2, Aukje L Kreuger3,4, Kit CB Roes1, Peter F Kemper2,4, Hendrik Koffijberg5, Floris J Kranenburg3,4,6, Jan MM Rondeel7, Mart P Janssen1,2.
Abstract
BACKGROUND: To enhance the utility of transfusion data for research, ideally every transfusion should be linked to a primary clinical indication. In electronic patient records, many diagnostic and procedural codes are registered, but unfortunately, it is usually not specified which one is the reason for transfusion. Therefore, a method is needed to determine the most likely indication for transfusion in an automated way. STUDY DESIGN AND METHODS: An algorithm to identify the most likely transfusion indication was developed and evaluated against a gold standard based on the review of medical records for 234 cases by 2 experts. In a second step, information on misclassification was used to fine-tune the initial algorithm. The adapted algorithm predicts, out of all data available, the most likely indication for transfusion using information on medical specialism, surgical procedures, and diagnosis and procedure dates relative to the transfusion date.
Keywords: electronic health record data; indication for transfusion; selection algorithm
Year: 2018 PMID: 29636633 PMCID: PMC5881526 DOI: 10.2147/CLEP.S147142
Source DB: PubMed Journal: Clin Epidemiol ISSN: 1179-1349 Impact factor: 4.790
Number of diagnosis codes per patient from which the algorithm had to choose the one most likely to be the transfusion indication
| Diagnoses per patient | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | ≥13 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| % of patients | 2 | 16 | 27 | 23 | 14 | 8 | 5 | 2 | 1 | 1 | 0 | 0.2 | 0.05 | 0.03 |
Figure 1. Initial algorithm rules.
Notes: (A) Diagnosis selection. (B) Procedure selection.
Order of diagnosis and procedure specialisms used for attributing indication to transfusion in the initial algorithm and after adjustment in the adapted algorithm, from high to low priority
| Specialism | Diagnosis order (initial) | Diagnosis order (adapted) | Procedure order |
|---|---|---|---|
| Cardiopulmonary surgery | 1 | 1 | 1 |
| Gynecology | 2 | 4 | 9 |
| Gastroenterology | 3 | 3 | 10 |
| Internal medicine: hematology | 4 | 2 | 11 |
| Surgery: transplantation | 5 | 5 | 2 |
| Surgery: vascular surgery | 6 | 6 | 3 |
| Surgery: traumatology and first aid | 7 | 7 | 4 |
| Surgery: oncology and lung and gastrointestinal surgery | 8 | 8 | 5 |
| Surgery: general surgery and pediatric surgery | 9 | 9 | 6 |
| Orthopedics | 10 | 10 | 7 |
| Urology | 11 | 11 | 12 |
| Anesthesiology | 12 | 21 | 13 |
| Neurosurgery | 13 | 13 | 8 |
| Throat nose ear | 14 | 17 | 14 |
| Plastic surgery | 15 | 12 | 15 |
| Pediatrics | 16 | 16 | 16 |
| Consultative psychiatry | 17 | 18 | 17 |
| Neurology | 18 | 19 | 18 |
| Cardiology | 19 | 20 | 19 |
| Internal medicine: nonhematology | 20 | 14 | 20 |
| Lung medicine | 21 | 15 | 21 |
| Ophthalmology | 22 | 22 | 22 |
| Clinical geriatrics | 23 | 23 | 23 |
| Radiotherapy | 24 | 24 | 24 |
| Dermatology | 25 | 25 | 25 |
| Rehabilitation medicine | 26 | 26 | 26 |
| Geriatric rehabilitation care | 27 | 27 | 27 |
| Rheumatology | 28 | 28 | 28 |
| Allergology | 29 | × | 29 |
| Clinical genetics | 30 | × | 30 |
| Radiology | 31 | × | 31 |
| Audiology | 32 | × | 32 |
Note: The × symbols indicate not applicable.
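The core of the selection step is a fixed priority ordering over specialisms: among all diagnosis codes registered for a patient, the algorithm attributes the transfusion to the code whose specialism ranks highest. The sketch below illustrates that idea only; the data structures, the `select_indication` helper, and the example diagnosis codes are hypothetical, and only a few entries of the adapted diagnosis ordering from the table above are included.

```python
# Hypothetical sketch of the priority-based selection step: among all
# diagnosis codes registered for a patient, pick the one whose specialism
# ranks highest in the adapted ordering (lower rank = higher priority).

# Partial priority list taken from the "Diagnosis order (adapted)" column above.
DIAGNOSIS_PRIORITY = {
    "Cardiopulmonary surgery": 1,
    "Internal medicine: hematology": 2,
    "Gastroenterology": 3,
    "Gynecology": 4,
}

def select_indication(diagnoses):
    """Return the (specialism, code) pair with the highest-priority
    specialism, or None if no specialism appears in the priority list.

    `diagnoses` is a list of (specialism, diagnosis_code) tuples.
    """
    ranked = [d for d in diagnoses if d[0] in DIAGNOSIS_PRIORITY]
    if not ranked:
        return None
    return min(ranked, key=lambda d: DIAGNOSIS_PRIORITY[d[0]])

# Example: hematology outranks gastroenterology in the adapted ordering,
# so a patient with both codes is attributed to hematology.
print(select_indication([
    ("Gastroenterology", "K92.2"),
    ("Internal medicine: hematology", "D61.9"),
]))
```

In the actual algorithm this ordering is combined with further rules, such as restricting candidates by diagnosis and procedure dates relative to the transfusion date.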
Agreement between initial algorithm and gold standard for diagnoses as observed in the sample (n = 234)
| Stratum (sample size) | % correct | Kappa |
|---|---|---|
| Cardiopulmonary surgery (n = 19) | 94.7 | 0.91 |
| Gynecology (n = 12) | 75.0 | 0.57 |
| Gastroenterology (n = 15) | 86.7 | 0.78 |
| Internal medicine (n = 61) | 44.3 | 0.15 |
| Surgery (n = 18) | 66.7 | 0.50 |
| Orthopedics (n = 16) | 75.0 | 0.58 |
| Other (n = 15) | 20.0 | –0.25 |
| Total specialisms (n = 156) | 60.2 | 0.37 |
| Data quality check (n = 37) | 100 | |
| Specialisms + data quality check (n = 193) | 67.9 | |
| No codes registered (n = 26) | 0 | |
| No gold standard (n = 15) | 0 | |
| Total (n = 234, including cases without diagnoses) | 56.0 | |
Notes: The raw % correct in the sample is shown by specialism and in total, with cases with only one diagnosis option (“data quality check”), cases without a gold standard, and cases without any diagnostic information reported as separate strata. Kappa provides a measure of chance-adjusted agreement for cases with at least two diagnosis options.
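The kappa values reported here are chance-adjusted agreement statistics. For illustration, unweighted Cohen's kappa can be computed from two paired label sequences (e.g., algorithm predictions versus gold-standard indications); this is a generic sketch, not the authors' evaluation code.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length label sequences."""
    n = len(rater_a)
    # Observed agreement: fraction of cases where both raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement under chance, from each rater's marginal frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[k] * freq_b.get(k, 0) for k in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Perfect agreement yields kappa = 1; agreement at chance level yields 0.
print(cohens_kappa(["x", "y", "x"], ["x", "y", "x"]))
```

A negative kappa, as in the "Other" stratum above, indicates agreement below what chance alone would produce.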
Agreement between initial algorithm and gold standard for procedures as observed in the sample (n = 234)
| Stratum (sample size) | % correct | Kappa, excluding cases without gold standard |
|---|---|---|
| Total specialisms (n = 17) | 82.4 | 0.71 |
| Data quality check (one procedure) (n = 47) | 100 | |
| Specialisms + data quality check (n = 64) | 95.3 | |
| No gold standard (n = 14) | 0 | |
| Total (n = 234, including cases without procedures) | 92.7 | |
Notes: The raw % correct in the sample is shown in total and separately for cases with only one procedure option (“data quality check”), cases without a gold standard, and cases without a procedure registered in the time selection. Kappa provides a measure of chance-adjusted agreement for cases with at least two procedure options.
Figure 2. Adapted algorithm rules, visualized by a decision tree.
Agreement between adapted algorithm and gold standard for the transfusion indication as observed in the sample (n = 234)
| Stratum (sample size) | % correct | Kappa |
|---|---|---|
| Cardiopulmonary surgery (n = 20) | 95.0 | 0.93 |
| Gynecology (n = 17) | 88.2 | 0.81 |
| Gastroenterology (n = 16) | 75.0 | 0.59 |
| Internal medicine (n = 60) | 73.3 | 0.59 |
| Surgery (n = 22) | 77.3 | 0.66 |
| Orthopedics (n = 20) | 85.0 | 0.78 |
| Other (n = 18) | 38.9 | 0.07 |
| Total specialisms (n = 173) | 75.7 | 0.63 |
| Data quality check (n = 18) | 100 | |
| Specialisms + data quality check (n = 191) | 78.0 | |
| No codes registered (n = 26) | 96.2 | |
| No gold standard (n = 17) | 0 | |
| Total (n = 234, including cases without codes registered) | 74.4 | |
Notes: The raw % correct in the sample is shown by specialism and in total, with cases with only one diagnosis option (“data quality check”), cases without a gold standard, and cases without any diagnostic information reported as separate strata. Kappa provides a measure of chance-adjusted agreement for cases with at least two options.