
Deep learning detects and visualizes bleeding events in electronic health records.

Jannik S Pedersen1, Martin S Laursen1, Thiusius Rajeeth Savarimuthu1, Rasmus Søgaard Hansen2, Anne Bryde Alnor2, Kristian Voss Bjerre2, Ina Mathilde Kjær3, Charlotte Gils2, Anne-Sofie Faarvang Thorsen2, Eline Sandvig Andersen3, Cathrine Brødsgaard Nielsen4, Lou-Ann Christensen Andersen5, Søren Andreas Just6, Pernille Just Vinholt2.   

Abstract

BACKGROUND: Bleeding is associated with a significantly increased morbidity and mortality. Bleeding events are often described in the unstructured text of electronic health records, which makes them difficult to identify by manual inspection.
OBJECTIVES: To develop a deep learning model that detects and visualizes bleeding events in electronic health records.
PATIENTS/METHODS: Three hundred electronic health records with International Classification of Diseases, Tenth Revision diagnosis codes for bleeding or leukemia were extracted. Each sentence in the electronic health record was annotated as positive or negative for bleeding. The annotated sentences were used to develop a deep learning model that detects bleeding at sentence and note level.
RESULTS: On a balanced test set of 1178 sentences, the best-performing deep learning model achieved a sensitivity of 0.90, specificity of 0.90, and negative predictive value of 0.90. On a test set consisting of 700 notes, of which 49 were positive for bleeding, the model achieved a note-level sensitivity of 1.00, specificity of 0.52, and negative predictive value of 1.00. By using a sentence-level model on a note level, the model can explain its predictions by visualizing the exact sentence in a note that contains information regarding bleeding. Moreover, we found that the model performed consistently well across different types of bleeding.
CONCLUSIONS: A deep learning model can be used to detect and visualize bleeding events in the free text of electronic health records. The deep learning model can thus facilitate systematic assessment of bleeding risk, and thereby optimize patient care and safety.
© 2021 The Authors. Research and Practice in Thrombosis and Haemostasis published by Wiley Periodicals LLC on behalf of International Society on Thrombosis and Haemostasis (ISTH).

Keywords:  decision support systems (clinical); deep learning; electronic health record; hemorrhage; international classification of diseases; machine learning

Year:  2021        PMID: 34013150      PMCID: PMC8114029          DOI: 10.1002/rth2.12505

Source DB:  PubMed          Journal:  Res Pract Thromb Haemost        ISSN: 2475-0379


Bleeding events are difficult to locate in electronic health records. A deep learning model detects bleeding events and visualizes them to clinicians. The model identified 90.0% of bleeding‐positive sentences and 89.6% of negative sentences. The model identified 100% of bleeding‐positive notes and 52.4% of negative notes.

INTRODUCTION

Bleeding occurs in 3.2% of medical patients within 14 days of admission, and approximately one‐third of the bleeding events are considered major events. Bleeding is associated with a significantly increased morbidity and mortality. Furthermore, previous clinically relevant bleeding events are a strong independent risk factor for future bleeding. Hence, knowledge about bleeding history is essential for providing optimal care to patients. In clinical practice, bleeding risk can be assessed using bleeding risk scores that include information about the patient's bleeding history, for example the HAS‐BLED (hypertension, abnormal renal and liver function, stroke, bleeding, labile international normalized ratio, elderly, drugs or alcohol) score, which is recommended for determining bleeding risk during anticoagulation treatment, or the IMPROVE (International Medical Prevention Registry on Venous Thromboembolism) score, which is recommended to guide prophylactic anticoagulant treatment for adult medical patients at admission.

Although crucial for patient care, bleeding risk is not always systematically evaluated. Studies have shown that a large proportion of hospitalized medical patients do not receive appropriate prophylactic anticoagulant treatment during admission. One reason is that the recommended scoring systems for assessing thrombosis and bleeding risk are not always used in clinical practice. This may be because risk scores are laborious to obtain: it requires manual work to go through the electronic health record (EHR) for relevant information, and it must be done at the time of admission, when health care professionals are busy handling the acute situation. In recent years, deep learning techniques have achieved state‐of‐the‐art performance on text classification benchmarks.
In medicine, various deep learning techniques have been used for text classification, including, but not limited to, recurrent neural networks (RNNs), convolutional neural networks (CNNs), and hybrid models combining more than one technique. These techniques have the potential for automatic detection of relevant clinical information in EHR text. This could facilitate the systematic assessment of bleeding risk, and thereby optimize patient care and safety as well as free up time for health care professionals. To date, only a few studies have used deep learning for finding bleeding events in EHRs. A general concern about deep learning is how the models reach their conclusions. The process often remains a black box, making users struggle to assess the basis for results or whether the model answers the questions for which clinicians want assistance. Therefore, there is a growing awareness that deep learning models need to be self‐explanatory. For text classification models, this means it is relevant to show the prediction‐supporting part of the text upon request. However, such approaches are lacking in bleeding detection models. Therefore, the purpose of this study was to establish a deep learning model that automatically detects bleeding events on sentence level and to visualize the bleeding events to the clinician in the unstructured EHR text.

METHODS

Population and data set

Data were acquired from the EHR system of the Region of Southern Denmark. To ensure inclusion of EHR notes with a high likelihood of bleeding events in the text, we extracted EHRs from 300 patients with International Classification of Diseases, Tenth Revision (ICD‐10) diagnosis codes for bleeding or leukemia. ICD‐10 codes for bleeding from the following sites were included: eyes, ear‐nose‐throat and respiratory tract, gastrointestinal, urogenital, internal organs, hematoma, and others. EHRs from patients with leukemia were included, as this patient group has a high incidence of bleeding (see Appendix S1 for ICD‐10 codes). Before annotation, we discarded administrative notes, as they would not contain any bleeding events.

Twelve physicians annotated the 300 EHRs, each EHR by one physician. To determine the agreement between the physicians' annotations, we calculated the kappa score on a sample of 1328 sentences from randomly chosen EHRs. The EHRs were annotated on sentence level with two labels:

Positive: sentences that indicate any kind of bleeding.

Misinterpretable negative: sentences deemed by the annotator to have a high risk of being misinterpreted by the deep learning model, for example, "The patient is not bleeding."

All sentences left after annotation of positive and misinterpretable negative sentences were considered negative. We annotated the misinterpretable negative sentences as a subcategory of the negative category so that many negative samples resembling positive samples could be fed to the model. This should help the model distinguish, for example, "the patient has a bleeding" from "the patient might have a bleeding." Data were split into balanced training (80%), validation (10%), and test (10%) sets using subsampling of the overrepresented class. The negative sentences consisted of 50% random negatives and 50% misinterpretable negatives.
The training set was used to train the models, the validation set was used to tune parameters of the models during training, and the test set was used to evaluate final performance. Sentences were tokenized using the Stanza sentence tokenizer. Samples were preprocessed by elimination of superfluous spaces, special characters, and duplicate sentences.
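A minimal sketch of this preprocessing is shown below. The function name is hypothetical, and sentence splitting itself (done with the Stanza tokenizer in the study) is assumed to have happened already; the sketch only covers the cleanup steps named above.

```python
import re

def preprocess_sentences(sentences):
    """Strip special characters, collapse superfluous spaces, and drop
    duplicate sentences while preserving order."""
    seen = set()
    cleaned = []
    for s in sentences:
        s = re.sub(r"[^\w\s.,:;%-]", " ", s)  # strip special characters
        s = re.sub(r"\s+", " ", s).strip()    # collapse superfluous spaces
        if s and s.lower() not in seen:       # drop duplicates
            seen.add(s.lower())
            cleaned.append(s)
    return cleaned
```

For example, two whitespace variants of the same sentence collapse into one entry, so the model never sees the duplicate.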

Models for detection of bleeding events on sentence level

Rule‐based classifier

A rule‐based classifier was developed to compare the deep learning models with a traditional approach to text classification. The rule‐based model was constructed by defining a set of bleeding‐indicating words and modifiers using corpus statistics and manual inspection of the data. Corpus statistics were used to calculate the most frequent words in bleeding‐indicating sentences. A bleeding‐indicating word could for example be bleeding, and a modifier could, for example, be no (bleeding). Next, by evaluating performance on the training data, a window size was defined where a modifier could modify a bleeding‐indicating word. For example, no would modify bleeding in “no sign of bleeding” for a window size of 3. The model uses the indicating and modifying words and the window size to create rules for classifying individual sentences. The rules were iteratively updated during training to improve performance.

Deep learning models

Three different deep learning models were developed: a CNN model, an RNN model, and a hybrid model combining an RNN and a CNN. In deep learning, a model transforms the input to a classification via many layers of processing steps that are learned from labeled data during training. The input to the models is the individual words from each sentence represented as word embeddings. Word embeddings are numerical vector representations of words that encode their meaning, with similar words having similar vectors. For this study, 100‐dimensional GloVe word embeddings pretrained on 323 122 Danish EHRs were used.
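As an illustration of how word embeddings feed such models, the sketch below maps tokens to vectors and pads each sentence to a fixed length. The toy 4‐dimensional vectors stand in for the 100‐dimensional Danish GloVe embeddings used in the study; all names and values here are hypothetical.

```python
import numpy as np

# Toy stand-in for the 100-dimensional Danish GloVe vectors used in the paper.
embeddings = {
    "patient": np.array([0.1, 0.3, -0.2, 0.5]),
    "bleeding": np.array([0.7, -0.1, 0.4, 0.0]),
}
DIM = 4
unk = np.zeros(DIM)  # out-of-vocabulary words map to a zero vector here

def embed_sentence(tokens, max_len=6):
    """Look up each token's embedding and pad to a fixed length, yielding
    the (max_len, DIM) input matrix a CNN/RNN classification layer expects."""
    vecs = [embeddings.get(t.lower(), unk) for t in tokens[:max_len]]
    vecs += [np.zeros(DIM)] * (max_len - len(vecs))
    return np.stack(vecs)
```

The resulting matrix is what the convolutional or recurrent layers consume, one row per word.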

Evaluation of internal validity

We performed an internal sensitivity analysis on the best‐performing model to evaluate if it performs equally well on the seven patient groups included in the study.

Bleeding detection on note level

Because each note may contain multiple positive sentences that often describe the same bleeding event, we calculated the performance of the best model on a note level by classifying all sentences of each note. A positive note is defined as a note that includes at least one bleeding‐positive sentence. The test was performed on seven randomly selected EHRs from patients in the leukemia group not included in the original data set. A total of 100 notes per EHR were collected.
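The note-level aggregation described above can be sketched as follows, assuming a hypothetical `sentence_model` callable that returns a bleeding probability per sentence (this is an illustration of the rule "a note is positive if at least one sentence is positive", not the study's code):

```python
def classify_note(sentences, sentence_model, threshold=0.5):
    """A note is positive if at least one sentence is predicted positive.
    Returns the note label and the indices of the sentences that fired,
    which is what makes the prediction explainable."""
    scores = [sentence_model(s) for s in sentences]
    positive_idx = [i for i, p in enumerate(scores) if p >= threshold]
    return bool(positive_idx), positive_idx
```

Keeping the indices of the positive sentences is what later allows the exact supporting text to be highlighted for the clinician.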

Visualization of bleeding events in EHR text

Finally, we present how the bleeding‐positive output of the model can be presented to the physician as a visualization of complete notes with the bleeding events highlighted, helping the physician understand the prediction and decreasing the time needed to find a bleeding event in an EHR.
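One simple way to render such a visualization is to wrap the flagged sentences in highlight markers; the markers and function name below are assumptions for illustration, not the paper's implementation:

```python
def highlight_note(sentences, positive_idx, start="**", end="**"):
    """Render a note with model-flagged sentences wrapped in markers,
    so the physician sees exactly which text supports the prediction."""
    flagged = set(positive_idx)
    return " ".join(
        f"{start}{s}{end}" if i in flagged else s
        for i, s in enumerate(sentences)
    )
```

In a clinical front end the markers would typically be replaced by colored highlighting rather than literal characters.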

Statistical analysis

We calculated accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and a harmonic mean of sensitivity and positive predictive value (F1) score. For each model, we plotted receiver operating characteristic curves and calculated area under the receiver operating characteristic curve (AUC). The models were developed in Python 3.6 (Python Software Foundation, Wilmington, DE, USA) using the Tensorflow 2.0 framework.
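For reference, these metrics can be computed directly from confusion-matrix counts; the sketch below is a plain-Python illustration, not the study's code:

```python
def metrics(tp, fp, tn, fn):
    """Classification metrics from confusion-matrix counts."""
    sens = tp / (tp + fn)                   # sensitivity (recall)
    spec = tn / (tn + fp)                   # specificity
    ppv = tp / (tp + fp)                    # positive predictive value
    npv = tn / (tn + fn)                    # negative predictive value
    acc = (tp + tn) / (tp + fp + tn + fn)   # accuracy
    f1 = 2 * sens * ppv / (sens + ppv)      # harmonic mean of sens. and PPV
    return {"accuracy": acc, "sensitivity": sens, "specificity": spec,
            "ppv": ppv, "npv": npv, "f1": f1}
```

As a consistency check, the note-level results reported later (sensitivity 1.00, specificity 0.52, PPV 0.14, F1 0.24 on 700 notes with 49 positives) correspond to roughly TP = 49, FN = 0, FP = 312, TN = 339.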

RESULTS

The 300 extracted EHRs contained 88 477 notes. Of those, we filtered out 43 602 as administrative notes. The remaining 44 875 EHR notes were annotated on a sentence level. In total, 6111 sentences were annotated as positive and 5630 as misinterpretable negative. Overall, 3973 notes contained bleeding events and there were 1 to 19 positive sentences per note. The EHRs contained 0 to 108 notes with bleeding per patient. Among the different patient groups, “gastrointestinal bleeding” had the highest average number of positive sentences per EHR (n = 25) while “Hematomas and other bleedings” had the lowest (n = 8; see Table 1). Although the EHRs were extracted on the basis of ICD‐10 codes for bleeding, 13 EHRs did not contain any information about bleeding (“internal bleedings,” n = 2; “eyes,” n = 5; and “hematomas and others,” n = 6). Another 5 EHRs from leukemia patients did not contain bleeding events.
TABLE 1

Patient group distribution of extracted EHRs

Patient group | Number of EHRs | Number of EHR notes | Number of positive EHR notes | Number of positive sentences | Average number of positive sentences per EHR
Eye bleeding | 65 | 7781 | 771 | 1546 | 24
Ear‐nose‐throat and respiratory tract bleeding | 23 | 3702 | 372 | 532 | 23
Gastrointestinal bleeding | 51 | 6968 | 1055 | 1250 | 25
Urogenital bleeding | 45 | 4409 | 499 | 855 | 19
Internal organ bleeding | 45 | 6078 | 753 | 1082 | 24
Hematoma and other bleeding | 38 | 5597 | 229 | 319 | 8
Leukemia bleeding | 33 | 10 340 | 294 | 527 | 16
Total | 300 | 44 875 | 3973 | 6111 |

Abbreviation: EHR, electronic health record.

When assessing agreement among the 12 physicians, they achieved a kappa score of 0.75 on a sample of 1328 sentences from randomly chosen EHRs. This is considered a substantial agreement.

Establishing models

For development of models, we removed duplicate sentences (n = 218), resulting in 5893 positive samples. To create a balanced data set, we randomly subsampled 2947 misinterpretable negative sentences. These were added to 2946 randomly extracted negative samples to give 5893 total negative samples. Together with the 5893 positive samples, they constitute the balanced data set of 11 786 samples. The balanced data set was divided into training (n = 9430), validation (n = 1178), and test sets (n = 1178). The distribution is seen in Table S1.
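The balancing step can be sketched as follows (hypothetical function; note that with the study's count of 5893 positives, the split below yields exactly 2947 misinterpretable and 2946 random negatives, matching the numbers above):

```python
import random

def balance(positives, negatives, misinterpretable, seed=42):
    """Subsample the overrepresented negative class so both classes are
    equal in size, with the negative half split ~50/50 between random
    negatives and 'misinterpretable' negatives."""
    random.seed(seed)
    n = len(positives)
    neg = (random.sample(misinterpretable, n // 2 + n % 2) +
           random.sample(negatives, n // 2))
    return positives, neg
```

The balanced pool is then divided into the 80/10/10 training, validation, and test splits.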

Rule‐based results

The bleeding‐indicating and modifying words were aggregated into a stem to capture different conjugations; for example, the Danish word for hemorrhage (hæmoragi) was aggregated to hæm and defined as a bleeding‐indicating word. The developed rule‐based classification model searched each sentence for a positive word. If no positive words were found, the sentence was classified as negative. If a positive word was found, the model searched its context words in a window of size 4 to look for negative modifiers. If a word from the positive list was not accompanied by a negative modifier, the sample was classified as positive. If all positive words were accompanied by negative modifiers, the sample was classified as negative.
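A minimal sketch of this rule logic is given below. The stem hæm and the window size of 4 come from the text; the remaining stems and modifiers are hypothetical examples, and the real word lists were built from corpus statistics:

```python
BLEEDING_STEMS = ["blød", "hæm", "melæna"]   # 'hæm' per the paper; rest hypothetical
NEGATION_MODIFIERS = ["ikke", "ingen", "uden"]  # hypothetical modifier list
WINDOW = 4

def rule_classify(sentence):
    """Positive iff some bleeding-indicating stem occurs without a negation
    modifier within the preceding WINDOW tokens; otherwise negative."""
    tokens = sentence.lower().split()
    for i, tok in enumerate(tokens):
        if any(tok.startswith(stem) for stem in BLEEDING_STEMS):
            context = tokens[max(0, i - WINDOW):i]
            if not any(m in context for m in NEGATION_MODIFIERS):
                return "positive"
    return "negative"
```

So "ingen tegn på blødning" ("no sign of bleeding") is classified negative because the stem blød is preceded by the modifier ingen within the window.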

Deep learning

The developed CNN consisted of convolutional layers that extract information from neighboring words. The extracted information was used by a linear classification layer that classifies the sentence as either bleeding present or bleeding absent. Our RNN model was based on the Bidirectional Gated Recurrent Unit (BiGRU). The model consisted of a single BiGRU layer that extracts information from the input words by processing them sequentially. The extracted information was used by a linear classification layer that classifies the sentence. The hybrid model used the output from both a CNN and an RNN to classify the sentences. This model was developed to exploit the information extracted from both a CNN and RNN in a final linear classification layer. A more thorough description of the models can be seen in Appendix S2. For each deep learning model, the seven versions of the model that performed best on the validation set were selected for an ensemble classifier. The ensemble classifier averages the predictions of each model to a final prediction.
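The ensemble step is a simple average of the member models' predicted probabilities; a sketch (function name assumed):

```python
import numpy as np

def ensemble_predict(model_probs, threshold=0.5):
    """Average the predicted bleeding probabilities of the selected model
    versions (seven in the study) and threshold the mean for the final label.
    `model_probs` has shape (n_models, n_sentences)."""
    mean = np.mean(np.asarray(model_probs), axis=0)
    return mean, mean >= threshold
```

Averaging smooths out disagreements between individual model versions, which typically yields slightly more stable predictions than any single member.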

Performance of models for bleeding detection in EHRs on sentence level

Table 2 shows the performance of the rule‐based and deep learning classifiers on the test set.
TABLE 2

Performance of models for detecting bleeding in electronic health records on sentence level

Metric | Rule‐based | CNN | RNN | Hybrid
Accuracy | 0.80 | 0.89 | 0.89 | 0.90
Sensitivity | 0.86 | 0.90 | 0.89 | 0.90
Specificity | 0.72 | 0.89 | 0.88 | 0.90
Positive predictive value | 0.76 | 0.89 | 0.88 | 0.90
Negative predictive value | 0.84 | 0.90 | 0.90 | 0.90
F1 score | 0.81 | 0.89 | 0.89 | 0.90
AUC | 0.79 | 0.89 | 0.89 | 0.90

Abbreviations: AUC, area under the receiver operating characteristic curve; CNN, convolutional neural network; F1, harmonic mean of sensitivity and positive predictive value; RNN, recurrent neural network.

Figure 1 shows the ROC curves of the hybrid, CNN, RNN, and rule‐based models with their corresponding AUC.
FIGURE 1

ROC curves and AUC for all models on sentence level. (A) Hybrid model. (B) CNN model. (C) RNN model. (D) Rule‐based model. AUC, area under the curve; CNN, convolutional neural network; RNN, recurrent neural network; ROC, receiver operating characteristic

Overall, the performance of the hybrid model was the best. It achieved an F1 score of 0.90, a sensitivity of 0.90, a specificity of 0.90, a PPV of 0.90, and an NPV of 0.90. The CNN model achieved an equally high sensitivity of 0.90 but performed slightly worse on the remaining metrics, while the RNN performed no better than, and on several metrics slightly worse than, both the hybrid and CNN models. The rule‐based model performed worse than all deep learning models.

Test of internal validity

We evaluated the hybrid model’s performance within each of the different patient groups on sentence level (Figure 2). It was calculated on the full data set including training, validation, and test sets. The model shows an almost equal performance for all patient groups, highest for “eyes” at 0.98, and lowest for “leukemia” at 0.95.
FIGURE 2

Internal validity for detection of bleeding on note level for the hybrid model


Performance of the hybrid model on note level

We further tested the performance of the hybrid model on a note level by classifying all sentences and aggregating the result to the full note. The seven EHRs contained 700 notes, of which 49 were positive. The hybrid model achieved a sensitivity of 1.00, a specificity of 0.52, a PPV of 0.14, an NPV of 1.00, an F1 score of 0.24, and an AUC of 0.76. In this study, we chose to use the sentence‐level model on a note level because it makes the model capable of explaining its predictions. The model outputs all notes with predicted bleeding events, highlighting the sentence(s) found to indicate bleeding (representative example translated to English in Figure 3, original in Figure S1).
FIGURE 3

Example of the visualization of bleeding events in an electronic health record note. To keep the original format, the text is translated directly from Danish to English, which results in incorrect sentence structures


DISCUSSION

We present a deep learning model that automatically detects bleeding events in EHRs with a sensitivity of 0.90 on sentence level and 1.00 on note level. This enables clinicians to receive automatic visualization of EHR notes with bleeding events. The hybrid model, combining an RNN and a CNN, performed best for bleeding detection on sentence level (F1 = 0.90).

In congruence with our study, others have found that machine learning can be used for finding bleeding in EHRs. Rumeng et al. used a deep learning model to detect bleeding events in sentences of EHRs (F1 = 0.94). Their study comprised a data set of 2902 sentences extracted from 878 notes from patients with cardiovascular events. Taggart et al. detected bleeding events at note level with a rule‐based approach (F1 = 0.74) and a CNN (F1 = 0.40) on a test set of 660 notes; the rule‐based model was trained on 990 notes and the CNN on 450 notes. In contrast to our study, Taggart et al. found that their rule‐based approach performed better than their CNN; this may, however, be due to the limited amount of training data for the CNN, a well‐known limitation in machine learning. Rumeng et al. also used a small data set, and moreover, the data were exclusively from patients with cardiovascular events. In those studies, the data sets might therefore not be representative of bleeding at all sites, and the models might not generalize to other patient groups. The model presented in our study used a data set of 11 786 sentences extracted from 44 875 notes representative of multiple types of bleeding. In the internal validity test, we found that our model generalizes well to different types of bleeding.

Lee et al. used a rule‐based (sensitivity = 0.83), machine learning (sensitivity = 1.00), and score function (sensitivity = 0.98) approach to find clopidogrel‐induced bleeding in EHRs. They defined bleeding events as the presence of specific ICD, Ninth Revision (ICD‐9) codes, specific keywords, and unique identifiers of the Unified Medical Language System related to bleeding. The bleeding definition was thus simplified to specific words, which is a limitation for use in clinical practice, as bleeding can be reported with numerous different phrases in EHRs; in agreement, we found thousands of different sentences that corresponded to bleeding according to the physicians involved. Moreover, the construction of keyword and rule lists requires manual effort that is difficult to scale because of the unstructured and noisy nature of clinical notes (eg, grammatical ambiguity, synonyms, term abbreviation, misspelling, or negation of concepts). Additionally, validation of ICD‐9 and ICD‐10 diagnosis codes has shown that they are not always accurate. A major concern is that diagnosis coding requires manual collection of the patient history to choose the relevant codes, and that bleeding events that are not a major contributing cause of admittance are not registered with a code for bleeding.

The present study provides an attractive alternative by leveraging the information‐rich yet unstructured text data in clinical notes in EHRs, which are currently often omitted when developing models. We established a deep learning model that points out relevant information in the EHRs on sentence level. The advantage of a sentence‐level classifier is that it enables the model to explain its predictions on note level by showing the prediction‐supporting part of the text. We were therefore able to visualize where in the notes the model has detected a bleeding event, pointing out the relevant sentence(s) in the long unstructured EHR text for the physician. A fast overview of patient bleeding history facilitates clinical decision making. Accordingly, studies have shown that clinical practice may improve when decision support systems give automatic recommendations in which the decision is interpretable and understandable for the physician. An automatic summary of bleeding history may be valuable in clinical practice to diagnose, monitor disease, or address treatment options. The presented approach can be extended to include other symptoms and findings. Information regarding specific past events, for example bleeding during medical procedures, is important when planning a new procedure. The information may thus have an impact on patient safety because, for example, procedure and operation bleeding risk and medication side effects can be monitored effectively. It may also prove useful for health care statistics and resource management. Finally, the approach may save time, because a focused review of an EHR to find all past bleeding events is very time consuming, providing more time for direct patient care.

To summarize, compared with related studies, the current study used the largest annotated corpus, providing an advantage to the deep learning model. This study also included many different types of bleeding and evaluated model accuracy by type of bleeding. In contrast to Taggart et al., we found that a deep learning approach works better than a rule‐based approach. We additionally show a simple approach to visualizing the bleeding‐indicating sentences to physicians, allowing for interpretation of the deep learning model.

Limitations

The rule‐based algorithm might have been further optimized by using more specific search terms, including more words and their common misspellings, instead of more global stems to group words; for example, the Danish stem hæm may match words with meanings that do not imply bleeding. Another limitation is that the study included only EHRs with an ICD‐10 code for bleeding or leukemia, which does not capture all EHRs with bleeding events. Additionally, we did not validate the algorithm on an independent cohort. Of note, we found a high sensitivity for bleeding in EHRs from patients with leukemia, a patient group experiencing bleeding from different organ systems; this suggests that the model also performs well on EHRs without an ICD‐10 code for bleeding. It is crucial that the text used for training the model is representative of any way that bleeding can be reported in the EHR. It is a limitation of the study that we cannot guarantee this, and it would be beneficial to include a larger and more general data set. Nevertheless, this approach clearly showed that it is feasible to automatically detect and visualize bleeding events in EHRs. Future research should focus on developing a model on data including even more bleeding types and on optimizing the strategy, including differentiation between clinically relevant versus trivial bleeding and surgical versus medical bleeding.

CONCLUSION

We have developed a deep learning model that identifies bleeding events in EHRs with a sensitivity of 0.90 on sentence level and 1.00 on note level. Further, we have shown how bleeding‐positive notes can be visualized to physicians, making the model easily interpretable to the clinician.

AUTHOR CONTRIBUTIONS

MSL and JSP contributed equally in the analysis of data and production of results. PJV, TRS, and SAJ contributed to the design and conception of the study. PJV, RSH, ABA, KVB, IMK, CG, AFT, ESA, CBN, and LCA annotated the electronic health records.

RELATIONSHIP DISCLOSURE

The authors declare no conflicts of interest.

SUPPLEMENTARY MATERIAL

Supplementary material (Figures S1–S4) is available as additional data files.
REFERENCES (24 in total)

1.  Factors at admission associated with bleeding risk in medical patients: findings from the IMPROVE investigators.

Authors:  Hervé Decousus; Victor F Tapson; Jean-François Bergmann; Beng H Chong; James B Froehlich; Ajay K Kakkar; Geno J Merli; Manuel Monreal; Mashio Nakamura; Ricardo Pavanello; Mario Pini; Franco Piovella; Frederick A Spencer; Alex C Spyropoulos; Alexander G G Turpie; Rainer B Zotz; Gordon Fitzgerald; Frederick A Anderson
Journal:  Chest       Date:  2010-05-07       Impact factor: 9.410

2.  A novel user-friendly score (HAS-BLED) to assess 1-year risk of major bleeding in patients with atrial fibrillation: the Euro Heart Survey.

Authors:  Ron Pisters; Deirdre A Lane; Robby Nieuwlaat; Cees B de Vos; Harry J G M Crijns; Gregory Y H Lip
Journal:  Chest       Date:  2010-03-18       Impact factor: 9.410

3.  Medical Text Classification Using Convolutional Neural Networks.

Authors:  Mark Hughes; Irene Li; Spyros Kotoulas; Toyotaro Suzumura
Journal:  Stud Health Technol Inform       Date:  2017

4.  ISTH/SSC bleeding assessment tool: a standardized questionnaire and a proposal for a new bleeding score for inherited bleeding disorders.

Authors:  F Rodeghiero; A Tosetto; T Abshire; D M Arnold; B Coller; P James; C Neunert; D Lillicrap
Journal:  J Thromb Haemost       Date:  2010-09       Impact factor: 5.824

5.  Should Health Care Demand Interpretable Artificial Intelligence or Accept "Black Box" Medicine?

Authors:  Fei Wang; Rainu Kaushal; Dhruv Khullar
Journal:  Ann Intern Med       Date:  2019-12-17       Impact factor: 25.391

6.  Bleeding, mortality, and antiplatelet therapy: results from the Clopidogrel for High Atherothrombotic Risk and Ischemic Stabilization, Management, and Avoidance (CHARISMA) trial.

Authors:  Jeffrey S Berger; Deepak L Bhatt; P Gabriel Steg; Steven R Steinhubl; Gilles Montalescot; Mingyuan Shao; Werner Hacke; Keith A Fox; Peter B Berger; Eric J Topol; A Michael Lincoff
Journal:  Am Heart J       Date:  2011-07       Impact factor: 4.749

7.  Appropriate thromboprophylaxis in hospitalized cancer patients.

Authors:  Alpesh Amin; Stephen Stemkowski; Jay Lin; Guiping Yang
Journal:  Clin Adv Hematol Oncol       Date:  2008-12

8.  A guide to deep learning in healthcare. (Review)

Authors:  Andre Esteva; Alexandre Robicquet; Bharath Ramsundar; Volodymyr Kuleshov; Mark DePristo; Katherine Chou; Claire Cui; Greg Corrado; Sebastian Thrun; Jeff Dean
Journal:  Nat Med       Date:  2019-01-07       Impact factor: 53.440

9.  Validation study in four health-care databases: upper gastrointestinal bleeding misclassification affects precision but not magnitude of drug-related upper gastrointestinal bleeding risk.

Authors:  Vera E Valkhoff; Preciosa M Coloma; Gwen M C Masclee; Rosa Gini; Francesco Innocenti; Francesco Lapi; Mariam Molokhia; Mees Mosseveld; Malene Schou Nielsson; Martijn Schuemie; Frantz Thiessard; Johan van der Lei; Miriam C J M Sturkenboom; Gianluca Trifirò
Journal:  J Clin Epidemiol       Date:  2014-05-01       Impact factor: 6.437

10.  Detection of Bleeding Events in Electronic Health Record Notes Using Convolutional Neural Network Models Enhanced With Recurrent Neural Network Autoencoders: Deep Learning Approach.

Authors:  Rumeng Li; Baotian Hu; Feifan Liu; Weisong Liu; Francesca Cunningham; David D McManus; Hong Yu
Journal:  JMIR Med Inform       Date:  2019-02-08
