| Literature DB >> 33216746 |
Auss Abbood, Alexander Ullrich, Rüdiger Busche, Stéphane Ghozzi.
Abstract
According to the World Health Organization (WHO), around 60% of all outbreaks are detected using informal sources. In many public health institutes, including the WHO and the Robert Koch Institute (RKI), dedicated groups of public health agents sift through numerous articles and newsletters to detect relevant events. This media screening is one important part of event-based surveillance (EBS). Reading the articles, discussing their relevance, and putting key information into a database is a time-consuming process. To support EBS, but also to gain insights into what makes an article and the event it describes relevant, we developed a natural language processing framework for automated information extraction and relevance scoring. First, we scraped sources relevant for EBS as done at the RKI (WHO Disease Outbreak News and ProMED) and automatically extracted the articles' key data: disease, country, date, and confirmed-case count. For this, we performed named entity recognition in two steps: EpiTator, an open-source epidemiological annotation tool, suggested candidates for each of these attributes; we then extracted the key country and disease with a most-frequent-entity heuristic, which gave good results, and trained a naive Bayes classifier to find the key date and confirmed-case count, using the RKI's EBS database for labels, which performed modestly. Then, for relevance scoring, we defined two classes to which any article might belong: an article is relevant if it is in the EBS database and irrelevant otherwise. We compared the performance of different classifiers using bag-of-words, document embeddings, and word embeddings. The best classifier, a logistic regression, achieved a sensitivity of 0.82 and an index balanced accuracy of 0.61. Finally, we integrated these functionalities into a web application called EventEpi, in which relevant sources are automatically analyzed and put into a database. The user can also provide any URL or text, which is then analyzed in the same way and added to the database.
Each of these steps could be improved, in particular with larger labeled datasets and fine-tuning of the learning algorithms. The overall framework, however, already works well and can be used in production, promising improvements in EBS. The source code and data are publicly available under open licenses.
Year: 2020 PMID: 33216746 PMCID: PMC7717563 DOI: 10.1371/journal.pcbi.1008277
Source DB: PubMed Journal: PLoS Comput Biol ISSN: 1553-734X Impact factor: 4.475
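The most-frequent-entity heuristic used for the key country and disease (see abstract) can be sketched as follows; the candidate lists here are hypothetical stand-ins for EpiTator's annotations:

```python
from collections import Counter

def most_frequent_entity(candidates):
    """Pick the entity that the annotator suggested most often."""
    if not candidates:
        return None
    return Counter(candidates).most_common(1)[0][0]

# Hypothetical candidate lists, standing in for EpiTator's output
countries = ["Nigeria", "Germany", "Nigeria", "Nigeria"]
diseases = ["Lassa fever", "Lassa fever", "Ebola"]

print(most_frequent_entity(countries))  # → Nigeria
print(most_frequent_entity(diseases))   # → Lassa fever
```

The idea is simply that the country or disease an outbreak article is about tends to be the one it mentions most.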
Fig 1. An illustration of the EventEpi architecture.
The orange part of the plot describes the relevance scoring of epidemiological texts, which are vectorized with word embeddings (created with word2vec), document embeddings (the mean over word embeddings), and bag-of-words, and fed to different classification algorithms (support vector machine (SVM), k-nearest neighbor (kNN), and logistic regression (LR), among others). The part of EventEpi that extracts the key information is colored in blue. Key information extraction is trained on sentences containing named entities, using a naive Bayes classifier or the most-frequent approach applied to the output of EpiTator, an epidemiological annotation software. The workflow ends with the results being saved into the EventEpi database, which is embedded into EventEpi's web application.
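The document-embedding branch described above can be sketched as follows: each text is represented by the mean of its word vectors and fed to a logistic regression. The toy word vectors and labels below are made up for illustration; the actual system uses word2vec embeddings trained on real articles:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy 2-dimensional word vectors standing in for trained word2vec embeddings
word_vectors = {
    "outbreak": np.array([1.0, 0.2]), "cases":  np.array([0.9, 0.1]),
    "reported": np.array([0.8, 0.3]), "recipe": np.array([-0.9, 0.5]),
    "travel":   np.array([-0.7, 0.4]),
}

def doc_embedding(text, dim=2):
    """Document embedding: mean over the word embeddings of known tokens."""
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

texts = ["outbreak cases reported", "cases reported",
         "recipe travel", "travel recipe travel"]
labels = [1, 1, 0, 0]  # 1 = relevant, 0 = irrelevant
X = np.stack([doc_embedding(t) for t in texts])
clf = LogisticRegression().fit(X, labels)
print(clf.predict([doc_embedding("outbreak reported")]))  # → [1]
```

Averaging word vectors discards word order but yields a fixed-length representation that any standard classifier can consume.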
Table 1. Evaluation of the key date extraction.
| Pre. | Sen. | Spec. | F1 | IBA | n (key) | n (not key) |
|---|---|---|---|---|---|---|
| 0.78 | 0.65 | 0.54 | | | 27 | 54 |
| 0.67 | | | | | 27 | 54 |
For each classifier and label, the precision (Pre.), sensitivity (Sen.), specificity (Spec.), F1 score, index balanced accuracy (IBA) with α = 0.1, and the sample sizes of both classes of the test set, key and not key, are given. The best values for each score are highlighted in bold.
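The naive Bayes step behind this table can be illustrated as follows: sentences containing a candidate date entity are classified as carrying the key date or not. The toy sentences and labels below are hypothetical; the actual features and training labels come from the RKI's EBS database:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical sentences surrounding candidate date entities ("DATE")
sentences = [
    "As of DATE, 120 confirmed cases have been reported.",
    "On DATE the ministry of health reported new confirmed cases.",
    "The last outbreak in the region ended on DATE.",
    "A vaccination campaign was launched on DATE years earlier.",
]
labels = ["key", "key", "not key", "not key"]

# Bag-of-words features fed into a multinomial naive Bayes classifier
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(sentences, labels)
print(model.predict(["As of DATE, 85 confirmed cases have been reported."]))  # → ['key']
```

The same scheme applies to the confirmed-case count: each candidate entity is scored via the sentence it appears in, and the highest-scoring candidate is taken as the key value.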
Table 2. Evaluation of the key confirmed-case count extraction.
| Pre. | Sen. | Spec. | F1 | IBA | n (key) | n (not key) |
|---|---|---|---|---|---|---|
| 0.45 | 0.40 | | | | 89 | 874 |
| 0.20 | 0.67 | 0.32 | | | 89 | 874 |
Definitions and parameters are the same as in Table 1. The best values for each score are highlighted in bold.
Table 3. The performance evaluation of the relevance classification.
| Pre. | Sen. | Spec. | F1 | IBA | n (relevant) | n (irrelevant) |
|---|---|---|---|---|---|---|
| 0.42 | 0.37 | | | | 38 | 771 |
| 0.19 | 0.61 | 0.87 | | 0.51 | 38 | 771 |
| 0.14 | 0.75 | 0.24 | | | 38 | 771 |
| 0.12 | 0.63 | 0.77 | 0.20 | 0.48 | 38 | 771 |
| 0.13 | 0.79 | 0.74 | 0.22 | 0.59 | 38 | 771 |
| 0.42 | 0.37 | | | | 38 | 771 |
| 0.14 | 0.55 | 0.78 | 0.23 | 0.42 | 42 | 606 |
For each classifier and label, the precision (Pre.), sensitivity (Sen.), specificity (Spec.), F1 score, index balanced accuracy (IBA) with α = 0.1, and the sample sizes of both classes of the test set, relevant and irrelevant articles, are given. The best values for each score are highlighted in bold.
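The index balanced accuracy used in these tables weights the product of sensitivity and specificity by their difference (the dominance): IBA_α = (1 + α·(Sen − Spec))·Sen·Spec. A minimal sketch, which reproduces the IBA values of the complete rows of the relevance table:

```python
def index_balanced_accuracy(sensitivity, specificity, alpha=0.1):
    """IBA_alpha = (1 + alpha * dominance) * Sen * Spec,
    with dominance = Sen - Spec and Sen * Spec the squared geometric mean."""
    dominance = sensitivity - specificity
    return (1 + alpha * dominance) * sensitivity * specificity

# Matches the complete rows of the relevance table above
print(round(index_balanced_accuracy(0.79, 0.74), 2))  # → 0.59
print(round(index_balanced_accuracy(0.63, 0.77), 2))  # → 0.48
```

Unlike plain balanced accuracy, the IBA slightly rewards classifiers whose sensitivity exceeds their specificity, which suits the goal of not missing relevant outbreak reports.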
Fig 2. A layer-wise relevance propagation of the CNN for relevance classification.
This text was correctly classified as relevant. Words highlighted in red contributed to the classification of the article as relevant, while blue words spoke against it. The color saturation indicates how strongly each individual word contributed to the classification.
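Layer-wise relevance propagation redistributes the classifier's output score backwards through the network onto the input features, here words. For a single dense layer, the ε-rule can be sketched as follows; this is a generic illustration with made-up activations and weights, not the paper's CNN:

```python
import numpy as np

def lrp_dense_epsilon(a, W, R_out, eps=1e-9):
    """Epsilon-rule LRP for one dense layer:
    R_in[j] = sum_k a[j] * W[j, k] / (z[k] + eps * sign(z[k])) * R_out[k]."""
    z = a @ W                        # pre-activations of the layer
    denom = z + eps * np.sign(z)     # stabilized denominators
    return (W * a[:, None]) @ (R_out / denom)

# Made-up activations and weights for a 3-input, 2-output layer
a = np.array([1.0, 2.0, 0.5])
W = np.array([[0.2, -0.1],
              [0.4,  0.3],
              [-0.5, 0.1]])
R_out = a @ W                        # start with the output scores as relevance
R_in = lrp_dense_epsilon(a, W, R_out)
print(R_in)                          # per-input relevance, sums to R_out.sum()
```

Applied layer by layer down to the embedding inputs, the resulting per-word relevances are what the figure visualizes: positive (red) and negative (blue) contributions that approximately conserve the total output score.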
Fig 3. A screenshot of the EventEpi web application.
The top input text field receives a URL. The URL is summarized when the SUMMARIZE button is pushed, and the result of this summary is entered into the datatable, which is displayed as a table. The buttons Get WHO DONs and Get Promed Articles automatically scrape the latest articles from both platforms that are not yet in the datatable. Furthermore, the user can search for words in the search text field and download the datatable as CSV, Excel, or PDF.