Sune Pletscher-Frankild, Lars Juhl Jensen.
Abstract
Most BioCreative tasks to date have focused on assessing the quality of text-mining annotations in terms of precision and recall. Interoperability, speed, and stability are, however, other important factors to consider for practical applications of text mining. For about a decade, we have run named entity recognition (NER) web services, which are designed to be efficient, implemented using a multi-threaded queueing system to robustly handle many simultaneous requests, and hosted at a supercomputer facility. To participate in this new task, we extended the existing NER tagging service with support for the BeCalm API. The tagger suffered no downtime during the challenge and, as in earlier tests, proved to be highly efficient, consistently processing requests of 5000 abstracts in less than half a minute. In fact, the majority of this time was spent not on the NER task but rather on retrieving the document texts from the challenge servers. The latter was found to be the main bottleneck even when hosting a copy of the tagging service on a Raspberry Pi 3, showing that local document storage or caching would be desirable features to include in future revisions of the API standard.
Keywords: Named entity recognition; Text mining; Web services
Year: 2019 PMID: 30850898 PMCID: PMC6419787 DOI: 10.1186/s13321-019-0344-9
Source DB: PubMed Journal: J Cheminform ISSN: 1758-2946 Impact factor: 5.514
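The abstract describes the serving architecture only at a high level. Below is a minimal sketch of the multi-threaded queueing pattern it refers to: incoming tagging requests are enqueued and consumed by a fixed pool of worker threads, so bursts of simultaneous requests degrade gracefully instead of exhausting the server. The `fetch_document()` and `run_ner()` helpers are hypothetical stand-ins; this is an illustration of the pattern, not the authors' implementation.

```python
import queue
import threading

N_WORKERS = 8
# Each job is (document ids to tag, queue on which to return the result).
jobs: "queue.Queue[tuple[list[str], queue.Queue]]" = queue.Queue()

def fetch_document(doc_id: str) -> str:
    """Hypothetical stand-in for retrieving a document text by identifier."""
    return f"text of {doc_id}"

def run_ner(text: str) -> list[tuple[int, int, str]]:
    """Hypothetical stand-in for the NER step; returns (start, end, type) mentions."""
    return []

def worker() -> None:
    while True:
        doc_ids, result_q = jobs.get()       # block until a request arrives
        annotations = {}
        for doc_id in doc_ids:
            text = fetch_document(doc_id)    # the network-bound part
            annotations[doc_id] = run_ner(text)
        result_q.put(annotations)            # hand the result back to the caller
        jobs.task_done()

for _ in range(N_WORKERS):
    threading.Thread(target=worker, daemon=True).start()

# Submitting a request: enqueue the document ids together with a reply queue.
reply: queue.Queue = queue.Queue()
jobs.put((["PMID:30850898"], reply))
print(reply.get())
```

Bounding the pool at a fixed number of workers is what makes the service robust under load: excess requests simply wait in the queue rather than spawning unbounded threads.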
Performance of the taggers
| # Documents | Tagger: abstract server (s) | Tagger: patent server (s) | PiTagger: abstract server (s) | PiTagger: patent server (s) |
|---|---|---|---|---|
| 1 | 0.87 ± 0.32 | 0.84 ± 0.32 | 0.75 ± 0.34 | 0.84 ± 0.24 |
| 10 | 0.98 ± 0.30 | 0.83 ± 0.26 | 1.10 ± 0.28 | 0.87 ± 0.37 |
| 100 | 1.89 ± 0.29 | 1.48 ± 0.27 | 2.34 ± 0.33 | 1.52 ± 0.34 |
| 1000 | 11.31 ± 0.68 | 6.02 ± 0.37 | 15.23 ± 0.48 | 8.89 ± 0.48 |
| 5000 | 52.18 ± 2.76 | 26.67 ± 1.16 | 72.73 ± 1.81 | 40.83 ± 1.01 |
For small requests the total turnaround time is ~1 s; larger requests add roughly 5–10 s per 1000 abstracts on Tagger. Notably, most of this time is spent retrieving the document texts from their identifiers, whereas the actual NER step accounts for only about 20% of the total time. This is reflected in the fact that PiTagger, which runs on a Raspberry Pi 3, takes only about 50% longer to process large requests.
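Since document retrieval, not NER, dominates the turnaround time, the local caching the paper proposes can be sketched in a few lines. The snippet below is a minimal illustration, assuming a hypothetical URL scheme standing in for the challenge's abstract and patent servers; it is not part of the published service.

```python
import functools
import urllib.request

@functools.lru_cache(maxsize=100_000)
def fetch_document(doc_id: str) -> str:
    """Fetch a document text once; repeat requests hit the in-memory cache.

    The URL scheme is hypothetical, standing in for the BeCalm
    abstract/patent document servers.
    """
    url = f"https://example.org/documents/{doc_id}"
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")
```

An LRU cache of this kind bounds memory use while keeping frequently requested documents local, which would remove the network round trip that dominates the timings in the table above.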