Next-generation community assessment of biomedical entity recognition web servers: metrics, performance, interoperability aspects of BeCalm
DATE:
2019-06-24
UNIVERSAL IDENTIFIER: http://hdl.handle.net/11093/4072
UNESCO SUBJECT: 1203.12 Data Banks; 1203.17 Informatics; 2499 Other Biological Specialities
DOCUMENT TYPE: article
ABSTRACT
Background: Shared tasks and community challenges represent key instruments to promote research and collaboration and to determine the state of the art of biomedical and chemical text mining technologies. Traditionally, such tasks
relied on the comparison of automatically generated results against a so-called Gold Standard dataset of manually
labelled textual data, regardless of efficiency and robustness of the underlying implementations. Due to the rapid
growth of unstructured data collections, including patent databases and particularly the scientific literature, there is a
pressing need to generate, assess and expose robust big data text mining solutions to semantically enrich documents
in real time. To address this need, a novel track called “Technical interoperability and performance of annotation
servers” was launched under the umbrella of the BioCreative text mining evaluation effort. The aim of this track
was to enable the continuous assessment of technical aspects of text annotation web servers, specifically of online
biomedical named entity recognition systems of interest for medicinal chemistry applications.
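To make the setting concrete, the sketch below shows the general shape of such an online annotation server: an HTTP endpoint that receives a document and returns typed entity mentions with character offsets. It is a minimal illustration in Python using only the standard library; the POST route, the JSON field names and the toy dictionary tagger are assumptions made for illustration and do not reproduce the actual BeCalm request protocol.

    # Minimal sketch of an online NER annotation server. Illustrative only:
    # the request/response fields and the dictionary tagger are assumptions,
    # not the actual BeCalm request protocol.
    import json
    import re
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Toy lexicon mapping surface forms to entity types (hypothetical examples).
    LEXICON = {"aspirin": "CHEMICAL", "ibuprofen": "CHEMICAL", "BRCA1": "GENE"}

    def annotate(text):
        # Return dictionary matches as offset-based annotation records.
        annotations = []
        for term, etype in LEXICON.items():
            for m in re.finditer(re.escape(term), text, re.IGNORECASE):
                annotations.append({"start": m.start(), "end": m.end(),
                                    "mention": m.group(0), "type": etype})
        return annotations

    class AnnotationHandler(BaseHTTPRequestHandler):
        def do_POST(self):
            # Expect a JSON body such as {"document_id": "...", "text": "..."}.
            length = int(self.headers.get("Content-Length", 0))
            request = json.loads(self.rfile.read(length))
            body = json.dumps({
                "document_id": request.get("document_id"),
                "annotations": annotate(request.get("text", "")),
            }).encode("utf-8")
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("localhost", 8080), AnnotationHandler).serve_forever()

A real participating server would additionally handle state checks and the predefined output formats mentioned in the Results below.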
Results: A total of 15 out of 26 registered teams successfully implemented online annotation servers. They returned predictions in predefined formats during a two-month period and were evaluated through the BeCalm evaluation
platform, specifically developed for this track. The track encompassed three levels of evaluation, i.e. data format
considerations, technical metrics and functional specifications. Participating annotation servers were implemented
in seven different programming languages and covered 12 general entity types. The continuous evaluation of server
responses accounted for testing periods of low activity and moderate to high activity, encompassing a total of 4,092,502
requests from three different document provider settings. The median response time was below 3.74 s, with a median
of 10 annotations/document. Most servers showed high reliability and stability, processing over 100,000 requests within a 5-day period.
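As an illustration of how such technical metrics can be derived, the sketch below computes the median response time and the median number of annotations per document from a simple response log. The log layout (document_id, response_seconds, annotation_count) is a hypothetical stand-in for the evaluation platform's internal records, not its actual schema.

    # Sketch of computing track-style technical metrics from a response log.
    # The log layout is a hypothetical stand-in, not the BeCalm schema.
    import statistics

    log = [
        {"document_id": "doc1", "response_seconds": 2.1, "annotation_count": 8},
        {"document_id": "doc2", "response_seconds": 3.9, "annotation_count": 14},
        {"document_id": "doc3", "response_seconds": 1.7, "annotation_count": 10},
    ]

    median_response = statistics.median(r["response_seconds"] for r in log)
    median_annotations = statistics.median(r["annotation_count"] for r in log)

    print(f"median response time: {median_response:.2f} s")
    print(f"median annotations/document: {median_annotations}")
    print(f"requests handled: {len(log)}")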
Conclusions: The presented track was a novel experimental task that systematically evaluated the technical performance
aspects of online entity recognition systems. It attracted the interest of a significant number of participants.
Future editions of the competition will address the ability to process documents in bulk as well as to annotate full-text
documents.