Jiayuan He1,2, Dat Quoc Nguyen1,3, Saber A Akhondi4, Christian Druckenbrodt5, Camilo Thorne5, Ralph Hoessel5, Zubair Afzal4, Zenan Zhai1, Biaoyan Fang1, Hiyori Yoshikawa1,6, Ameer Albahem1,2, Lawrence Cavedon2, Trevor Cohn1, Timothy Baldwin1, Karin Verspoor1,2.
Abstract
Chemical patents represent a valuable source of information about new chemical compounds, which is critical to the drug discovery process. Automated information extraction over chemical patents is, however, a challenging task due to the large volume of existing patents and the complex linguistic properties of chemical patents. The Cheminformatics Elsevier Melbourne University (ChEMU) evaluation lab 2020, part of the Conference and Labs of the Evaluation Forum 2020 (CLEF2020), was introduced to support the development of advanced text mining techniques for chemical patents. The ChEMU 2020 lab proposed two fundamental information extraction tasks focusing on chemical reaction processes described in chemical patents: (1) chemical named entity recognition, requiring identification of essential chemical entities and their roles in chemical reactions, as well as reaction conditions; and (2) event extraction, which aims at identification of event steps relating the entities involved in chemical reactions. The ChEMU 2020 lab received 37 team registrations and 46 runs. Overall, the performance of submissions for these tasks exceeded our expectations, with the top systems outperforming strong baselines. We further show the methods to be robust to variations in sampling of the test data. We provide a detailed overview of the ChEMU 2020 corpus and its annotation, showing that inter-annotator agreement is very strong. We also present the methods adopted by participants, provide a detailed analysis of their performance, and carefully consider the potential impact of data leakage on interpretation of the results. The ChEMU 2020 lab has shown the viability of automated methods to support information extraction of key information in chemical patents.
Keywords: chemical reactions; cheminformatics; event extraction; information extraction; named entity recognition; patent text mining
Year: 2021 PMID: 33870071 PMCID: PMC8028406 DOI: 10.3389/frma.2021.654438
Source DB: PubMed Journal: Front Res Metr Anal ISSN: 2504-0537
Statistics of the selected snippets.
| | Total | Min | Max | Median | Mean |
| # Sentences | 7,402 | 1 | 46 | 4 | 5.0 |
| # Words | 252,459 | 35 | 1,275 | 157 | 168.3 |
Figure 1. An example of one patent snippet in the ChEMU chemical reaction corpus.
Definitions of entity, trigger word, and relation types, i.e., labels.
| Label | Definition |
| STARTING_MATERIAL | A substance that is consumed in the course of a chemical reaction, providing atoms to the products. |
| REAGENT_CATALYST | A reagent is a compound added to a system to cause or help with a chemical reaction. |
| REACTION_PRODUCT | A product is a substance that is formed during a chemical reaction. |
| SOLVENT | A solvent is a chemical entity that dissolves a solute resulting in a solution. |
| OTHER_COMPOUND | Any other chemical compound that is not a product, starting material, reagent, catalyst, or solvent. |
| TIME | The duration of the reaction. |
| TEMPERATURE | The temperature at which the reaction was carried out. |
| YIELD_PERCENT | Yield given in percent values. |
| YIELD_OTHER | Yields given in units other than percent. |
| EXAMPLE_LABEL | A label associated with a reaction specification. |
| REACTION_STEP | An event within which starting materials are converted into the product. |
| WORKUP | An event step which is a manipulation required to isolate and purify the product of a chemical reaction. |
| Arg1 | The relation between an event trigger word and a chemical compound. |
| ArgM | The relation between an event trigger word and a temperature, time, or yield entity. |
Figure 2. Visualization of the annotations in the snippet in Figure 1.
Figure 3. The annotation file for the patent snippet in Figure 1. Entities and trigger words are indexed from T0 to T16, and relations are indexed from R0 to R10.
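The T/R indexing described above follows the brat standoff convention: `T` lines hold entities and trigger words with a label, character offsets, and the covered text, and `R` lines hold binary relations between them. A minimal parser sketch, assuming standard brat conventions (discontinuous spans and event/attribute lines are not handled; the example strings below are illustrative, not taken from the corpus):

```python
def parse_brat(ann_text):
    """Parse brat-style standoff annotations into entity and relation records."""
    entities, relations = {}, []
    for line in ann_text.splitlines():
        if not line.strip():
            continue
        fields = line.split("\t")
        if line.startswith("T"):
            # e.g. "T1\tSOLVENT 10 13\tDMF" -> id, "LABEL start end", surface text
            tid, type_span, text = fields[0], fields[1], fields[2]
            label, start, end = type_span.split()
            entities[tid] = {"label": label, "start": int(start),
                             "end": int(end), "text": text}
        elif line.startswith("R"):
            # e.g. "R1\tArg1 Arg1:T2 Arg2:T1" -> id, "LABEL Arg1:Tx Arg2:Ty"
            rid, rel = fields[0], fields[1]
            label, arg1, arg2 = rel.split()
            relations.append({"id": rid, "label": label,
                              "arg1": arg1.split(":")[1],
                              "arg2": arg2.split(":")[1]})
    return entities, relations
```

This keeps trigger words and chemical entities in the same `T` namespace, mirroring the indexing in Figure 3.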
Overall statistics of the annotated corpus.
| # Patent snippets | 1,500 |
| # Total entities | 26,857 |
| # Trigger words | 11,236 |
| # Relations | 23,445 |
Number of instances for each label defined in Table 2.
| STARTING_MATERIAL | 2,878 |
| REAGENT_CATALYST | 2,074 |
| REACTION_PRODUCT | 3,413 |
| SOLVENT | 1,818 |
| OTHER_COMPOUND | 7,651 |
| TIME | 1,763 |
| TEMPERATURE | 2,473 |
| YIELD_PERCENT | 1,572 |
| YIELD_OTHER | 1,762 |
| EXAMPLE_LABEL | 1,453 |
| REACTION_STEP | 6,210 |
| WORKUP | 5,026 |
| Arg1 | 15,865 |
| ArgM | 7,580 |
Summary of inter-annotator agreement scores.
| Cohen's Kappa | 0.9070 | 0.9515 | 0.9539 |
| F1 score | 0.9505 | 0.9747 | 0.9760 |
| Cohen's Kappa | 0.6513 | 0.8035 | 0.8068 |
| F1 score | 0.8985 | 0.9496 | 0.9506 |
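Cohen's Kappa, used above, corrects raw agreement for the agreement expected by chance given each annotator's label distribution. A minimal pure-Python sketch over two annotators' parallel label sequences (the labels in the test below are illustrative, not the actual annotation data):

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' parallel label sequences."""
    assert len(labels_a) == len(labels_b) and labels_a
    n = len(labels_a)
    # Observed agreement: fraction of items both annotators label identically.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement, from each annotator's label marginals.
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(counts_a[lab] * counts_b[lab] for lab in counts_a) / (n * n)
    if p_e == 1.0:  # degenerate case: both annotators use one identical label
        return 1.0
    return (p_o - p_e) / (1 - p_e)
```

For span annotations, pairwise F1 (treating one annotator as gold) is often reported alongside Kappa, as in the table above.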
Figure 4. Distributions of entity labels on the training/development/test data splits. The labels are indexed according to their order in Table 4.
Figure 5. Distributions of IPCs (International Patent Classification codes) on the training/development/test data splits. Only dominant IPC groups that account for more than 1% of at least one data split are included in this figure; the remaining IPCs are grouped as “Other”.
Figure 6. Illustration of the three tasks. Shaded text spans represent annotated entities or trigger words. Arrows represent relations between entities.
Figure 7. Illustration of the hierarchical NER class structure used in evaluation.
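Under the hierarchical evaluation, fine-grained labels are collapsed to higher-level classes before scoring, which explains the higher scores in the high-level results table. A sketch of the remapping, assuming a grouping in which the five compound-role labels merge into a single class (the official hierarchy is the one in Figure 7; the class name `COMPOUND` here is an illustrative assumption):

```python
# Illustrative mapping from fine-grained labels to a high-level class.
# The grouping below is an assumption for demonstration; the actual
# hierarchy used in evaluation is defined in Figure 7.
HIGH_LEVEL = {
    "STARTING_MATERIAL": "COMPOUND",
    "REAGENT_CATALYST": "COMPOUND",
    "REACTION_PRODUCT": "COMPOUND",
    "SOLVENT": "COMPOUND",
    "OTHER_COMPOUND": "COMPOUND",
}

def to_high_level(label):
    # Non-compound labels (TIME, TEMPERATURE, yields, ...) map to themselves.
    return HIGH_LEVEL.get(label, label)
```

Confusions among compound roles (a major error class, per the confusion-matrix analysis later in the paper) are forgiven under this remapping.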
Overall performance of all runs in Task 1—Named entity recognition.
| Run | P (exact) | R (exact) | F1 (exact) | P (relaxed) | R (relaxed) | F1 (relaxed) |
| Melaxtech-run1 | 0.9571 | 0.9690 | ||||
| Melaxtech-run2 | 0.9697 | 0.9637 | 0.9667 | |||
| Melaxtech-run3 | 0.9510 | 0.9541 | 0.9688 | 0.9624 | 0.9656 | |
| VinAI-run2 | 0.9538 | 0.9504 | 0.9521 | |||
| VinAI-run1 | 0.9462 | 0.9405 | 0.9433 | 0.9661 | 0.9684 | |
| Lasige_BioTM-run1 | 0.9327 | 0.9457 | 0.9392 | 0.9590 | 0.9671 | 0.9630 |
| BiTeM-run3 | 0.9378 | 0.9087 | 0.9230 | 0.9692 | 0.9558 | 0.9624 |
| BiTeM-run2 | 0.9083 | 0.9114 | 0.9098 | 0.9510 | 0.9684 | 0.9596 |
| NextMove/Minesoft-run1 | 0.9042 | 0.8924 | 0.8983 | 0.9301 | 0.9181 | 0.9240 |
| NextMove/Minesoft-run2 | 0.9037 | 0.8918 | 0.8977 | 0.9294 | 0.9178 | 0.9236 |
| 0.9071 | 0.8723 | 0.8893 | 0.9219 | 0.8893 | 0.9053 | |
| NLP@VCU-run1 | 0.8747 | 0.8570 | 0.8658 | 0.9524 | 0.9513 | 0.9518 |
| KFU_NLP-run1 | 0.8930 | 0.8386 | 0.8649 | 0.9701 | 0.9255 | 0.9473 |
| NLP@VCU-run2 | 0.8705 | 0.8502 | 0.8602 | 0.9490 | 0.9446 | 0.9468 |
| NLP@VCU-run3 | 0.8665 | 0.8514 | 0.8589 | 0.9486 | 0.9528 | 0.9507 |
| KFU_NLP-run2 | 0.8579 | 0.8329 | 0.8452 | 0.9690 | 0.9395 | 0.9540 |
| NextMove/Minesoft-run3 | 0.8281 | 0.8083 | 0.8181 | 0.8543 | 0.8350 | 0.8445 |
| KFU_NLP-run3 | 0.8197 | 0.8027 | 0.8111 | 0.9579 | 0.9350 | 0.9463 |
| BiTeM-run1 | 0.8330 | 0.7799 | 0.8056 | 0.8882 | 0.8492 | 0.8683 |
| OntoChem-run1 | 0.7927 | 0.5983 | 0.6819 | 0.8441 | 0.6364 | 0.7257 |
| AUKBC-run1 | 0.6763 | 0.4074 | 0.5085 | 0.8793 | 0.5334 | 0.6640 |
| AUKBC-run2 | 0.4895 | 0.1913 | 0.2751 | 0.6686 | 0.2619 | 0.3764 |
| SSN_NLP-run1 | 0.2923 | 0.1911 | 0.2311 | 0.8633 | 0.4930 | 0.6276 |
| SSN_NLP-run2 | 0.2908 | 0.1911 | 0.2307 | 0.8595 | 0.4932 | 0.6267 |
| JU_INDIA-run1 | 0.1411 | 0.0824 | 0.1041 | 0.2522 | 0.1470 | 0.1857 |
| JU_INDIA-run2 | 0.0322 | 0.0151 | 0.0206 | 0.1513 | 0.0710 | 0.0966 |
| JU_INDIA-run3 | 0.0322 | 0.0151 | 0.0206 | 0.1513 | 0.0710 | 0.0966 |
Here, P, R, and F denote precision, recall, and F1 score, respectively.
This run was received after the evaluation phase and thus was not included in the official results.
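The exact-match and relaxed-match scores can be reproduced from predicted and gold `(label, start, end)` spans. A minimal sketch, assuming relaxed matching counts a prediction as correct when it has the right label and any character overlap with a gold span (the official scorer may define overlap and duplicate matches differently):

```python
def prf(gold, pred, relaxed=False):
    """Micro precision/recall/F1 over (label, start, end) spans."""
    def match(p, g):
        if p[0] != g[0]:                        # labels must agree
            return False
        if relaxed:
            return p[1] < g[2] and g[1] < p[2]  # any character overlap
        return (p[1], p[2]) == (g[1], g[2])     # exact span boundaries

    # Note: a gold span may satisfy several predictions here; official
    # evaluation tools typically enforce one-to-one matching.
    tp_p = sum(any(match(p, g) for g in gold) for p in pred)
    tp_g = sum(any(match(p, g) for p in pred) for g in gold)
    precision = tp_p / len(pred) if pred else 0.0
    recall = tp_g / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

By construction, relaxed scores are never lower than exact scores, matching the pattern in the tables.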
Overall performance of all runs in Task 1—Named entity recognition, where the set of high-level labels in Figure 7 is used.
| Run | P (exact) | R (exact) | F1 (exact) | P (relaxed) | R (relaxed) | F1 (relaxed) |
| Melaxtech-run1 | 0.9906 | 0.9901 | 0.9903 | |||
| Melaxtech-run2 | 0.9910 | 0.9849 | 0.9879 | |||
| Melaxtech-run3 | 0.9775 | 0.9714 | 0.9744 | 0.9905 | 0.9838 | 0.9871 |
| VinAI-run2 | 0.9704 | 0.9670 | 0.9687 | 0.9901 | ||
| Lasige_BioTM-run1 | 0.9571 | 0.9706 | 0.9638 | 0.9886 | ||
| VinAI-run1 | 0.9635 | 0.9579 | 0.9607 | 0.9899 | 0.9854 | 0.9877 |
| 0.9657 | 0.9288 | 0.9469 | 0.9861 | 0.9519 | 0.9687 | |
| BiTeM-run1 | 0.9573 | 0.9277 | 0.9423 | 0.9907 | 0.9770 | 0.9838 |
| NextMove/Minesoft-run2 | 0.9460 | 0.9330 | 0.9394 | 0.9773 | 0.9611 | 0.9691 |
| NextMove/Minesoft-run1 | 0.9458 | 0.9330 | 0.9393 | 0.9773 | 0.9610 | 0.9691 |
| BiTeM-run2 | 0.9323 | 0.9357 | 0.9340 | 0.9845 | ||
| NextMove/Minesoft-run3 | 0.9201 | 0.8970 | 0.9084 | 0.9571 | 0.9308 | 0.9438 |
| NLP@VCU-run1 | 0.9016 | 0.8835 | 0.8925 | 0.9855 | 0.9814 | 0.9834 |
| NLP@VCU-run2 | 0.9007 | 0.8799 | 0.8902 | 0.9882 | 0.9798 | 0.9840 |
| NLP@VCU-run3 | 0.8960 | 0.8805 | 0.8882 | 0.9858 | 0.9869 | 0.9863 |
| KFU_NLP-run1 | 0.9125 | 0.8570 | 0.8839 | 0.9465 | 0.9683 | |
| BiTeM-run3 | 0.9073 | 0.8496 | 0.8775 | 0.9894 | 0.9355 | 0.9617 |
| KFU_NLP-run2 | 0.8735 | 0.8481 | 0.8606 | 0.988 | 0.9569 | 0.9722 |
| KFU_NLP-run3 | 0.8332 | 0.8160 | 0.8245 | 0.9789 | 0.9516 | 0.9651 |
| OntoChem-run1 | 0.9029 | 0.6796 | 0.7755 | 0.9611 | 0.7226 | 0.8249 |
| AUKBC-run1 | 0.7542 | 0.4544 | 0.5671 | 0.9833 | 0.5977 | 0.7435 |
| AUKBC-run2 | 0.6605 | 0.2581 | 0.3712 | 0.9290 | 0.3612 | 0.5201 |
| SSN_NLP-run2 | 0.3174 | 0.2084 | 0.2516 | 0.9491 | 0.5324 | 0.6822 |
| SSN_NLP-run1 | 0.3179 | 0.2076 | 0.2512 | 0.9505 | 0.5304 | 0.6808 |
| JU_INDIA-run1 | 0.2019 | 0.1180 | 0.1489 | 0.5790 | 0.3228 | 0.4145 |
| JU_INDIA-run2 | 0.0557 | 0.0262 | 0.0357 | 0.4780 | 0.2149 | 0.2965 |
| JU_INDIA-run3 | 0.0557 | 0.0262 | 0.0357 | 0.4780 | 0.2149 | 0.2965 |
Here, P, R, and F denote precision, recall, and F1 score, respectively.
This run was received after the evaluation phase and thus was not included in the official results.
Overall performance of all runs in Task 2—Event extraction.
| Run | P (exact) | R (exact) | F1 (exact) | P (relaxed) | R (relaxed) | F1 (relaxed) |
| Melaxtech-run1 | ||||||
| Melaxtech-run2 | 0.9402 | 0.9414 | ||||
| Melaxtech-run3 | 0.9522 | 0.9479 | 0.9534 | 0.9491 | ||
| NextMove/Minesoft-run1 | 0.9441 | 0.8556 | 0.8977 | 0.9441 | 0.8556 | 0.8977 |
| NextMove/Minesoft-run2 | 0.8746 | 0.7816 | 0.8255 | 0.8909 | 0.7983 | 0.8420 |
| BOUN_REX-run1 | 0.7610 | 0.6893 | 0.7234 | 0.7610 | 0.6893 | 0.7234 |
| NLP@VCU-run1 | 0.8056 | 0.5449 | 0.6501 | 0.8059 | 0.5451 | 0.6503 |
| NLP@VCU-run2 | 0.5120 | 0.7153 | 0.5968 | 0.5125 | 0.7160 | 0.5974 |
| NLP@VCU-run3 | 0.5085 | 0.7126 | 0.5935 | 0.5090 | 0.7133 | 0.5941 |
| 0.2431 | 0.8861 | 0.3815 | 0.2431 | 0.8863 | 0.3816 | |
Here, P, R, and F denote precision, recall, and F1 score, respectively.
Overall performance of all runs for end-to-end systems.
| Run | P (exact) | R (exact) | F1 (exact) | P (relaxed) | R (relaxed) | F1 (relaxed) |
| Melaxtech-run1 | ||||||
| NextMove/Minesoft-run1 | ||||||
| NextMove/Minesoft-run2 | 0.8486 | 0.7602 | 0.8020 | 0.8653 | 0.7771 | 0.8188 |
| NextMove/Minesoft-run3 | 0.8061 | 0.7207 | 0.7610 | 0.8228 | 0.7371 | 0.7776 |
| OntoChem-run1 | 0.7971 | 0.3777 | 0.5126 | 0.8407 | 0.3984 | 0.5406 |
| OntoChem-run2 | 0.7971 | 0.3777 | 0.5126 | 0.8407 | 0.3984 | 0.5406 |
| OntoChem-run3 | 0.7971 | 0.3777 | 0.5126 | 0.8407 | 0.3984 | 0.5406 |
| 0.2104 | 0.7329 | 0.3270 | 0.2135 | 0.7445 | 0.3319 | |
| Melaxtech-run2 | 0.2394 | 0.2647 | 0.2514 | 0.2429 | 0.2687 | 0.2552 |
| Melaxtech-run3 | 0.2383 | 0.2642 | 0.2506 | 0.2421 | 0.2684 | 0.2545 |
Here, P, R, and F denote precision, recall, and F1 score, respectively.
Task participation of the eight teams BiTeM, VinAI, BOUN-REX, NextMove/Minesoft, NLP@VCU, AU-KBC, LasigBioTM, and Melaxtech.
| BiTeM | ✓ | ||
| VinAI | ✓ | ||
| BOUN-REX | ✓ | ||
| NextMove/Minesoft | ✓ | ✓ | ✓ |
| NLP@VCU | ✓ | ✓ | |
| AU-KBC | ✓ | ||
| LasigBioTM | ✓ | ✓ | ✓ |
| Melaxtech | ✓ | ✓ | ✓ |
Summary of participants' approaches. The per-team checkmark matrix did not survive extraction; the attribute rows of the table were: rule-based, dictionary-based, subword-based, chemistry domain-specific; character-level, pre-trained, chemistry domain-specific; PoS, phrase; Transformer, Bi-LSTM, CNN, MLP, CRF, FSM, rule-based.
BiTeM (Copara et al., 2020).
Figure 8. Confusion matrix of the top system on Task 1—NER. NEG (negative entity): ground-truth (or predicted) entities whose text spans are not annotated as entities in the predicted (or ground-truth) set. This confusion matrix is computed under exact span-matching.
Figure 9. An example of the system misclassifying STARTING_MATERIAL as REAGENT_CATALYST.
Figure 10. An example of misclassification between REACTION_PRODUCT and OTHER_COMPOUND.
Figure 11. An example of the system misclassifying REAGENT_CATALYST as SOLVENT.
Figure 12. Confusion matrix of the top system for trigger word prediction in Task 2—EE. WU: WORKUP. RS: REACTION_STEP. NEG (negative trigger word): ground-truth (or predicted) trigger words whose text spans are not annotated as trigger words in the predicted (or ground-truth) set. This confusion matrix is computed under exact span-matching.
Figure 13. An example of misclassifying WORKUP as REACTION_STEP.
Figure 14. False positive and false negative examples.
Figure 15. An example of errors in relation prediction. Only the ground-truth labels are included in the figure. The top-ranking system misses three relations: (1) synthesized → 0.83 g; (2) synthesized → 0.72 mmol; (3) synthesized → 46%.
Figure 16. Number of unique source patents in the training, development, and test sets. Each number is the count of unique source patents within the region bounded by the gray circles.
Evaluation results on .
| Melaxtech-run1 | 0.8777 | −0.0793 | 8.3 | 0.8672 | −0.0766 | 8.1 |
| Melaxtech-run2 | 0.8721 | −0.0837 | 8.8 | 0.8521 | −0.0903 | 9.6 |
| Melaxtech-run3 | 0.8777 | −0.0764 | 8.0 | 0.8672 | −0.0732 | 7.8 |
| VinAI-run2 | 0.8696 | −0.0825 | 8.7 | 0.8477 | −0.0932 | 9.9 |
| VinAI-run1 | 0.8644 | −0.0789 | 8.4 | 0.8448 | −0.0903 | 9.7 |
| LasigBioTM-run1 | 0.8525 | −0.0867 | 9.2 | 0.8333 | −0.0859 | 9.3 |
| BiTeM-run3 | 0.8517 | −0.0713 | 7.7 | 0.8238 | −0.0689 | 7.7 |
| BiTeM-run2 | 0.8345 | −0.0753 | 8.3 | 0.8163 | −0.0587 | 6.7 |
| NextMove/Minesoft-run1 | 0.7652 | −0.1331 | 14.8 | 0.7277 | −0.1373 | 15.9 |
| NextMove/Minesoft-run2 | 0.7652 | −0.1325 | 14.8 | 0.7277 | −0.1365 | 15.8 |
| NLP@VCU-run1 | 0.7729 | −0.0929 | 10.7 | 0.7744 | −0.0940 | 10.8 |
| KFU_NLP-run1 | 0.8242 | −0.0407 | 4.7 | 0.8338 | −0.0398 | 4.6 |
| NLP@VCU-run2 | 0.7698 | −0.0904 | 10.5 | 0.7629 | −0.0982 | 11.4 |
| NLP@VCU-run3 | 0.7941 | −0.0648 | 7.5 | 0.8052 | −0.0539 | 6.3 |
| KFU_NLP-run2 | 0.8036 | −0.0416 | 4.9 | 0.8040 | −0.0397 | 4.7 |
| NextMove/Minesoft-run3 | 0.6717 | −0.1464 | 17.9 | 0.6296 | −0.1451 | 18.7 |
| KFU_NLP-run3 | 0.7957 | −0.0154 | 1.9 | 0.8041 | 0.0042 | 0.5 |
| BiTeM-run1 | 0.6984 | −0.1072 | 13.3 | 0.6275 | −0.1074 | 14.6 |
| OntoChem-run1 | 0.5446 | −0.1373 | 20.1 | 0.4615 | −0.1520 | 24.8 |
| AUKBC-run1 | 0.5188 | 0.0103 | 2.0 | 0.5439 | −0.0815 | 13.0 |
| AUKBC-run2 | 0.3654 | 0.0903 | 32.8 | 0.3846 | 0.0253 | 7.0 |
| SSN_NLP-run1 | 0.2203 | −0.0108 | 4.7 | 0.2746 | −0.0080 | 2.8 |
| SSN_NLP-run2 | 0.2159 | −0.0148 | 6.4 | 0.2736 | −0.0090 | 3.2 |
| JU_INDIA-run1 | 0.2393 | 0.1352 | >100 | 0.2963 | 0.1580 | >100 |
| JU_INDIA-run2 | 0.0450 | 0.0244 | >100 | 0.0562 | 0.0265 | 89.2 |
| JU_INDIA-run3 | 0.0450 | 0.0244 | >100 | 0.0562 | 0.0265 | 89.2 |
F: F1 score.
F1 scores on Q2-4.
| Melaxtech-run1 | −0.0481 | 5.0 | 0.0008 | 0.1 | 0.0097 | 1.0 | 0.0578 |
| Melaxtech-run2 | −0.0462 | 4.8 | −0.0013 | 0.1 | 0.0126 | 1.3 | 0.0588 |
| Melaxtech-run3 | −0.0482 | 5.1 | 0.0008 | 0.1 | 0.0094 | 1.0 | 0.0576 |
| VinAI-run2 | −0.0409 | 4.3 | −0.0025 | 0.3 | 0.0132 | 1.4 | 0.0541 |
| VinAI-run1 | −0.0496 | 5.3 | −0.0037 | 0.4 | 0.0187 | 2.0 | 0.0683 |
| LasigBioTM-run1 | −0.0451 | 4.8 | 0.0001 | 0.0 | 0.0106 | 1.1 | 0.0557 |
| BiTeM-run3 | −0.0457 | 5.0 | −0.0043 | 0.5 | 0.0174 | 1.9 | 0.0631 |
| BiTeM-run2 | −0.0476 | 5.2 | −0.0059 | 0.6 | 0.0209 | 2.3 | 0.0685 |
| NextMove/Minesoft-run1 | −0.0668 | 7.4 | 0.0110 | 1.2 | −0.0042 | 0.5 | 0.0778 |
| NextMove/Minesoft-run2 | −0.0645 | 7.2 | 0.0105 | 1.2 | −0.0036 | 0.4 | 0.0750 |
| NLP@VCU-run1 | −0.0594 | 6.9 | −0.0086 | 1.0 | 0.0279 | 3.2 | 0.0873 |
| KFU_NLP-run1 | −0.0281 | 3.2 | 0.0239 | 2.8 | −0.0367 | 4.2 | 0.0606 |
| NLP@VCU-run2 | −0.0638 | 7.4 | −0.0098 | 1.1 | 0.0312 | 3.6 | 0.0950 |
| NLP@VCU-run3 | −0.0489 | 5.7 | −0.0110 | 1.3 | 0.0300 | 3.5 | 0.0789 |
| KFU_NLP-run2 | −0.0188 | 2.2 | 0.0181 | 2.1 | −0.0280 | 3.3 | 0.0461 |
| NextMove/Minesoft-run3 | −0.0535 | 6.5 | 0.0154 | 1.9 | −0.0147 | 1.8 | 0.0689 |
| KFU_NLP-run3 | −0.0088 | 1.1 | 0.0170 | 2.1 | −0.0280 | 3.5 | 0.0450 |
| BiTeM-run1 | −0.0596 | 7.4 | −0.0071 | 0.9 | 0.0248 | 3.1 | 0.0844 |
| OntoChem-run1 | −0.0632 | 9.3 | 0.0206 | 3.0 | 0.0288 | 4.2 | 0.0920 |
| AUKBC-run1 | −0.0067 | 1.3 | 0.0495 | 9.7 | 0.0782 | 15.4 | 0.0849 |
| AUKBC-run2 | 0.0559 | 20.3 | 0.0080 | 2.9 | 0.0885 | 32.2 | 0.0805 |
| SSN_NLP-run1 | −0.0263 | 11.4 | −0.0015 | 0.6 | 0.0090 | 3.9 | 0.0353 |
| SSN_NLP-run2 | −0.0255 | 11.1 | −0.0036 | 1.6 | 0.0122 | 5.3 | 0.0377 |
| JU_INDIA-run1 | 0.1432 | 137.6 | −0.0031 | 3.0 | −0.0238 | 22.9 | 0.1670 |
| JU_INDIA-run2 | 0.0047 | 22.8 | −0.0009 | 4.4 | 0.0058 | 28.2 | 0.0067 |
| JU_INDIA-run3 | 0.0047 | 22.8 | −0.0009 | 4.4 | 0.0058 | 28.2 | 0.0067 |
Q1 is omitted since it is empty. Qi: change in F1 score on Qi.
F1 scores of all runs on S1 to S4.
| Melaxtech-run1 | 0.0309 | 0.0084 | −0.0047 | −0.0329 | 0.0638 |
| Melaxtech-run2 | 0.0286 | 0.0125 | 0.0003 | −0.0424 | 0.0710 |
| Melaxtech-run3 | 0.0287 | 0.0098 | −0.0024 | −0.0362 | 0.0649 |
| VinAI-run2 | 0.0335 | 0.0101 | −0.0066 | −0.0359 | 0.0694 |
| VinAI-run1 | 0.0384 | 0.0148 | 0.0006 | −0.0536 | 0.0920 |
| LasigBioTM-run1 | 0.0430 | 0.0251 | 0.0190 | −0.0884 | 0.1314 |
| BiTeM-run3 | 0.0345 | 0.0317 | 0.0043 | −0.0764 | 0.1109 |
| BiTeM-run2 | 0.0463 | 0.0371 | 0.0153 | −0.1006 | 0.1469 |
| NextMove/Minesoft-run1 | 0.0595 | 0.0233 | −0.0283 | −0.0521 | 0.1116 |
| NextMove/Minesoft-run2 | 0.0601 | 0.0232 | −0.0283 | −0.0521 | 0.1122 |
| NLP@VCU-run1 | 0.0792 | 0.0469 | 0.0043 | −0.1186 | 0.1978 |
| KFU_NLP-run1 | 0.0657 | 0.0358 | −0.0308 | −0.0640 | 0.1297 |
| NLP@VCU-run2 | 0.0829 | 0.0495 | 0.0066 | −0.1271 | 0.2100 |
| NLP@VCU-run3 | 0.0861 | 0.0459 | 0.0026 | −0.1203 | 0.2064 |
| KFU_NLP-run2 | 0.0922 | 0.0571 | −0.0035 | −0.1269 | 0.2191 |
| NextMove/Minesoft-run3 | 0.0541 | 0.0172 | −0.0140 | −0.0524 | 0.1065 |
| KFU_NLP-run3 | 0.0832 | 0.0849 | 0.0349 | −0.1727 | 0.2576 |
| BiTeM-run1 | 0.0604 | 0.0949 | 0.0449 | −0.2162 | 0.3111 |
| OntoChem-run1 | 0.0589 | 0.0317 | 0.0050 | −0.1062 | 0.1651 |
| AUKBC-run1 | 0.0912 | 0.0426 | 0.0504 | −0.1707 | 0.2619 |
| AUKBC-run2 | 0.0611 | 0.0487 | −0.0217 | −0.0742 | 0.1353 |
| SSN_NLP-run1 | 0.0382 | 0.0266 | 0.0439 | −0.0500 | 0.0939 |
| SSN_NLP-run2 | 0.0535 | 0.0211 | 0.0456 | −0.0526 | 0.1061 |
| JU_INDIA-run1 | 0.0330 | 0.0501 | −0.0138 | −0.0659 | 0.1160 |
| JU_INDIA-run2 | 0.0071 | 0.0136 | −0.0042 | −0.0176 | 0.0312 |
| JU_INDIA-run3 | 0.0071 | 0.0136 | −0.0042 | −0.0176 | 0.0312 |