| Literature DB >> 35634104 |
Kholoud Alsubhi, Amani Jamal, Areej Alhothali.
Abstract
Open-domain question answering (OpenQA) is one of the most challenging yet widely investigated problems in natural language processing. It aims at building a system that can answer any given question from large-scale unstructured text or structured knowledge bases. To solve this problem, researchers traditionally use information retrieval methods to retrieve the most relevant documents and then apply answer extraction techniques to extract the answer or passage from the candidate documents. In recent years, deep learning techniques have shown great success in OpenQA by using dense representations for document retrieval and reading comprehension for answer extraction. However, despite the advances in English OpenQA, other languages such as Arabic have received less attention and are often addressed using traditional methods. In this paper, we apply deep learning methods to Arabic OpenQA. The model consists of a document retriever, which retrieves passages relevant to a question from large-scale free-text resources such as Wikipedia, and an answer reader, which extracts the precise answer to the given question. The model uses a dense passage retriever (DPR) for the passage retrieval task and AraELECTRA for the reading comprehension task. The results were compared with traditional Arabic OpenQA approaches and with deep learning methods for English OpenQA. They show that the dense passage retriever outperforms the traditional Term Frequency-Inverse Document Frequency (TF-IDF) retriever in top-20 passage retrieval accuracy and improves our end-to-end question answering system on two Arabic question-answering benchmark datasets.
Keywords: Arabic open domain question answering; Dense information retrieval approach; Transformer-based models for question answering
Year: 2022 PMID: 35634104 PMCID: PMC9138168 DOI: 10.7717/peerj-cs.952
Source DB: PubMed Journal: PeerJ Comput Sci ISSN: 2376-5992
Figure 1: The process of data flow through a DPR model during training, where E_P is the passage encoder and E_Q is the question encoder. Adapted from Briggs (2021).
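The training flow in Figure 1 can be sketched numerically: DPR scores a question–passage pair as the dot product of the two encoder outputs, and with in-batch negatives each question's positive passage sits on the diagonal of the batch similarity matrix. A minimal sketch, with random vectors standing in for the (assumed BERT-style) outputs of E_Q and E_P:

```python
import numpy as np

rng = np.random.default_rng(0)
batch, dim = 4, 8                       # toy sizes; real DPR uses 768-d vectors
q_emb = rng.normal(size=(batch, dim))   # stand-in for E_Q(question_i)
p_emb = rng.normal(size=(batch, dim))   # stand-in for E_P(positive passage_i)

# sim(q_i, p_j): dot product between question and passage embeddings.
scores = q_emb @ p_emb.T

# In-batch negatives: question i's positive passage is passage i (the
# diagonal); the other passages in the batch serve as negatives. The loss
# is the negative log-softmax of each positive score.
log_softmax = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_softmax))
```

At inference time only the dot-product scoring is needed: all passages are encoded once with E_P, and a question embedding is matched against them.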
Figure 2: Replaced token detection pre-training approach (Antoun, Baly & Hajj, 2021).
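The replaced-token-detection objective in Figure 2 can be illustrated with a toy example (the tokens and the replacement below are hypothetical): a small generator corrupts some input tokens, and the discriminator is trained to label every position as original (0) or replaced (1).

```python
# Toy illustration of ELECTRA-style replaced token detection.
original  = ["the", "chef", "cooked", "the", "meal"]
corrupted = ["the", "chef", "ate", "the", "meal"]  # generator replaced "cooked"

# Discriminator targets: 1 where the token was replaced, 0 otherwise.
labels = [int(o != c) for o, c in zip(original, corrupted)]
print(labels)  # [0, 0, 1, 0, 0]
```

Because every position yields a training signal (not just masked ones, as in BERT), this objective is sample-efficient, which is part of AraELECTRA's appeal.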
Figure 3: The dataset structure for DPR.
where:
- positive_ctxs: passages that are relevant to the question.
- negative_ctxs: used by the original DPR authors to compare against the in-batch negatives approach; not used in our DPR, so we set it to an empty list.
- hard_negative_ctxs: passages that are not related to the question.
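A single training record in this format might look as follows (the question, answer, and passage texts are hypothetical placeholders; only the field names come from the dataset structure above):

```python
import json

# Hypothetical DPR training record illustrating the field layout.
record = {
    "question": "Where was the first university founded?",
    "answers": ["Fez"],
    "positive_ctxs": [  # passages relevant to the question
        {"title": "Al-Qarawiyyin", "text": "Founded in Fez in 859 ..."}
    ],
    "negative_ctxs": [],  # unused in our setup, kept as an empty list
    "hard_negative_ctxs": [  # passages not related to the question
        {"title": "Cairo", "text": "Cairo is the capital of Egypt ..."}
    ],
}
print(sorted(record))  # the five top-level fields
```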
Arabic reading comprehension datasets.
| Reference | Name | Train | Test |
|---|---|---|---|
| – | ARCD | 695 | 700 |
| – | TyDiQA-GoldP (Arabic) | 14,724 | 921 |
Figure 4: Sample prediction of our DPR from the ARCD test set.
Figure 5: Example of the top two passages retrieved by our DPR, BM25, and TF-IDF.
Results of the DPR model on TyDiQA-GoldP and ARCD datasets with different training settings.
| Training dataset | Testing dataset | Recall | MAP |
|---|---|---|---|
| TyDiQA-GoldP | TyDiQA dev set | 98.11 | 93.56 |
| ARCD | ARCD test | 96.13 | 73.68 |
| ARCD+TyDiQA-GoldP | TyDiQA dev set | 98.00 | 94.12 |
| ARCD+TyDiQA-GoldP | ARCD test | 93.28 | 68.94 |
Results of DPR compared with TF-IDF on the TyDiQA and ARCD datasets.
| Model | Training dataset | Test dataset | Dataset size | Accuracy Top-20 | Accuracy Top-100 |
|---|---|---|---|---|---|
| TF-IDF | N/A | TyDiQA dev set | 921 | 37.03 | 48.70 |
| TF-IDF | N/A | ARCD test | 696 | 33.61 | 40.19 |
| DPR | TyDiQA-GoldP | TyDiQA dev set | train:14797 test:917 | 56.54 | 62.96 |
| DPR | ARCD | ARCD test | train:684 test:696 | 46.01 | 55.41 |
| DPR | ARCD+TyDiQA-GoldP | TyDiQA dev set | train:15481 test:917 | – | – |
| DPR | ARCD+TyDiQA-GoldP | ARCD test | train:15481 test:696 | 50.56 | 57.26 |
Note: Boldfaced score indicates highest accuracy.
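The top-k accuracy reported above is the fraction of questions for which at least one of the top k retrieved passages contains a gold answer string. A minimal sketch (the function name and toy data are our own, not from the paper):

```python
def top_k_accuracy(retrieved_passages, gold_answers, k):
    """Percentage of questions whose gold answer appears in a top-k passage."""
    hits = sum(
        any(answer in passage for passage in passages[:k])
        for passages, answer in zip(retrieved_passages, gold_answers)
    )
    return 100.0 * hits / len(gold_answers)

# Two toy questions: the first is answered by its top passage, the second
# by none of them.
retrieved = [["cairo is the capital of egypt", "the nile river"],
             ["the red sea coast", "desert climate"]]
gold = ["cairo", "mountains"]
print(top_k_accuracy(retrieved, gold, k=2))  # 50.0
```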
Comparison of two text reader models on ARCD.
| Model | ARCD (F1 / EM) |
|---|---|
| mBERT | 50.10 / 23.9 |
| AraELECTRA (ours) | – / – |
Note: Boldfaced score indicates best performance.
Comparison of the different text reader models on TyDiQA-GoldP.
| Model | TyDiQA-GoldP (F1 / EM) |
|---|---|
| mBERT | 81.7 / – |
| AraBERTv0.1 | 82.86 / 68.51 |
| AraBERTv1 | 79.36 / 61.11 |
| AraBERTv0.2-base | 85.41 / 73.07 |
| AraBERTv2-base | 81.66 / 61.67 |
| AraBERTv0.2-large | 86.03 / 73.72 |
| AraBERTv2-large | 82.51 / 64.49 |
| ArabicBERT-base | 81.24 / 67.42 |
| ArabicBERT-large | 84.12 / 70.03 |
| Arabic-ALBERT-base | 80.98 / 67.10 |
| Arabic-ALBERT-large | 81.59 / 68.07 |
| Arabic-ALBERT-xlarge | 84.59 / 71.12 |
| AraELECTRA | – / – |
| AraELECTRA (ours) | 86.01 / 74.07 |
Note: Boldfaced score indicates best performance.
Figure 6: Example of results from the TyDiQA-GoldP development set.
Comparison of our OpenQA and SOQAL when returning the top k answers.
| Model | Evaluation dataset | EM | F1 |
|---|---|---|---|
| SOQAL (top-1) | ARCD | 12.8 | 27.6 |
| SOQAL (top-3) | ARCD | 17.8 | 37.9 |
| SOQAL (top-5) | ARCD | 20.7 | 42.5 |
| our OpenQA (top-1) | ARCD | 15.7 | 36.4 |
| our OpenQA (top-3) | ARCD | 23.1 | 39.6 |
| our OpenQA (top-5) | ARCD | – | – |
Note: Boldfaced score indicates best performance.
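EM and F1 above are the standard extractive-QA answer metrics: exact match checks string equality of prediction and gold answer, while F1 measures token-level overlap. A simplified sketch (omitting the answer normalization a real evaluation script would apply, e.g. for Arabic diacritics):

```python
from collections import Counter

def exact_match(pred, gold):
    """1.0 if the trimmed strings are identical, else 0.0."""
    return float(pred.strip() == gold.strip())

def f1_score(pred, gold):
    """Harmonic mean of token precision and recall between pred and gold."""
    pred_toks, gold_toks = pred.split(), gold.split()
    overlap = sum((Counter(pred_toks) & Counter(gold_toks)).values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)

print(exact_match("red sea", "red sea"))             # 1.0
print(round(f1_score("the red sea", "red sea"), 2))  # 0.8
```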
End-to-end QA results.
Our DPR is trained on a single training dataset or on the merged datasets, as indicated by Single and Multi.
| Training setting | Model | Evaluation dataset | EM | F1 |
|---|---|---|---|---|
| Single | ORQA | NQ | 33.3 | – |
| TriviaQA | 45.0 | |||
| WQ | 36.4 | |||
| TREC | 30.1 | |||
| SQuAD | 20.2 | |||
| Single | REALM | NQ | 39.2 | – |
| WQ | 40.2 | |||
| TREC | 46.8 | |||
| Single | DPR | NQ | 41.5 | – |
| TriviaQA | 56.8 | |||
| WQ | 34.6 | |||
| TREC | 25.9 | |||
| SQuAD | 29.8 | |||
| Single | DPR + BM25 | NQ | 39.0 | – |
| TriviaQA | 57.0 | |||
| WQ | 35.2 | |||
| TREC | 28.0 | |||
| SQuAD | 36.7 | |||
| Multi | DPR | NQ | 41.5 | – |
| TriviaQA | 56.8 | |||
| WQ | 42.4 | |||
| TREC | 49.4 | |||
| SQuAD | 24.1 | |||
| Multi | DPR + BM25 | NQ | 38.8 | – |
| TriviaQA | 57.9 | |||
| WQ | 41.1 | |||
| TREC | 50.6 | |||
| SQuAD | 35.8 | |||
| Single | our DPR | TyDiQA-GoldP | 41.8 | 50.1 |
| Single | our DPR | ARCD | 15.1 | 35.3 |
| Multi | our DPR | TyDiQA-GoldP | 43.1 | 51.6 |
| Multi | our DPR | ARCD | 15.7 | 36.3 |