Mansour Al Sulaiman, Abdullah M Moussa, Sherif Abdou, Hebah Elgibreen, Mohammed Faisal, Mohsen Rashwan.
Abstract
Semantic Textual Similarity (STS) is the task of identifying the semantic correlation between two sentences in the same or different languages. STS is an important task in natural language processing because it has many applications across domains such as information retrieval, machine translation, plagiarism detection, document categorization, semantic search, and conversational systems. The availability of STS training and evaluation resources for some languages, such as English, has led to high-performing systems that achieve above 80% correlation with human judgment. Unfortunately, such STS data resources are not available for many languages, including Arabic. To overcome this challenge, this paper proposes three approaches to building effective Arabic STS models. The first evaluates the use of automatic machine translation of English STS data into Arabic for fine-tuning. The second is based on interleaving Arabic models with English data resources. The third fine-tunes knowledge-distillation-based models on a proposed translated dataset to boost their performance on Arabic. With very limited resources, consisting of just a few hundred Arabic STS sentence pairs, we achieved a correlation score of 81%, evaluated on the standard STS 2017 Arabic evaluation set. We also extended the Arabic models to handle two local dialects, Egyptian (EG) and Saudi Arabian (SA), achieving correlation scores of 77.5% for the EG dialect and 76% for the SA dialect, evaluated on a dialectal conversion of the same standard STS 2017 Arabic set.
Year: 2022 PMID: 35951673 PMCID: PMC9371328 DOI: 10.1371/journal.pone.0272991
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.752
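The evaluation used throughout the result tables below — the Spearman rank correlation between the cosine similarities of sentence embeddings and the human-annotated labels — can be sketched as follows. This is an illustrative sketch only: the embedding vectors here are random placeholders standing in for model outputs, and the function names are ours, not the paper's.

```python
import numpy as np
from scipy.stats import spearmanr

def cosine_sim(a, b):
    # cosine similarity between two embedding vectors
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def sts_score(emb_pairs, gold_labels):
    # Spearman rank correlation between the model's cosine similarities
    # and the human-annotated STS labels
    sims = [cosine_sim(a, b) for a, b in emb_pairs]
    return spearmanr(sims, gold_labels)[0]

# toy example with random "embeddings" in place of real sentence vectors
rng = np.random.default_rng(0)
pairs = [(rng.normal(size=8), rng.normal(size=8)) for _ in range(10)]
labels = rng.uniform(0, 5, size=10)
print(sts_score(pairs, labels))
```

A table score of, say, 0.8012 means the ranking induced by the model's cosine similarities agrees strongly with the ranking induced by the gold labels.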
Fig 1. The main framework of BERT.
Fig 2. Given parallel data from two languages, a student model can be trained so that the vectors it generates for sentences in both languages are close to the teacher's vector for the source-language sentence.
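The teacher–student setup in Fig 2 minimizes the distance between the teacher's embedding of the source sentence and the student's embeddings of both the source sentence and its translation. A minimal numpy sketch of that distillation objective, under the assumption of a mean-squared-error loss (the function name and toy vectors are ours):

```python
import numpy as np

def distillation_loss(teacher_src, student_src, student_tgt):
    # MSE between the teacher's embedding of the source sentence and the
    # student's embeddings of both the source sentence and its translation
    return float(np.mean((teacher_src - student_src) ** 2)
                 + np.mean((teacher_src - student_tgt) ** 2))

# toy vectors: when the student matches the teacher exactly, the loss is 0
t = np.array([0.1, 0.2, 0.3])
print(distillation_loss(t, t, t))  # 0.0
```

Driving this loss to zero forces the student to place a sentence and its translation at the same point in the teacher's embedding space, which is what lets the distilled models in the tables below work across languages.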
Examples of different levels of correlation between sentences in the STS dataset.

| Correlation | Example |
|---|---|
| | I don’t see why there should be any problem with this whatsoever. |
| | I don’t see why that should be a problem. |
| | A black and white photo of a man driving a car and someone with a motorcycle. |
| | A black and white photo of a man in a classic car and a man with a classic motorcycle. |
| | A woman is talking on a cell phone. |
| | A man and woman are talking on the phone. |
| | A man is playing the piano. |
| | A man played the guitar. |
| | A person is slicing some onions. |
| | A woman is chopping herbs. |
| | The train heads down the tracks and along the hedge. |
| | A dog on the floor of a patio looks at a cat on the fence. |
Some examples of the proposed translations along with original English sentences.
Fig 3. The framework of model generation using the third approach.
Accuracy of machine-translation-based and interleaved MSA models, measured as the Spearman rank correlation between the cosine similarity of sentence representations and the reference labels of the testing dataset in [8].

| Base Model | Training data | Score |
|---|---|---|
| ArabicBERT bert-base | SNLI and MultiNLI datasets translated into MSA using the M2M100 model | 0.4798 |
| ArabicBERT bert-base | SNLI and MultiNLI English datasets | 0.6525 |
| ARBERT | SNLI and MultiNLI English datasets | 0.708 |
| ARBERT | SNLI and MultiNLI English datasets, then STS for 1 epoch | |
Accuracy of knowledge-distillation-based MSA models, measured as the Spearman rank correlation between the cosine similarity of sentence representations and the reference labels of the testing dataset in [8].

| Base Model | Fine-tuning data | Score |
|---|---|---|
| quora-distilbert-multilingual | translated 1.3K MSA sentence pairs | 0.7665 |
| distiluse-base-multilingual-cased-v2 | translated 1.3K MSA sentence pairs | 0.7752 |
| distiluse-base-multilingual-cased-v1 | translated 1.3K MSA sentence pairs | 0.7778 |
| stsb-xlm-r-multilingual | translated 1.3K MSA sentence pairs | 0.7785 |
| paraphrase-xlm-r-multilingual-v1 | translated 1.3K MSA sentence pairs | 0.7918 |
| paraphrase-xlm-r-multilingual-v1 | translated 1.3K MSA sentence pairs + original Arabic STS | 0.7999 |
| paraphrase-multilingual-mpnet-base-v2 | translated 1.3K MSA sentence pairs | 0.8012 |
| paraphrase-multilingual-mpnet-base-v2 | translated 1.3K MSA sentence pairs + original Arabic STS | |
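Fine-tuning the distilled models on the translated pairs follows the standard sentence-embedding recipe: each training example is a sentence pair whose gold STS label is rescaled from the 0–5 annotation range to [0, 1], so it can be regressed against cosine similarity. A hedged sketch of that data preparation (function name and toy data are ours; the sentence_transformers training call is only indicated in a comment):

```python
def make_sts_examples(pairs, labels):
    # rescale STS gold labels from the 0-5 annotation range to [0, 1]
    # so they can be regressed against cosine similarity during fine-tuning
    return [(s1, s2, score / 5.0) for (s1, s2), score in zip(pairs, labels)]

examples = make_sts_examples([("sentence one", "sentence two")], [4.0])
print(examples[0][2])  # 0.8

# with sentence_transformers, such tuples would become InputExample objects
# and be passed to model.fit(...) with losses.CosineSimilarityLoss(model)
```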
Accuracy of the main Egyptian models, measured as the Spearman rank correlation between the cosine similarity of sentence representations and the reference labels of the testing dataset in [8] after translation to Egyptian Arabic.

| Base Model | Fine-tuning data | Score |
|---|---|---|
| paraphrase-multilingual-mpnet-base-v2 | translated 1.3K Egyptian sentence pairs | 0.7345 |
| paraphrase-multilingual-mpnet-base-v2 | original Arabic STS, then the translated 1.3K Egyptian sentence pairs | 0.763 |
| paraphrase-xlm-r-multilingual-v1 | original Arabic STS, then the translated 1.3K Egyptian sentence pairs | 0.7647 |
| paraphrase-xlm-r-multilingual-v1 | translated 1.3K Egyptian sentence pairs | |
Accuracy of the main Saudi Arabian models, measured as the Spearman rank correlation between the cosine similarity of sentence representations and the reference labels of the testing dataset in [8] after translation to Saudi Arabic.

| Base Model | Fine-tuning data | Score |
|---|---|---|
| paraphrase-xlm-r-multilingual-v1 | translated 1.3K Saudi sentence pairs | 0.7441 |
| paraphrase-xlm-r-multilingual-v1 | original Arabic STS, then the translated 1.3K Saudi sentence pairs | 0.752 |
| distiluse-base-multilingual-cased-v2 | translated 1.3K Saudi sentence pairs | 0.7608 |
| distiluse-base-multilingual-cased-v2 | original Arabic STS, then the translated 1.3K Saudi sentence pairs | |
Comparison between the proposed models and current state-of-the-art Arabic STS models, measured as the Spearman rank correlation between the cosine similarity of sentence representations and the reference labels of the testing dataset in [8].

| Variant | Model | Spearman/Cosine similarity |
|---|---|---|
| MSA | quora-distilbert-multilingual | 0.7075 |
| MSA | distiluse-base-multilingual-cased-v1 | 0.7586 |
| MSA | distiluse-base-multilingual-cased-v2 | 0.7734 |
| MSA | stsb-xlm-r-multilingual | 0.7867 |
| MSA | paraphrase-xlm-r-multilingual-v1 | 0.791 |
| MSA | paraphrase-multilingual-mpnet-base-v2 | 0.791 |
| MSA | proposed MSA model | |
| Egyptian | quora-distilbert-multilingual | 0.5811 |
| Egyptian | paraphrase-multilingual-mpnet-base-v2 | 0.6847 |
| Egyptian | distiluse-base-multilingual-cased-v2 | 0.6950 |
| Egyptian | stsb-xlm-r-multilingual | 0.7200 |
| Egyptian | distiluse-base-multilingual-cased-v1 | 0.7237 |
| Egyptian | paraphrase-xlm-r-multilingual-v1 | 0.7516 |
| Egyptian | proposed Egyptian model | |
| Saudi | quora-distilbert-multilingual | 0.5706 |
| Saudi | paraphrase-multilingual-mpnet-base-v2 | 0.6784 |
| Saudi | stsb-xlm-r-multilingual | 0.6879 |
| Saudi | paraphrase-xlm-r-multilingual-v1 | 0.7145 |
| Saudi | distiluse-base-multilingual-cased-v1 | 0.7310 |
| Saudi | distiluse-base-multilingual-cased-v2 | 0.7410 |
| Saudi | proposed Saudi model | |