Literature DB >> 33679967

MTQA: Text-Based Multitype Question and Answer Reading Comprehension Model.

Deguang Chen, Ziping Ma, Lin Wei, Jinlin Ma, Yanbin Zhu.

Abstract

Text-based multitype question answering is a research hotspot in the field of reading comprehension. Multitype reading comprehension models have emerged only recently, draw on heterogeneous corpora, and are difficult to construct, so work in this area remains scarce and improving model performance is a pressing need. This paper proposes a text-based multitype question and answer reading comprehension model (MTQA) built on a multilayer transformer encoder-decoder structure. To handle the characteristics of multitype corpora, the decoder adds heads for answer-type prediction, span (fragment) decoding, arithmetic decoding, counting decoding, and negation. High-performance ELECTRA checkpoints are employed, and secondary pretraining on these checkpoints together with an absolute loss function is designed to further improve performance. Experimental results show that MTQA outperforms the best existing models on the DROP and QUOREF corpora, demonstrating strong feature extraction and relatively strong generalization capabilities.
Copyright © 2021 Deguang Chen et al.
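Based only on the architecture the abstract describes (an encoder feeding parallel decoding heads for answer-type prediction, span extraction, arithmetic, counting, and negation), a minimal numpy sketch of that kind of multi-head answer routing might look like the following. All class names, layer sizes, and per-head logic here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def linear(x, w, b):
    return x @ w + b

def softmax(x):
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

class MultiTypeDecoder:
    """Hypothetical sketch: route encoder output to one of several
    answer heads, as in DROP-style multitype QA models."""
    TYPES = ["span", "arithmetic", "count", "negation"]

    def __init__(self, hidden=16):
        n = len(self.TYPES)
        # Answer-type head: one logit per answer type, read off enc[0]
        # (playing the [CLS] role).
        self.w_type = rng.normal(size=(hidden, n)); self.b_type = np.zeros(n)
        # Span head: start/end logits per token.
        self.w_span = rng.normal(size=(hidden, 2)); self.b_span = np.zeros(2)
        # Arithmetic head: per-number sign in {0, +1, -1}.
        self.w_sign = rng.normal(size=(hidden, 3)); self.b_sign = np.zeros(3)
        # Count head: classify a small integer count (0..9).
        self.w_count = rng.normal(size=(hidden, 10)); self.b_count = np.zeros(10)
        # Negation head: sketched here as a binary decision.
        self.w_neg = rng.normal(size=(hidden, 2)); self.b_neg = np.zeros(2)

    def decode(self, enc, numbers):
        """enc: (seq_len, hidden) encoder output.
        numbers: list of (token_position, value) for numbers in the passage."""
        probs = softmax(linear(enc[0], self.w_type, self.b_type))
        atype = self.TYPES[int(probs.argmax())]
        if atype == "span":
            logits = linear(enc, self.w_span, self.b_span)
            start = int(logits[:, 0].argmax())
            end = start + int(logits[start:, 1].argmax())  # end >= start
            return atype, (start, end)
        if atype == "arithmetic":
            # Sum the passage numbers with a predicted sign each.
            total = 0.0
            for pos, value in numbers:
                sign = (0, 1, -1)[int(linear(enc[pos], self.w_sign, self.b_sign).argmax())]
                total += sign * value
            return atype, total
        if atype == "count":
            return atype, int(linear(enc[0], self.w_count, self.b_count).argmax())
        return atype, bool(linear(enc[0], self.w_neg, self.b_neg).argmax())

# Usage with random stand-in encoder output:
dec = MultiTypeDecoder(hidden=16)
enc = rng.normal(size=(8, 16))
numbers = [(2, 3.0), (5, 7.0)]
atype, answer = dec.decode(enc, numbers)
```

The point of the sketch is the control flow: a single type head picks which specialized head produces the final answer, which is how such models serve span, arithmetic, counting, and negation questions from one shared encoder.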


Year:  2021        PMID: 33679967      PMCID: PMC7910065          DOI: 10.1155/2021/8810366

Source DB:  PubMed          Journal:  Comput Intell Neurosci


  1 in total

Review 1.  A Comprehensive Survey of Abstractive Text Summarization Based on Deep Learning.

Authors:  Mengli Zhang; Gang Zhou; Wanting Yu; Ningbo Huang; Wenfen Liu
Journal:  Comput Intell Neurosci       Date:  2022-08-01
