
Deep Reinforcement Learning for Sequence-to-Sequence Models.

Yaser Keneshloo, Tian Shi, Naren Ramakrishnan, Chandan K Reddy.   

Abstract

In recent times, sequence-to-sequence (seq2seq) models have gained a lot of popularity and provide state-of-the-art performance in a wide variety of tasks, such as machine translation, headline generation, text summarization, speech-to-text conversion, and image caption generation. The underlying framework for all these models is usually a deep neural network comprising an encoder and a decoder. Although simple encoder-decoder models produce competitive results, many researchers have proposed additional improvements over these seq2seq models, e.g., using an attention-based model over the input, pointer-generation models, and self-attention models. However, such seq2seq models suffer from two common problems: 1) exposure bias and 2) inconsistency between train/test measurement. Recently, a completely novel point of view has emerged in addressing these two problems in seq2seq models, leveraging methods from reinforcement learning (RL). In this survey, we consider seq2seq problems from the RL point of view and provide a formulation combining the power of RL methods in decision-making with seq2seq models that enable remembering long-term memories. We present some of the most recent frameworks that combine the concepts from RL and deep neural networks. Our work aims to provide insights into some of the problems that inherently arise with current approaches and how we can address them with better RL models. We also provide the source code for implementing most of the RL models discussed in this paper to support the complex task of abstractive text summarization and provide some targeted experiments for these RL models, both in terms of performance and training time.
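To make the RL viewpoint in the abstract concrete, below is a minimal, hypothetical sketch of the self-critical policy-gradient idea that such surveys cover: sample a sequence from the decoder, compare its task reward against a greedy-decoding baseline, and weight the sequence log-likelihood by that advantage. This is not the authors' released code; the model, vocabulary size, reward function (a token-overlap stand-in for ROUGE), and all names are illustrative assumptions.

```python
# Hypothetical sketch: self-critical REINFORCE training step for a toy seq2seq decoder.
import torch
import torch.nn as nn
from torch.distributions import Categorical

VOCAB, HIDDEN, MAX_LEN = 50, 32, 10  # illustrative sizes, not from the paper

class TinyDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.rnn = nn.GRUCell(HIDDEN, HIDDEN)
        self.out = nn.Linear(HIDDEN, VOCAB)

    def step(self, tok, h):
        # One decoding step: embed previous token, update state, emit logits.
        h = self.rnn(self.embed(tok), h)
        return self.out(h), h

def rollout(decoder, h, greedy=False):
    """Decode MAX_LEN tokens; return the token ids and the summed log-probability."""
    tok = torch.zeros(1, dtype=torch.long)  # assume <bos> has id 0
    toks, logps = [], []
    for _ in range(MAX_LEN):
        logits, h = decoder.step(tok, h)
        dist = Categorical(logits=logits)
        tok = dist.probs.argmax(-1) if greedy else dist.sample()
        toks.append(tok.item())
        logps.append(dist.log_prob(tok))
    return toks, torch.stack(logps).sum()

def reward(pred, ref):
    # Toy sequence-level reward: fraction of reference tokens recovered (ROUGE-like stand-in).
    return len(set(pred) & set(ref)) / len(set(ref))

decoder = TinyDecoder()
h0 = torch.zeros(1, HIDDEN)        # stands in for an encoder's final state
reference = [3, 7, 7, 12, 5]       # stands in for a reference summary/translation

sample_toks, sample_logp = rollout(decoder, h0, greedy=False)
with torch.no_grad():
    greedy_toks, _ = rollout(decoder, h0, greedy=True)

# Self-critical policy gradient: reinforce samples that beat the greedy baseline.
advantage = reward(sample_toks, reference) - reward(greedy_toks, reference)
loss = -advantage * sample_logp
loss.backward()
print(f"advantage={advantage:+.3f}  loss={loss.item():+.4f}")
```

Because the reward is computed on the model's own sampled output rather than on teacher-forced prefixes, this kind of objective addresses both issues the abstract names: the model is exposed to its own predictions during training, and the training signal is the same (non-differentiable) metric used at test time.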

Year:  2019        PMID: 31425057     DOI: 10.1109/TNNLS.2019.2929141

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw Learn Syst        ISSN: 2162-237X            Impact factor:   10.451


Related reviews: 3 in total

Review 1.  Conversational Agents: Goals, Technologies, Vision and Challenges.

Authors:  Merav Allouch; Amos Azaria; Rina Azoulay
Journal:  Sensors (Basel)       Date:  2021-12-17       Impact factor: 3.576

Review 2.  Spatial relation learning in complementary scenarios with deep neural networks.

Authors:  Jae Hee Lee; Yuan Yao; Ozan Özdemir; Mengdi Li; Cornelius Weber; Zhiyuan Liu; Stefan Wermter
Journal:  Front Neurorobot       Date:  2022-07-28       Impact factor: 3.493

Review 3.  Attention-Based Deep Multiple-Instance Learning for Classifying Circular RNA and Other Long Non-Coding RNA.

Authors:  Yunhe Liu; Qiqing Fu; Xueqing Peng; Chaoyu Zhu; Gang Liu; Lei Liu
Journal:  Genes (Basel)       Date:  2021-12-19       Impact factor: 4.096

