
Deep Direct Reinforcement Learning for Financial Signal Representation and Trading.

Yue Deng, Feng Bao, Youyong Kong, Zhiquan Ren, Qionghai Dai.   

Abstract

Can we train a computer to beat experienced traders at financial asset trading? In this paper, we address this challenge by introducing a recurrent deep neural network (NN) for real-time financial signal representation and trading. Our model draws on two biologically inspired learning paradigms: deep learning (DL) and reinforcement learning (RL). In this framework, the DL part automatically senses the dynamic market condition for informative feature learning. The RL module then interacts with the deep representations and makes trading decisions to accumulate the ultimate rewards in an unknown environment. The learning system is implemented in a complex NN that exhibits both deep and recurrent structure; hence, we propose a task-aware backpropagation through time method to cope with the gradient vanishing issue in deep training. The robustness of the neural system is verified on both stock and commodity futures markets under broad testing conditions.
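For orientation, the "direct" reinforcement setup the abstract describes optimizes trading profit net of transaction costs rather than a value function, in the spirit of Moody and Saffell's direct recurrent reinforcement learning. Below is a minimal, hypothetical sketch of that reward computation; the function name, the `cost` parameter, and the {-1, 0, 1} position encoding are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

def cumulative_reward(prices, positions, cost=0.001):
    """Total trading profit of a position sequence, net of transaction costs.

    prices:    length T+1 array of asset prices.
    positions: length T array in {-1, 0, 1} (short / neutral / long),
               the position held over each price interval; trading starts flat.
    """
    prices = np.asarray(prices, dtype=float)
    positions = np.asarray(positions, dtype=float)
    returns = np.diff(prices)                       # price change over each interval
    prev = np.concatenate(([0.0], positions[:-1]))  # previous position (flat at start)
    # Per-step reward: profit of the held position minus a cost
    # proportional to the size of the position change.
    per_step = positions * returns - cost * np.abs(positions - prev)
    return per_step.sum()
```

In the paper's full system, the positions would be produced by the recurrent deep network and this cumulative reward would be maximized end-to-end via the proposed task-aware backpropagation through time.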

Year:  2016        PMID: 26890927     DOI: 10.1109/TNNLS.2016.2522401

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw Learn Syst        ISSN: 2162-237X            Impact factor:   10.451


Related articles: 9 in total

Review 1.  Machine behaviour.

Authors:  Iyad Rahwan; Manuel Cebrian; Nick Obradovich; Josh Bongard; Jean-François Bonnefon; Cynthia Breazeal; Jacob W Crandall; Nicholas A Christakis; Iain D Couzin; Matthew O Jackson; Nicholas R Jennings; Ece Kamar; Isabel M Kloumann; Hugo Larochelle; David Lazer; Richard McElreath; Alan Mislove; David C Parkes; Alex 'Sandy' Pentland; Margaret E Roberts; Azim Shariff; Joshua B Tenenbaum; Michael Wellman
Journal:  Nature       Date:  2019-04-24       Impact factor: 49.962

2.  Dynamic stock-decision ensemble strategy based on deep reinforcement learning.

Authors:  Xiaoming Yu; Wenjun Wu; Xingchuang Liao; Yong Han
Journal:  Appl Intell (Dordr)       Date:  2022-05-09       Impact factor: 5.019

3.  Biased Pressure: Cyclic Reinforcement Learning Model for Intelligent Traffic Signal Control.

Authors:  Bunyodbek Ibrokhimov; Young-Joo Kim; Sanggil Kang
Journal:  Sensors (Basel)       Date:  2022-04-06       Impact factor: 3.576

4.  Action-specialized expert ensemble trading system with extended discrete action space using deep reinforcement learning.

Authors:  JoonBum Leem; Ha Young Kim
Journal:  PLoS One       Date:  2020-07-27       Impact factor: 3.240

Review 5.  State-of-the-art in artificial neural network applications: A survey.

Authors:  Oludare Isaac Abiodun; Aman Jantan; Abiodun Esther Omolara; Kemi Victoria Dada; Nachaat AbdElatif Mohamed; Humaira Arshad
Journal:  Heliyon       Date:  2018-11-23

6.  Effectively training neural networks for stock index prediction: Predicting the S&P 500 index without using its index data.

Authors:  Jinho Lee; Jaewoo Kang
Journal:  PLoS One       Date:  2020-04-10       Impact factor: 3.240

7.  Structural break-aware pairs trading strategy using deep reinforcement learning.

Authors:  Jing-You Lu; Hsu-Chao Lai; Wen-Yueh Shih; Yi-Feng Chen; Shen-Hang Huang; Hao-Han Chang; Jun-Zhe Wang; Jiun-Long Huang; Tian-Shyr Dai
Journal:  J Supercomput       Date:  2021-08-17       Impact factor: 2.474

Review 8.  Protein design via deep learning.

Authors:  Wenze Ding; Kenta Nakai; Haipeng Gong
Journal:  Brief Bioinform       Date:  2022-05-13       Impact factor: 13.994

9.  An Autonomous Path Planning Model for Unmanned Ships Based on Deep Reinforcement Learning.

Authors:  Siyu Guo; Xiuguo Zhang; Yisong Zheng; Yiquan Du
Journal:  Sensors (Basel)       Date:  2020-01-11       Impact factor: 3.576

