
Discrete-time nonlinear HJB solution using approximate dynamic programming: convergence proof.

Asma Al-Tamimi, Frank L. Lewis, Murad Abu-Khalaf.

Abstract

Convergence of the value-iteration-based heuristic dynamic programming (HDP) algorithm is proven for general nonlinear systems. That is, it is shown that HDP converges to the optimal control and the optimal value function that solves the Hamilton-Jacobi-Bellman equation appearing in infinite-horizon discrete-time (DT) nonlinear optimal control. It is assumed that, at each iteration, the value and action update equations can be exactly solved. Two standard neural networks (NNs) are used: a critic NN approximates the value function, while an action NN approximates the optimal control policy. It is stressed that this approach allows HDP to be implemented without knowing the internal dynamics of the system. The exact-solution assumption holds for some classes of nonlinear systems, specifically for the DT linear quadratic regulator (LQR), where the action is linear and the value quadratic in the states, so that the NNs have zero approximation error. It is stressed that, for the LQR, HDP may be implemented without knowing the system A matrix by using two NNs. This fact is not generally appreciated in the folklore of HDP for the DT LQR, where only one critic NN is generally used.
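In the DT LQR special case mentioned in the abstract, the value and action updates can be solved exactly: with V_j(x) = x'P_j x, value iteration reduces to the Riccati difference recursion started from P_0 = 0. The sketch below illustrates that recursion numerically; the system matrices, weights, and iteration count are illustrative assumptions, not taken from the paper, and the model-based form shown here (which uses A explicitly) is only a stand-in for the two-NN implementation the paper describes.

```python
import numpy as np

def hdp_value_iteration_lqr(A, B, Q, R, iters=500):
    """Value-iteration HDP specialized to the DT LQR.

    At each step the greedy (action) update gives the feedback gain
    K_j, and the value update gives P_{j+1} from P_j:
        K_j     = (R + B'P_j B)^{-1} B'P_j A
        P_{j+1} = Q + A'P_j A - A'P_j B K_j
    Starting from P_0 = 0, P_j converges to the stabilizing solution
    of the discrete algebraic Riccati equation.
    """
    n = A.shape[0]
    P = np.zeros((n, n))  # P_0 = 0: iteration starts from the zero value function
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)  # greedy policy gain
        P = Q + A.T @ P @ A - A.T @ P @ B @ K              # value update
    return P, K

# Hypothetical example: a marginally unstable double-integrator-like system.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2)
R = np.array([[1.0]])
P, K = hdp_value_iteration_lqr(A, B, Q, R)
# On convergence, P satisfies the DARE and A - B K is Schur stable.
```

In the paper's actual scheme, the same fixed point is reached without knowledge of A: the critic NN fits the quadratic value x'P_j x and the action NN fits the linear policy -K_j x from sampled data, which is what makes the two-network implementation model-free.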


Year:  2008        PMID: 18632382     DOI: 10.1109/TSMCB.2008.926614

Source DB:  PubMed          Journal:  IEEE Trans Syst Man Cybern B Cybern        ISSN: 1083-4419


  1 in total

1.  Efficient Actor-Critic Algorithm with Hierarchical Model Learning and Planning.

Authors:  Shan Zhong; Quan Liu; QiMing Fu
Journal:  Comput Intell Neurosci       Date:  2016-10-03
