
Value and Policy Iterations in Optimal Control and Adaptive Dynamic Programming.

Dimitri P. Bertsekas.

Abstract

In this paper, we consider discrete-time infinite horizon problems of optimal control to a terminal set of states. These are the problems that are often taken as the starting point for adaptive dynamic programming. Under very general assumptions, we establish the uniqueness of the solution of Bellman's equation, and we provide convergence results for value and policy iterations.
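The value iteration discussed in the abstract can be illustrated on a small instance of the problem class it studies: optimal control to a terminal set of states. The sketch below is an assumption-laden illustration, not the paper's construction; the toy graph, state names, and tolerance are invented for the example.

```python
# Minimal value-iteration sketch for a deterministic shortest-path problem
# with an absorbing terminal state "t" (illustrative toy example only).
# transitions[s][a] = (next_state, stage_cost)
transitions = {
    0: {"a": (1, 1.0), "b": (2, 4.0)},
    1: {"a": ("t", 2.0)},
    2: {"a": ("t", 1.0)},
}

def value_iteration(transitions, tol=1e-9, max_iter=1000):
    # J holds the current cost-to-go estimate; the terminal cost is zero.
    J = {s: 0.0 for s in transitions}
    J["t"] = 0.0
    for _ in range(max_iter):
        J_new = dict(J)
        for s, acts in transitions.items():
            # Bellman update: minimize stage cost plus cost-to-go of successor.
            J_new[s] = min(c + J[ns] for ns, c in acts.values())
        if max(abs(J_new[s] - J[s]) for s in J) < tol:
            return J_new  # fixed point of Bellman's equation (to tolerance)
        J = J_new
    return J

J = value_iteration(transitions)
# J[0] == 3.0: the optimal path 0 -> 1 -> t costs 1.0 + 2.0
```

Here the iterates converge to the unique solution of Bellman's equation in a few sweeps; the paper's contribution is establishing such uniqueness and convergence under much weaker assumptions than this finite deterministic toy case requires.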

Year:  2015        PMID: 28055911     DOI: 10.1109/TNNLS.2015.2503980

Source DB: PubMed        Journal: IEEE Trans Neural Netw Learn Syst        ISSN: 2162-237X        Impact factor: 10.451


  1 in total

Review 1.  Uncertainty quantification and optimal decisions.

Authors:  C L Farmer
Journal:  Proc Math Phys Eng Sci       Date:  2017-04-26       Impact factor: 2.704

