Abstract
Given a nonlinear system and a performance index to be minimized, we present a general approach to expressing the finite-time optimal feedback control law applicable to different types of boundary conditions. Starting from the necessary conditions for optimality, represented by a Hamiltonian system, we solve the Hamilton-Jacobi equation for the generating function of a specific canonical transformation. This enables us to obtain the optimal feedback control for fundamentally different sets of boundary conditions using only a series of algebraic manipulations and partial differentiations. Furthermore, the proposed approach reveals the insight that the optimal cost functions for a given dynamical system can be decomposed into a single generating function, which depends only on the dynamics, plus a term representing the boundary conditions. This result is formalized as a theorem. The whole procedure provides an advantage over methods rooted in dynamic programming, which require the Hamilton-Jacobi-Bellman equation to be solved repeatedly for each type of boundary condition. The cost of this favorable versatility is a doubling of the dimension of the partial differential equation to be solved.
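As a rough illustration of the machinery the abstract alludes to, the following is a minimal sketch of the standard Hamiltonian optimal control setup and the Hamilton-Jacobi equation for a generating function of the canonical transformation between boundary and current states. The notation ($x$, $\lambda$, $u$, $H$, $F_1$, $x_0$) is illustrative and not taken from the paper itself.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}

% Necessary conditions for optimality expressed as a Hamiltonian system,
% with the optimal control absorbed by minimizing the pre-Hamiltonian over u.
\begin{align*}
  \dot{x} &= \frac{\partial H}{\partial \lambda}, &
  \dot{\lambda} &= -\frac{\partial H}{\partial x}, &
  H(x,\lambda,t) &= \min_{u}\bigl[\, L(x,u,t) + \lambda^{\mathsf T} f(x,u,t) \,\bigr].
\end{align*}

% A generating function F_1(x, x_0, t) relating the current state x to the
% initial state x_0 satisfies a Hamilton-Jacobi equation; its partial
% derivatives recover the costates, and hence the feedback control,
% by differentiation and algebra alone.
\begin{align*}
  \frac{\partial F_1}{\partial t}
    + H\!\left(x, \frac{\partial F_1}{\partial x}, t\right) &= 0, &
  \lambda &= \frac{\partial F_1}{\partial x}, &
  \lambda_0 &= -\frac{\partial F_1}{\partial x_0}.
\end{align*}

\end{document}
```

Under this reading, once a single generating function is in hand, other classes of boundary conditions can be reached by Legendre-type transformations and partial differentiation rather than by solving a new partial differential equation, which is consistent with the versatility claimed in the abstract; the generating function, however, depends on both the current and boundary state variables, which is the doubling of dimension mentioned at the end.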
| Original language | English |
| --- | --- |
| Pages (from-to) | 869-875 |
| Number of pages | 7 |
| Journal | Automatica |
| Volume | 42 |
| Issue number | 5 |
| DOIs | |
| Publication status | Published - May 2006 |
Bibliographical note
Funding Information: The authors acknowledge support from NSF Grant CMS-0408542.
All Science Journal Classification (ASJC) codes
- Control and Systems Engineering
- Electrical and Electronic Engineering