Abstract
A computational approach based on differential transformation is proposed for solving optimal control problems of dynamical systems. The optimal control law is constructed by solving a two-point boundary-value problem or a Hamilton-Jacobi-Bellman partial differential equation. Using differential transformation, the ordinary or partial differential equations are converted into a system of nonlinear algebraic equations. Applying the inverse transformation then yields the optimal solution as a finite series in a chosen basis. The differential transformation approach is shown to be simple to implement, flexible in handling optimal control problems with various types of dynamics, and computationally efficient. The performance of the proposed approach is demonstrated through numerical examples.
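To make the underlying idea concrete, the following is a minimal sketch of the differential transformation technique applied to a simple scalar ODE x'(t) = -a x(t), not to the paper's optimal-control formulation; the function names and the test problem are illustrative assumptions. The differential transform X(k) = x^(k)(0)/k! converts the ODE into an algebraic recurrence, and the inverse transformation reconstructs the solution as a finite power series.

```python
import math

def dtm_linear_decay(x0, a, n_terms):
    """Differential transform of x'(t) = -a*x(t), x(0) = x0.

    With X(k) = x^(k)(0)/k!, the ODE becomes the algebraic recurrence
    (k + 1) * X(k+1) = -a * X(k), which is solved term by term.
    """
    X = [x0]
    for k in range(n_terms - 1):
        X.append(-a * X[k] / (k + 1))
    return X

def inverse_transform(X, t):
    """Inverse transformation: finite power-series reconstruction of x(t)."""
    return sum(Xk * t**k for k, Xk in enumerate(X))

if __name__ == "__main__":
    # Compare the finite-series reconstruction with the exact solution exp(-t).
    X = dtm_linear_decay(x0=1.0, a=1.0, n_terms=15)
    for t in (0.5, 1.0, 2.0):
        print(f"t={t}: series={inverse_transform(X, t):.6f}, exact={math.exp(-t):.6f}")
```

In the optimal control setting described in the abstract, the same mechanism is applied to the necessary-condition equations (the two-point boundary-value problem or the Hamilton-Jacobi-Bellman equation) rather than to a single initial-value ODE, producing nonlinear algebraic equations in the transformed coefficients.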