Practical Methods for Optimal Control using Nonlinear Programming
- Research Article
1
- 10.2514/1.g007311
- May 9, 2023
- Journal of Guidance, Control, and Dynamics
State Transition Tensors for Continuous-Thrust Control of Three-Body Relative Motion
- Conference Article
3
- 10.7148/2009-0352-0358
- Jun 9, 2009
A neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem, which is solved with an adaptive critic neural network. The proposed method is illustrated on the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive-critic-based systematic approach holds promise for obtaining optimal control under control and state constraints.

INTRODUCTION. Optimal control of nonlinear systems is one of the most active subjects in control theory. An analytical solution rarely exists, although several numerical approaches have been proposed for solving an optimal control problem (see, for example, (Polak, 1997), (Kirk, 1998)). Most of the literature on numerical methods for general optimal control problems focuses on algorithms for solving the discretized problems. The basic idea of these methods is to apply nonlinear programming techniques to the resulting finite-dimensional optimization problem (Buskens et al., 2000). When Euler integration methods are used, the recursive structure of the resulting discrete-time dynamics can be exploited in computing the first-order necessary conditions. In recent years, multi-layer feedforward neural networks have been used for obtaining numerical solutions to optimal control problems (Padhi et al., 2001), (Padhi et al., 2006). We take a hyperbolic tangent sigmoid transfer function for the hidden layer and a linear transfer function for the output layer. The paper extends the adaptive critic neural network architecture proposed by (Padhi et al., 2001) to optimal control problems with control and state constraints.

The paper is organized as follows. In Section 2, the optimal control problems with control and state constraints are introduced. We summarize the necessary optimality conditions and give a short overview of basic results, including the iterative numerical methods. Section 3 discusses discretization methods for the given optimal control problem and the form of the resulting nonlinear programming problems. Section 4 presents a short description of the adaptive critic neural network synthesis for optimal control problems with state and control constraints. Section 5 describes a nitrogen transformation model. In Section 6, we apply the discussed methods to the nitrogen transformation cycle; the goal is to compare short-term and long-term strategies of assimilation of nitrogen compounds. Conclusions are presented in Section 7.

OPTIMAL CONTROL PROBLEM. We consider a nonlinear control problem subject to control and state constraints. Let x(t) ∈ R^n denote the state of the system and u(t) ∈ R^m the control on a given time interval [t0, tf]. The optimal control problem is to minimize

F(x, u) = g(x(tf)) + ∫_{t0}^{tf} f0(x(t), u(t)) dt    (1)
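The Euler-based transcription this abstract describes can be sketched as follows. This is a hypothetical toy problem (scalar linear dynamics, quadratic running cost, a box constraint on the control), not the paper's nitrogen transformation model; it exploits the recursive structure of the discrete-time dynamics by eliminating the state, so the NLP variables are the control values alone:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy problem: minimize the integral of (x^2 + u^2)
# for x' = -x + u, x(0) = 1, with |u| <= 0.5 on [0, 1].
N, T = 20, 1.0
h = T / N  # Euler step size

def objective(u):
    x, cost = 1.0, 0.0
    for uk in u:                     # forward Euler recursion: the state is
        cost += h * (x**2 + uk**2)   # recovered on the fly, leaving an NLP
        x = x + h * (-x + uk)        # in the N control values only
    return cost

# The control bound becomes simple box constraints on the NLP variables.
res = minimize(objective, np.zeros(N), bounds=[(-0.5, 0.5)] * N)
```

Here the control constraint maps directly to NLP bounds, which is the simplest instance of the constrained transcription the paper treats.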
- Research Article
69
- 10.1137/s0363012901385769
- Jan 1, 2002
- SIAM Journal on Control and Optimization
This work is concerned with the maximum principles for optimal control problems governed by 3-dimensional Navier--Stokes equations. Some types of state constraints (time variables) are considered.
- Research Article
198
- 10.1021/ie00095a010
- Nov 1, 1989
- Industrial & Engineering Chemistry Research
Accurate solution of differential-algebraic optimization problems
- Research Article
136
- 10.2514/3.11428
- Jan 1, 1993
- Journal of Guidance, Control, and Dynamics
One of the most effective numerical techniques for the solution of trajectory optimization and optimal control problems is the direct transcription method. This approach combines a nonlinear programming algorithm with discretization of the trajectory dynamics. The resulting mathematical programming problem is characterized by matrices that are large and sparse. Constraints on the path of the trajectory are then treated as algebraic inequalities to be satisfied by the nonlinear program. This paper describes a nonlinear programming algorithm that exploits the matrix sparsity produced by the transcription formulation. Numerical experience is reported for trajectories with both state and control variable equality and inequality path constraints.

It is well known that the solution of an optimal control or trajectory optimization problem can be posed as the solution of a two-point boundary value problem. This problem requires solving a set of nonlinear ordinary differential equations; the first set is defined by the vehicle dynamics and the second set (of adjoint differential equations) by the optimality conditions. Boundary conditions are imposed by the problem requirements as well as the optimality criteria. By discretizing the dynamic variables, this boundary value problem can be reduced to the solution of a set of nonlinear algebraic equations. This approach has been successfully utilized [1-5] for applications without path constraints. Since the approach requires adjoint equations, it is subject to a number of difficulties. First, the adjoint equations are often very nonlinear and cumbersome to obtain for complex vehicle dynamics, especially when thrust and aerodynamic forces are given by tabular data. Second, the iterative procedure requires an initial guess for the adjoint variables, and this can be quite difficult because they lack a physical interpretation. Third, convergence of the iterations is often quite sensitive to the accuracy of the adjoint guess. Finally, the adjoint variables may be discontinuous when the solution enters or leaves an inequality path constraint. Difficulties associated with adjoint equations are avoided by the direct transcription or collocation methods [6-10]. In this approach, the dynamic equations are discretized, and the optimal control problem is transformed into a nonlinear program, which can be solved directly. The nonlinear programming problem is large and sparse, and a method for solving it is presented in Ref. 7. This paper extends the method of Ref. 7 to efficiently handle inequality constraints and presents a nonlinear programming algorithm designed to exploit the properties of the problem that results from direct transcription of the trajectory optimization application.
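The direct-transcription structure this abstract describes can be illustrated with a minimal sketch. The scalar dynamics, cost, and grid below are assumed for illustration and are unrelated to the paper's trajectory problem; the point is that states and controls are both NLP variables, and each discretization "defect" becomes an equality constraint, which is what produces the large sparse structure the paper exploits:

```python
import numpy as np
from scipy.optimize import minimize

# Assumed toy problem: x' = -x + u, x(0) = 1 on a 10-point Euler grid.
N, h, x0 = 10, 0.1, 1.0

def split(z):                 # z packs [x_1, ..., x_N, u_0, ..., u_{N-1}]
    return z[:N], z[N:]

def objective(z):
    x, u = split(z)
    return x[-1]**2 + h * np.sum(u**2)   # terminal penalty + control effort

def defects(z):
    # Euler defect x_{k+1} - x_k - h*f(x_k, u_k) at every grid interval;
    # each component touches only neighboring variables, hence sparsity.
    x, u = split(z)
    xs = np.concatenate(([x0], x))       # prepend the fixed initial state
    return xs[1:] - xs[:-1] - h * (-xs[:-1] + u)

res = minimize(objective, np.zeros(2 * N),
               constraints={"type": "eq", "fun": defects})
```

A path inequality would enter the same way, as an algebraic inequality constraint on the grid variables; the sparse SQP machinery the paper develops is what makes this scale.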
- Research Article
2
- 10.2307/2153386
- Oct 1, 1995
- Mathematics of Computation
1 A Survey on Computational Optimal Control
- Issues in the Direct Transcription of Optimal Control Problems to Sparse Nonlinear Programs
- Optimization in Control of Robots
- Large-scale SQP Methods and their Application in Trajectory Optimization
- Solving Optimal Control and Pursuit-Evasion Game Problems of High Complexity
2 Theoretical Aspects of Optimal Control and Nonlinear Programming
- Continuation Methods in Boundary Value Problems
- Second Order Optimality Conditions for Singular Extremals
- Synthesis of Adaptive Optimal Controls for Linear Dynamic Systems
- Control Applications of Reduced SQP Methods
- Time Optimal Control of Mechanical Systems
3 Algorithms for Optimal Control Calculations
- Second Order Algorithm for Time Optimal Control of a Linear System
- An SQP-type Solution Method for Constrained Discrete-Time Optimal Control Problems
- Numerical Methods for Solving Differential Games, Prospective Applications to Technical Problems
- Construction of the Optimal Feedback Controller for Constrained Optimal Control Problems with Unknown Disturbances
- Repetitive Optimization for Predictive Control of Dynamic Systems under Uncertainty
- Optimal Control of Multistage Systems Described by High-Index Differential-Algebraic Equations
- A New Class of a High Order Interior Point Method for the Solution of Convex Semiinfinite Optimization Problems
- A Structured Interior Point SQP Method for Nonlinear Optimal Control Problems
4 Software for Optimal Control Calculations
- Automated Approach for Optimizing Dynamic Systems
- ANDECS: A Computation Environment for Control Applications of Optimization
- Application of Automatic Differentiation to Optimal Control Problems
- OCCAL: A Mixed Symbolic-Numeric Optimal Control CALculator
5 Applications of Optimal Control
- A Robotic Satellite with Simplified Design
- Nonlinear Control under Constraints of a Biological System
- An Object-Oriented Approach to Optimally Describe and Specify a SCADA System Applied to a Power Network
- Near-Optimal Flight Trajectories Generated by Neural Networks
- Performance of a Feedback Method with Respect to Changes in the Air-Density during the Ascent of a Two-Stage-To-Orbit Vehicle
- Linear Optimal Control for Reentry Flight
- Steady-State Modelling of Turbine Engine with Controllers
- Shortest Paths for Satellite Mounted Robot Manipulators
- Optimal Control of the Industrial Robot Manutec r3
- Conference Article
7
- 10.2514/6.1991-2739
- Aug 12, 1991
One of the most effective numerical techniques for the solution of trajectory optimization and optimal control problems is the direct transcription method. This approach combines a nonlinear programming algorithm with discretization of the trajectory dynamics. The resulting mathematical programming problem is characterized by matrices that are large and sparse. Constraints on the path of the trajectory are then treated as algebraic inequalities to be satisfied by the nonlinear program. This paper describes a nonlinear programming algorithm that exploits the matrix sparsity produced by the transcription formulation. Numerical experience is reported for trajectories with both state and control variable equality and inequality path constraints.

It is well known that the solution of an optimal control or trajectory optimization problem can be posed as the solution of a two-point boundary value problem. This problem requires solving a set of nonlinear ordinary differential equations; the first set is defined by the vehicle dynamics and the second set (of adjoint differential equations) by the optimality conditions. Boundary conditions are imposed by the problem requirements as well as the optimality criteria. By discretizing the dynamic variables, this boundary value problem can be reduced to the solution of a set of nonlinear algebraic equations. This approach has been successfully utilized [1-5] for applications without path constraints. Since the approach requires adjoint equations, it is subject to a number of difficulties. First, the adjoint equations are often very nonlinear and cumbersome to obtain for complex vehicle dynamics, especially when thrust and aerodynamic forces are given by tabular data. Second, the iterative procedure requires an initial guess for the adjoint variables, and this can be quite difficult because they lack a physical interpretation. Third, convergence of the iterations is often quite sensitive to the accuracy of the adjoint guess. Finally, the adjoint variables may be discontinuous when the solution enters or leaves an inequality path constraint. Difficulties associated with adjoint equations are avoided by the direct transcription or collocation methods [6-10]. In this approach, the dynamic equations are discretized, and the optimal control problem is transformed into a nonlinear program, which can be solved directly. The nonlinear programming problem is large and sparse, and a method for solving it is presented in Ref. 7. This paper extends the method of Ref. 7 to efficiently handle inequality constraints and presents a nonlinear programming algorithm designed to exploit the properties of the problem that results from direct transcription of the trajectory optimization application.
- Conference Article
3
- 10.1115/detc2011-48750
- Jan 1, 2011
This paper presents the implementation of a numerical algorithm for the direct solution of optimal control and parameter identification problems. The problems may include differential equations that define the state, inequality constraints, and equality constraints at the initial and final times. The numerical method is based on transforming the infinite dimensional optimal control problem into a finite dimensional nonlinear programming problem. The transformation technique involves dividing the time interval of interest into a mesh that need not be uniform. In each subinterval of the mesh the control input is approximated using a piecewise polynomial. In particular, the control can be approximated using: (i) piecewise constant, (ii) piecewise linear, or (iii) piecewise cubic polynomials. The explicit Runge-Kutta method is used to obtain an approximate solution of the differential equations that define the state. With the approach used here the states do not appear in the nonlinear programming (NLP) problem. As a result the NLP problem is very compact relative to other numerical methods used to solve nonlinear optimal control problems. The NLP problem is solved using a sequential quadratic programming (SQP) technique. The SQP method is based on minimizing the L1 exact penalty function. Each major step of the SQP method solves a strictly convex quadratic programming problem. The paper also describes a simplified interface to the computer programs that implement the method. An example is presented to demonstrate the algorithm.
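The control-parameterization idea in this abstract can be sketched with an assumed toy problem: the control is piecewise constant on each mesh interval and the state is recovered by an explicit Runge-Kutta sweep, so the NLP contains only the control parameters and the states never appear as variables. The dynamics, target, and weights below are hypothetical, not the paper's examples:

```python
import numpy as np
from scipy.optimize import minimize

def rk4_step(f, x, u, h):
    # One classical explicit Runge-Kutta (RK4) step with the control
    # held constant over the step, as in a piecewise-constant mesh.
    k1 = f(x, u)
    k2 = f(x + 0.5 * h * k1, u)
    k3 = f(x + 0.5 * h * k2, u)
    k4 = f(x + h * k3, u)
    return x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

f = lambda x, u: -x + u                  # assumed scalar dynamics
N, h, x0, target = 16, 0.1, 0.0, 0.8     # assumed mesh and boundary data

def cost(u):
    x = x0
    for uk in u:                          # one RK4 step per mesh interval
        x = rk4_step(f, x, uk, h)
    # Drive x(T) to the target with a small control-effort penalty.
    return (x - target)**2 + 1e-3 * h * np.sum(np.asarray(u)**2)

res = minimize(cost, np.zeros(N))        # NLP in the control parameters only
```

Because the state is eliminated by integration, the NLP has only N variables, which is the compactness the abstract emphasizes relative to transcription methods that carry the states as unknowns.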
- Conference Article
4
- 10.2514/6.2006-6747
- Jun 15, 2006
In this work the problem of optimizing a low-thrust space trajectory is addressed. The problem can be stated as an optimal control problem in which an objective function related to the controls is minimized while satisfying a series of constraints on the trajectory, both differential and algebraic. The problem has been addressed by transcribing the differential constraints into an NLP problem with a parallel multiple-shooting transcription method, which has been solved with an interior point method. The method that has been developed is particularly suited to problems in which the trajectory is constrained by a great number of inequalities on both states and controls. As an example of this kind of problem, the method has been applied to the design of reconfiguration maneuvers for spacecraft flying in formation, where the collision-avoidance issue leads to the imposition of a large number of inequalities on the states.
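A minimal serial sketch of the multiple-shooting transcription described above: the horizon is split into segments, each integrated independently from its own initial-state NLP variable, with continuity ("matching") equality constraints stitching the segments together. The scalar dynamics, segment counts, and cost here are assumed for illustration; the paper's version is parallel and carries many state inequalities:

```python
import numpy as np
from scipy.optimize import minimize

M, steps, h = 4, 5, 0.05   # assumed: segments, Euler steps per segment, step
x0 = 1.0                   # fixed initial state

def integrate(x, u):
    # Integrate one segment with a constant control (Euler for brevity).
    for _ in range(steps):
        x = x + h * (-x + u)
    return x

def unpack(z):             # z packs [segment end states | segment controls]
    return z[:M], z[M:]

def continuity(z):
    # The propagated end of segment k must equal the NLP variable s[k];
    # segment k starts from the previous segment's end-state variable.
    s, u = unpack(z)
    starts = np.concatenate(([x0], s[:-1]))
    return np.array([integrate(starts[k], u[k]) - s[k] for k in range(M)])

def objective(z):
    s, u = unpack(z)
    return s[-1]**2 + 0.1 * np.sum(u**2)   # terminal state + control effort

res = minimize(objective, np.zeros(2 * M),
               constraints={"type": "eq", "fun": continuity})
```

Because each segment depends only on its own variables, the continuity constraints decouple and can be evaluated in parallel, which is the property the paper's parallel formulation exploits.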
- Research Article
8
- 10.1016/j.amc.2011.05.093
- Jul 7, 2011
- Applied Mathematics and Computation
A numerical method for an optimal control problem with minimum sensitivity on coefficient variation
- Research Article
3
- 10.1002/oca.2974
- Jan 17, 2023
- Optimal Control Applications and Methods
Special issue on “Optimal design and operation of energy systems”
- Conference Article
10
- 10.2514/6.1996-3876
- Jul 29, 1996
It is known that periodic cruise control can save fuel for many aircraft and engine models. This paper applies periodic cruise control to a hypersonic SCRAMJet transport, since fuel saving is critical to its performance in maximizing range. The model was constructed using numerical data and figures from the available literature on space planes. In particular, a heating-rate and a load-factor constraint are considered to make this model more realistic than previous models. The control on the boundary needs to be determined by nonlinear programming, and the Lagrange multipliers exhibit a jump phenomenon at the entry points of the on-boundary arcs. These constraints increase the difficulty of obtaining a numerical solution and also increase the sensitivity to the initial guess required for convergence. By assuming the altitude to be a sinusoidal function of range and by using a bang-bang thrust control, a sub-optimal solution is obtained for the vehicle without the heating-rate and load-factor constraints. This sub-optimal solution serves as a very good initial guess for the optimal solution generated by the minimizing-boundary-condition method. The optimal solution shows a fuel saving of 8.12% over the steady-state cruise, a maximum heating rate of 1202.4 watts per square centimeter, and a maximum load factor of 8.27. The heating-rate and load-factor constraints are then added to the problem. With a maximum heating rate of 400 watts per square centimeter, the fuel saving reduces to 2.45%. With a load factor of seven, the fuel saving does not change much from the unconstrained solution.
An optimal periodic-cruise solution with the maximum heating rate of 1158.0 watts per square centimeter and simultaneously with the maximum load factor of seven is also determined with a fuel saving of

Copyright © 1996 by Chuang and Morimoto. Published by the American Institute of Aeronautics and Astronautics, Inc. with permission.

Nomenclature
- AS = inlet area (m²)
- Aw = wing area (m²)
- a = speed of sound at sea level or at the standard temperature (m/s)
- bj = lapse rate between the j-th and (j+1)-th junction points (j = 0, 2, 4, and 6)
- C = load-factor constraint function
- CD = drag coefficient
- CD0 = zero-lift drag coefficient
- CL = lift coefficient
- CL0 = zero angle-of-attack lift coefficient
- CLα = lift-curve slope
- CTmax = maximum thrust coefficient
- D = drag (N)
- f = system dynamics vector
- g = gravity acceleration at sea level (m/s²)
- h = altitude (km)
- ha = amplitude corresponding to frequency ω of a specified sinusoidal altitude curve
- hb = amplitude corresponding to frequency 2ω of a specified sinusoidal altitude curve
- h̄ = offset of a specified sinusoidal altitude curve
- Isp = specific impulse (s)
- J = cost function
- K = induced drag parameter
- L = lift (N)
- M = Mach number (defined as velocity normalized by a constant speed of sound at sea level)
- m = mass of the vehicle (kg)
- n = load factor
- nmax = specified maximum load factor
- Q = heating rate (W/cm²)
- Qmax = specified maximum heating rate (W/cm²)
- q = dynamic pressure (N/m²)
- R = specific gas constant for air (J/kg·K)
- R0 = radius of the earth (km)
- r = range (km)
- ri = range coordinate of an entry point to a boundary
- rd = range coordinate where a specified throttle falls from 1 to 0
- ru = range coordinate where a specified throttle rises from 0 to 1
- S = heating-rate constraint function
- s = throttle
- T = thrust (N)
- Ti = temperature at the i-th junction point (i = 0, 1, 2, 3, 4, 5, and 6) (K)
- u = control vector
- V = velocity (m/s)
- x = state vector
- α = angle of attack (deg)
- γ = flight-path angle (deg)
- λ = Lagrange multiplier vector
- μ = Lagrange multiplier function associated with the load-factor constraint
- ν = Lagrange multiplier constant associated with the periodic boundary condition on the state
- π = Lagrange multiplier constant associated with the heating-rate constraint at an entry point
- ρ = density of air (kg/m³)
- ω = frequency with respect to range of a specified sinusoidal altitude curve
- ξ = Lagrange multiplier function associated with the heating-rate constraint

Introduction. Ever since the fuel efficiency of oscillatory cruise paths over steady-state cruise paths was first recognized, research on periodic optimal processes has been an active subject, especially for fuel-saving considerations for aircraft and hypersonic vehicles. The steady-state cruise solution was shown not to be fuel-optimal although it satisfies the first-order necessary conditions along the steady-state path. Fuel-optimal periodic trajectories were numerically determined for aircraft and hypersonic vehicles. An optimal periodic control problem for a hypersonic vehicle with a maximum load-factor constraint was recently solved. Those studies, however, did not consider the aerodynamic heating that vehicles experience along the optimal trajectories. Thus, in order to protect the crew and materials of the vehicle from the heat, it is necessary to impose a restriction on the heating rate. This restriction means that the optimal periodic control problem must be solved with an additional state inequality constraint related to the heating rate. In the following sections, the vehicle's dynamics are stated, and the atmospheric density model, the aerodynamic-force models, and the engine thrust model are described. A more realistic aircraft model and a more precise atmospheric model than those used before are adopted here.
An approximation to the optimal control solution was obtained by parametrizing the control time histories and using nonlinear programming techniques to obtain the optimum values of the parameters. This solution, which we call a sub-optimal solution, was in turn used as an initial guess in solving the optimal control problem. The unconstrained optimal periodic solution shows that the vehicle has a peak aerodynamic heating rate of over 1200 watts/cm² and a peak load factor of over 8 g along the optimal trajectory. Thus, we next present the optimal periodic control solution constrained by a maximum load factor of 7.0 g and the optimal periodic control solution constrained by a maximum heating rate of 400 watts/cm². The load-factor constraint constitutes a state-control inequality constraint, and the maximum heating rate constitutes a state inequality constraint. Finally, optimal periodic control solutions are presented with both the maximum load factor and maximum heating rate constraints.

Vehicle's Dynamic Model. The equations of motion for flight in a vertical plane over the non-rotating spherical Earth, with range as the independent variable, are

dh/dr = (tan γ)(1 + h/R0)

dM/dr = [(T cos α − D − m g sin γ)/(M a m cos γ)](1 + h/R0)    (1)
- Research Article
2
- 10.15622/ia.22.1.4
- Jan 27, 2023
- Информатика и автоматизация (Informatics and Automation)
When solving an optimal control problem with either direct or indirect approaches, the main technique is to transfer the problem from infinite-dimensional to finite-dimensional optimization. With all these approaches, however, the result is an open-loop program control that is sensitive to uncertainties, and implementing it on a real object requires building a stabilization system. Introducing the stabilization system changes the dynamics of the object, which means that the optimal control and the optimal trajectory should be calculated for the object together with its stabilization system. As a result, the initial optimal control problem is complex, and the possibility of solving it often depends strongly on the type of object and functional; if the object is made more complicated by the introduction of a stabilization system, the complexity of the problem increases significantly, and applying classical approaches to the optimal control problem becomes time-consuming or impossible. In this paper, a synthesized optimal control method is proposed that implements this logic for developing optimal control systems, overcoming the computational complexity of the problem through modern machine learning methods based on symbolic regression and evolutionary optimization algorithms. According to the approach, the stabilization system of the object is first built relative to some point, and then the position of this equilibrium point becomes a control parameter. In this way the infinite-dimensional optimization problem is translated into a finite-dimensional one, namely, finding the optimal locations of the equilibrium points. The effectiveness of the approach is demonstrated by solving an optimal control problem for a mobile robot.
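The two-stage logic described above (stabilize first, then optimize the equilibrium-point locations) can be sketched with an assumed one-dimensional plant and a fixed proportional feedback law; the paper itself synthesizes the stabilizer by symbolic regression and optimizes with evolutionary algorithms, neither of which is shown here:

```python
import numpy as np
from scipy.optimize import minimize

K, h, steps = 2.0, 0.05, 20   # assumed: feedback gain, step, steps/setpoint
x0, goal = 0.0, 1.0           # assumed initial state and goal

def rollout(points):
    # Inner loop: the stabilizer u = K*(p - x) drives the state toward the
    # current equilibrium point p; each setpoint is held for `steps` steps.
    x, effort = x0, 0.0
    for p in points:
        for _ in range(steps):
            u = K * (p - x)
            effort += h * u**2
            x = x + h * u          # assumed integrator dynamics x' = u
    return x, effort

def cost(points):
    x, effort = rollout(points)
    return (x - goal)**2 + 1e-3 * effort

# Outer loop: the finite-dimensional problem is over the equilibrium-point
# positions themselves (three of them here), not over a control function.
res = minimize(cost, np.zeros(3))
```

The infinite-dimensional control u(t) never appears as an optimization variable; only the three setpoint positions do, which is the dimensionality reduction the abstract describes.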
- Research Article
33
- 10.1016/j.nonrwa.2012.10.017
- Nov 7, 2012
- Nonlinear Analysis: Real World Applications
A class of optimal state-delay control problems
- Book Chapter
- 10.1007/978-1-4471-4757-2_4
- Jan 1, 2013
In this chapter, optimal state feedback control problems of nonlinear systems with time delays are studied. In general, optimal control for time-delay systems is an infinite-dimensional control problem, which is very difficult to solve, and there is presently no good method for dealing with it. In this chapter, the optimal state feedback control problems of nonlinear systems with time delays in both states and controls are investigated. By introducing a delay matrix function, an explicit expression for the optimal control function can be obtained. Next, for nonlinear time-delay systems with saturating actuators, we further study the optimal control problem using a nonquadratic functional, where two optimization processes are developed for searching for the optimal solutions. The above two results are for the infinite-horizon optimal control problem. To the best of our knowledge, there are no results on the finite-horizon optimal control of nonlinear time-delay systems. Hence, in the last part of this chapter, a novel optimal control strategy is developed to solve the finite-horizon optimal control problem for a class of time-delay systems.