State Transition Tensors for Continuous-Thrust Control of Three-Body Relative Motion
- Research Article
69
- 10.1137/s0363012901385769
- Jan 1, 2002
- SIAM Journal on Control and Optimization
This work is concerned with the maximum principles for optimal control problems governed by 3-dimensional Navier--Stokes equations. Some types of state constraints (time variables) are considered.
- Research Article
8
- 10.1016/j.amc.2011.05.093
- Jul 7, 2011
- Applied Mathematics and Computation
A numerical method for an optimal control problem with minimum sensitivity on coefficient variation
- Research Article
2
- 10.1002/oca.968
- Sep 1, 2010
- Optimal Control Applications and Methods
Process engineers routinely use optimization in designing and operating complex systems as a means to improve their performance. Optimization has thus become a major enabling area over the years, evolving from a methodology of academic interest into a technology that has made, and continues to make, a significant impact on industry. To date, most rigorous optimization implementations have been for the design and operation of lumped systems using steady-state simulation and optimization technologies. However, a majority of natural as well as industrial systems are either inherently transient, have important transients between steady-state phases, and/or are spatially distributed. Interest in dynamic optimization and optimal control of process systems has grown significantly over the last few decades, and much progress has been achieved in solution strategies. Despite its great potential, however, this technology has so far seldom made an impact on the process industry sector. Its implementation does not come without a cost. It requires a thorough understanding of the underlying phenomena, which is hardly compatible with the limited effort and time that can usually be spent on modeling. Moreover, a high level of expertise is still needed for the solution of optimal control problems (OCPs), itself a consequence of the lack of sufficiently fast and reliable numerical solution techniques. In this special issue of Optimal Control Applications and Methods, we provide a selection of articles that address some of these issues and apply advanced techniques for the optimal operation, control and estimation of complex process systems. Bonilla et al. [1] propose a new method for solving nonconvex OCPs. Their method relies on a homotopy-based approach, whereby the original nonconvex OCP is gradually transformed into a simpler convex OCP by varying a homotopy parameter.
A special structure is assumed for the nonconvex OCP, namely that the dynamic system is control-affine and the cost function penalizes deviations from a given reference trajectory, which makes the method well suited for model predictive control (MPC) applications. They demonstrate their methodology on two case studies: a simple parameter estimation problem and the optimal control of an isothermal chemical reactor with Van de Vusse reactions and input multiplicities. They find that the likelihood of finding a global solution to the original nonconvex OCP is greatly improved compared to standard local optimization techniques. The paper by Aliyev and Gatzke [2] presents a nonlinear MPC formulation with prioritized constraint handling. This formulation is particularly relevant for control problems that have relatively limited degrees of freedom compared to the number of control objectives of interest. It ensures that the constrained optimization problem remains feasible at each MPC execution. They develop an implementation of prioritized MPC that is computationally efficient. A closed-loop test on a multivariable refinery facility simulation with significant nonlinearity and input multiplicity is investigated. Because first-principles models are difficult to obtain for such processes, second-order
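The homotopy idea summarized in the Bonilla et al. entry can be sketched compactly. The scalar two-basin objective, the quadratic surrogate, and the ten-step homotopy schedule below are all illustrative assumptions, not the paper's reactor formulation: a convex problem is gradually deformed into the nonconvex one, with each solve warm-started from the previous solution.

```python
# Hypothetical scalar model problem, NOT the paper's case study.
import numpy as np
from scipy.optimize import minimize

def J_ncvx(u):
    # Nonconvex objective: two basins, the better one near u = 1.
    return float((u[0]**2 - 1.0)**2 + 0.1 * (u[0] - 3.0)**2)

def J_cvx(u):
    # Convex surrogate (a simple quadratic reference-tracking term).
    return float((u[0] - 3.0)**2)

# Homotopy loop: start from the convex problem (lam = 0) and gradually
# deform it into the nonconvex one (lam = 1), warm-starting each solve
# from the minimizer of the previous one.
u = np.array([0.0])
for lam in np.linspace(0.0, 1.0, 11):
    obj = lambda v, lam=lam: (1.0 - lam) * J_cvx(v) + lam * J_ncvx(v)
    u = minimize(obj, u, method="BFGS").x
```

Tracking the minimizer along the homotopy path keeps the iterate in the basin of the better solution, which a single local solve started far away can easily miss.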
- Dissertation
- 10.25560/24700
- Mar 1, 2014
This thesis is in the field of Optimal Control. It addresses research questions concerning both the properties of optimal controls and also schemes for control system stabilization based on the solution of optimal control problems. The first part is concerned with the derivation of necessary conditions of optimality for two classes of optimal control problems not covered by earlier theory. The first is the class of optimal control problems with a combination of mixed control-state constraints and pure state constraints in which the dynamics are described by a differential inclusion under weaker hypotheses than have previously been considered. The second is the class of optimal control problems in which the dynamics take the form of a non-smooth differential equation with delays, and where the end-time is included in the decision variables. We shall demonstrate that these new optimality conditions lead to algorithms for solution of certain optimal control problems not amenable to earlier theory. Model Predictive Control (MPC) is an approach to control system design based on solving, at each control update time, an optimal control problem. This is the subject matter of the second part of the thesis. We derive new MPC algorithms for constrained linear and nonlinear systems which, in certain significant respects, are simpler to implement than standard schemes, and which achieve performance specifications under more general conditions than has previously been demonstrated. These include stability and feasibility.
- Research Article
198
- 10.1021/ie00095a010
- Nov 1, 1989
- Industrial & Engineering Chemistry Research
Accurate solution of differential-algebraic optimization problems
- Research Article
391
- 10.1115/1.1483351
- Jul 1, 2002
- Applied Mechanics Reviews
Practical Methods for Optimal Control using Nonlinear Programming
- Book Chapter
- 10.1007/978-1-4471-4757-2_4
- Jan 1, 2013
In this chapter, optimal state feedback control problems of nonlinear systems with time delays are studied. In general, optimal control of time-delay systems is an infinite-dimensional control problem, which is very difficult to solve, and there is presently no good method for dealing with it. In this chapter, the optimal state feedback control problems of nonlinear systems with time delays both in states and controls are investigated. By introducing a delay matrix function, the explicit expression of the optimal control function can be obtained. Next, for nonlinear time-delay systems with saturating actuators, we further study the optimal control problem using a nonquadratic functional, where two optimization processes are developed for searching for the optimal solutions. The above two results are for the infinite-horizon optimal control problem. To the best of our knowledge, there are no results on the finite-horizon optimal control of nonlinear time-delay systems. Hence, in the last part of this chapter, a novel optimal control strategy is developed to solve the finite-horizon optimal control problem for a class of time-delay systems.
- Research Article
44
- 10.1002/aic.690381007
- Oct 1, 1992
- AIChE Journal
This article presents a unified approach to simultaneous solution of optimization and optimal control problems in batch distillation, operating under different modes of operation: variable, constant, or optimal reflux. The simplified, computationally efficient short-cut method and a novel algorithm to solve the optimal control problems in batch distillation form the basis of this unified approach. The short-cut method identifies the feasible region of operation essential for optimization and optimal control problems, and provides analytical partial derivatives of the model parameters crucial to the solution. The new algorithm for the solution of optimal control problems is based on a combination of the maximum principle and NLP optimization techniques. It circumvents the problems associated with the maximum principle approach (iterative solution of a two-point boundary value problem, unbounded control variables, and inability to handle the simultaneous optimization and optimal control problem), and with the coupled ODE discretization-NLP optimization scheme for nonlinear models (higher system nonlinearities, multiplicity of solutions, sensitivity of convergence to initial guesses). This algorithm reduces the dimensionality of the problem, and its nature allows a common platform for optimal solutions under different operating conditions. This article also shows that the different categories of optimal control problems in batch distillation essentially involve the solution of the maximum distillate problem.
- Conference Article
3
- 10.7148/2009-0352-0358
- Jun 9, 2009
A neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining the optimal control with control and state constraints. INTRODUCTION Optimal control of nonlinear systems is one of the most active subjects in control theory. An analytical solution rarely exists, although several numerical computation approaches have been proposed for solving an optimal control problem (for example, see (Polak, 1997), (Kirk, 1998)). Most of the literature that deals with numerical methods for the solution of general optimal control problems focuses on algorithms for solving discretized problems. The basic idea of these methods is to apply nonlinear programming techniques to the resulting finite-dimensional optimization problem (Büskens et al., 2000). When Euler integration methods are used, the recursive structure of the resulting discrete-time dynamics can be exploited in computing first-order necessary conditions. In recent years, multi-layer feedforward neural networks have been used for obtaining numerical solutions to the optimal control problem (Padhi et al., 2001), (Padhi et al., 2006). We take a hyperbolic tangent sigmoid transfer function for the hidden layer and a linear transfer function for the output layer. The paper extends the adaptive critic neural network architecture proposed by (Padhi et al., 2001) to optimal control problems with control and state constraints. The paper is organized as follows. In Section 2, the optimal control problems with control and state constraints are introduced.
We summarize necessary optimality conditions and give a short overview of basic results, including the iterative numerical methods. Section 3 discusses discretization methods for the given optimal control problem. It also discusses the form of the resulting nonlinear programming problems. Section 4 presents a short description of adaptive critic neural network synthesis for the optimal problem with state and control constraints. Section 5 describes a nitrogen transformation model. In Section 6, we apply the discussed methods to the nitrogen transformation cycle. The goal is to compare short-term and long-term strategies of assimilation of nitrogen compounds. Conclusions are presented in Section 7. OPTIMAL CONTROL PROBLEM We consider a nonlinear control problem subject to control and state constraints. Let x(t) ∈ R^n denote the state of a system and u(t) ∈ R^m the control on a given time interval [t0, tf]. The optimal control problem is to minimize

F(x, u) = g(x(tf)) + ∫_{t0}^{tf} f0(x(t), u(t)) dt    (1)
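The transcription route this abstract describes, Euler discretization of the dynamics and of the cost (1) followed by a standard NLP solver, can be sketched as follows. The scalar dynamics f, running cost f0, terminal cost g, and control bounds are invented placeholders, not the paper's nitrogen-cycle model.

```python
# Minimal Euler-transcription sketch: discretize cost (1) and the dynamics,
# then hand the finite-dimensional problem to an NLP solver.
import numpy as np
from scipy.optimize import minimize

t0, tf, N = 0.0, 1.0, 50
h = (tf - t0) / N          # uniform Euler step
x0 = 1.0                   # initial state (illustrative)

def f(x, u):               # state dynamics x' = f(x, u)
    return -x + u

def f0(x, u):              # running cost
    return x**2 + 0.1 * u**2

def g(x):                  # terminal cost
    return x**2

def F(u):                  # discretized objective: Euler recursion + rectangle rule
    x, J = x0, 0.0
    for uk in u:
        J += h * f0(x, uk)
        x = x + h * f(x, uk)
    return J + g(x)

# Simple box constraints on the control stand in for the paper's
# control and state constraints.
res = minimize(F, np.zeros(N), method="SLSQP", bounds=[(-2.0, 2.0)] * N)
```

The Euler recursion inside `F` is exactly the "recursive structure of the discrete-time dynamics" the introduction mentions: each control value influences the cost only through the states that follow it.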
- Conference Article
3
- 10.1115/detc2011-48750
- Jan 1, 2011
This paper presents the implementation of a numerical algorithm for the direct solution of optimal control and parameter identification problems. The problems may include differential equations that define the state, inequality constraints, and equality constraints at the initial and final times. The numerical method is based on transforming the infinite dimensional optimal control problem into a finite dimensional nonlinear programming problem. The transformation technique involves dividing the time interval of interest into a mesh that need not be uniform. In each subinterval of the mesh the control input is approximated using a piecewise polynomial. In particular, the control can be approximated using: (i) piecewise constant, (ii) piecewise linear, or (iii) piecewise cubic polynomials. The explicit Runge-Kutta method is used to obtain an approximate solution of the differential equations that define the state. With the approach used here the states do not appear in the nonlinear programming (NLP) problem. As a result the NLP problem is very compact relative to other numerical methods used to solve nonlinear optimal control problems. The NLP problem is solved using a sequential quadratic programming (SQP) technique. The SQP method is based on minimizing the L1 exact penalty function. Each major step of the SQP method solves a strictly convex quadratic programming problem. The paper also describes a simplified interface to the computer programs that implement the method. An example is presented to demonstrate the algorithm.
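The control-parameterization scheme this abstract outlines can be sketched minimally: piecewise-constant controls on a mesh, classical RK4 to propagate the state, and an NLP over the control values only, so the states never appear as decision variables. The double-integrator dynamics and rest-to-rest endpoint condition are illustrative assumptions, and SciPy's SQP solver stands in for the paper's L1-exact-penalty SQP implementation.

```python
# Hedged sketch of direct control parameterization with state elimination.
import numpy as np
from scipy.optimize import minimize

mesh = np.linspace(0.0, 1.0, 21)   # uniform here; the mesh need not be
x_init = np.array([0.0, 0.0])      # start at rest at the origin

def f(x, u):                       # double integrator: pos' = vel, vel' = u
    return np.array([x[1], u])

def propagate(u):                  # RK4 over each mesh subinterval,
    x = x_init.copy()              # piecewise-constant control per interval
    for k, uk in enumerate(u):
        h = mesh[k + 1] - mesh[k]
        k1 = f(x, uk)
        k2 = f(x + 0.5 * h * k1, uk)
        k3 = f(x + 0.5 * h * k2, uk)
        k4 = f(x + h * k3, uk)
        x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

def J(u):                          # minimize control effort
    return float(np.sum(u**2))

# Only the control values are NLP variables; the endpoint equality
# constraint is enforced through the RK4 propagation.
n = len(mesh) - 1
res = minimize(J, np.ones(n), method="SLSQP",
               constraints={"type": "eq",
                            "fun": lambda u: propagate(u) - np.array([1.0, 0.0])})
```

Because `propagate` absorbs the dynamics, the NLP has only `n` variables and two endpoint constraints, which is the compactness the abstract emphasizes relative to transcriptions that also carry the states.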
- Research Article
15
- 10.1016/s0167-6911(03)00120-8
- Apr 2, 2003
- Systems & Control Letters
A variational inequality for measurement feedback almost-dissipative control
- Research Article
- 10.1007/s11982-008-1007-8
- Apr 5, 2008
- Russian Mathematics
This work is dedicated to necessary and sufficient conditions for minimizing sequences in problems with inexact initial data. These conditions are closely tied to the classical Pontryagin maximum principle. The paper also covers regularizing properties of these sequences and of the maximum principle itself, treating a minimizing sequence (rather than the classical optimal control) as the central theoretical notion. It is well known that Pontryagin's maximum principle [1] resulted from real needs, first of all applied studies ([2], p. 7). However, in most papers dealing with the theory of necessary conditions in optimal control, the initial data of the problems under consideration are assumed to be known exactly. Papers on optimal control problems in which the study of necessary and sufficient conditions takes into account, in one way or another, the possibility of inexactly defined input data are relatively few [3, 4]. At the same time, it seems natural to develop the theory of necessary and sufficient conditions so as to tolerate inexactly defined initial data; compare the development of solution methods for optimization and optimal control problems [5] and the theory of ill-posed problems [6]. In favor of this observation, let us adduce the following arguments. First, in numerous applications one inevitably encounters the necessity of using inexact initial data. Second, in the analysis of solution algorithms for optimization and optimal control problems, necessary and sufficient optimality conditions play the most important role. Third, generally speaking, optimal control problems represent a class of mathematical problems in which instability with respect to perturbations of the initial data is to be anticipated.
- Book Chapter
- 10.1007/978-1-4613-0007-6_5
- Jan 1, 2001
The main purpose of the book is the development of numerical methods for the solution of control or optimal control problems, or for the computation of functionals of the stochastic processes of interest, of the type described in Chapters 3, 7–9, and 12–15. It was shown in Chapter 3 that the cost or optimal cost functionals can be the (at least formal) solutions to certain nonlinear partial differential equations. It is tempting to try to solve for or approximate the various cost functions and optimal controls by dealing directly with the appropriate PDEs and numerically approximating their solutions. A basic impediment is that the PDEs often have only a formal meaning, and standard methods of numerical analysis might not be usable to prove convergence of the numerical methods. For many problems of interest, one cannot even write down a partial differential equation. The Bellman equation might be replaced by a system of "variational inequalities," or the proper form might not be known. Optimal stochastic control problems occur in an enormous variety of forms. As time goes on, we learn more about the analytical methods which can be used to describe and analyze the various optimal cost functions, but even then it seems that many important classes of problems are still not covered and new models appear which need even further analysis. The optimal stochastic control or stochastic modeling problem usually starts with a physical model, which guides the formulation of the precise stochastic process model to be used in the analysis. One would like numerical methods which are able to conveniently exploit the intuition contained in the physical model.
- Book Chapter
10
- 10.1007/978-3-030-12232-4_12
- Jan 1, 2019
This chapter is concerned with optimal control problems of dynamical systems described by partial differential equations (PDEs). Firstly, using the Dubovitskii-Milyutin approach, we obtain the necessary condition of optimality, i.e., the Pontryagin maximum principle for optimal control problem of an age-structured population dynamics for spread of universally fatal diseases. Secondly, for an optimal birth control problem of a McKendrick type age-structured population dynamics, we establish the optimal feedback control laws by the dynamic programming viscosity solution (DPVS) approach. Finally, for a well-adapted upwind finite-difference numerical scheme for the HJB equation arising in optimal control, we prove its convergence and show that the solution from this finite-difference scheme converges to the value function of the associated optimal control problem.
- Research Article
2
- 10.2307/2153386
- Oct 1, 1995
- Mathematics of Computation
1 A Survey on Computational Optimal Control
- Issues in the Direct Transcription of Optimal Control Problems to Sparse Nonlinear Programs
- Optimization in Control of Robots
- Large-scale SQP Methods and their Application in Trajectory Optimization
- Solving Optimal Control and Pursuit-Evasion Game Problems of High Complexity
2 Theoretical Aspects of Optimal Control and Nonlinear Programming
- Continuation Methods In Boundary Value Problems
- Second Order Optimality Conditions for Singular Extremals
- Synthesis of Adaptive Optimal Controls for Linear Dynamic Systems
- Control Applications of Reduced SQP Methods
- Time Optimal Control of Mechanical Systems
3 Algorithms for Optimal Control Calculations
- Second Order Algorithm for Time Optimal Control of a Linear System
- An SQP-type Solution Method for Constrained Discrete-Time Optimal Control Problems
- Numerical Methods for Solving Differential Games, Prospective Applications to Technical Problems
- Construction of the Optimal Feedback Controller for Constrained Optimal Control Problems with Unknown Disturbances
- Repetitive Optimization for Predictive Control of Dynamic Systems under Uncertainty
- Optimal Control of Multistage Systems Described by High-Index Differential-Algebraic Equations
- A New Class of a High Order Interior Point Method for the Solution of Convex Semiinfinite Optimization Problems
- A Structured Interior Point SQP Method for Nonlinear Optimal Control Problems
4 Software for Optimal Control Calculations
- Automated Approach for Optimizing Dynamic Systems
- ANDECS: A Computation Environment for Control Applications of Optimization
- Application of Automatic Differentiation to Optimal Control Problems
- OCCAL: A mixed symbolic-numeric Optimal Control CALculator
5 Applications of Optimal Control
- A Robotic Satellite with Simplified Design
- Nonlinear Control under Constraints of a Biological System
- An Object-Oriented Approach to Optimally Describe and Specify a SCADA System Applied to a Power Network
- Near-Optimal Flight Trajectories Generated by Neural Networks
- Performance of a Feedback Method with Respect to Changes in the Air-Density during the Ascent of a Two-Stage-To-Orbit Vehicle
- Linear Optimal Control for Reentry Flight
- Steady-State Modelling of Turbine Engine with Controllers
- Shortest Paths for Satellite Mounted Robot Manipulators
- Optimal Control of the Industrial Robot Manutec r3