A numerical method for an optimal control problem with minimum sensitivity on coefficient variation
- Research Article
- 10.2514/1.g007311
- May 9, 2023
- Journal of Guidance, Control, and Dynamics
State Transition Tensors for Continuous-Thrust Control of Three-Body Relative Motion
- Research Article
69
- 10.1137/s0363012901385769
- Jan 1, 2002
- SIAM Journal on Control and Optimization
This work is concerned with the maximum principles for optimal control problems governed by 3-dimensional Navier--Stokes equations. Some types of state constraints (time variables) are considered.
- Research Article
11
- 10.1007/s10898-008-9319-5
- Jul 16, 2008
- Journal of Global Optimization
In this paper, we present a new approach to solve a class of optimal discrete-valued control problems. This type of problem is first transformed into an equivalent two-level optimization problem involving a combination of a discrete optimization problem and a standard optimal control problem. The standard optimal control problem can be solved by existing optimal control software packages such as MISER 3.2. For the discrete optimization problem, a discrete filled function method is developed to solve it. A numerical example is solved to illustrate the efficiency of our method.
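The two-level structure described above can be illustrated with a minimal sketch: the upper level searches over discrete control values (here by exhaustive enumeration, standing in for the paper's discrete filled function method), while the lower level evaluates each candidate by simulating the dynamics. The toy system x' = -x + u, the value set {0, 1, 2}, and all names are assumptions for illustration, not from the paper.

```python
# Sketch of a discrete-valued control problem: pick each u_k from a finite
# set so as to minimize a tracking cost along the trajectory of x' = -x + u.
import itertools

values, N, dt = (0.0, 1.0, 2.0), 4, 0.5   # admissible levels, steps, step size
x0, x_ref = 0.0, 1.5

def rollout_cost(u_seq):
    """Lower level: simulate x' = -x + u (explicit Euler) and accumulate cost."""
    x, J = x0, 0.0
    for u in u_seq:
        x = x + dt * (-x + u)
        J += dt * ((x - x_ref) ** 2 + 0.01 * u ** 2)
    return J

# Upper level: discrete optimization over all value sequences (plain
# enumeration here; the paper develops a discrete filled function method).
best = min(itertools.product(values, repeat=N), key=rollout_cost)
```

For longer horizons enumeration is infeasible, which is exactly why the paper replaces it with a filled-function search over the discrete variables.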
- Book Chapter
- 10.1007/978-1-4471-4757-2_4
- Jan 1, 2013
In this chapter, optimal state feedback control problems of nonlinear systems with time delays are studied. In general, optimal control of a time-delay system is an infinite-dimensional control problem, which is very difficult to solve, and no fully satisfactory method for it is presently available. This chapter investigates optimal state feedback control of nonlinear systems with time delays in both states and controls. By introducing a delay matrix function, an explicit expression for the optimal control function is obtained. Next, for nonlinear time-delay systems with saturating actuators, the optimal control problem is studied using a nonquadratic functional, and two optimization processes are developed for searching for the optimal solutions. These two results address the infinite-horizon optimal control problem; to the best of our knowledge, there are no results on finite-horizon optimal control of nonlinear time-delay systems. Hence, in the last part of this chapter, a novel optimal control strategy is developed to solve the finite-horizon optimal control problem for a class of time-delay systems.
- Research Article
- 10.2478/prolas-2023-0030
- Dec 1, 2023
- Proceedings of the Latvian Academy of Sciences. Section B. Natural, Exact, and Applied Sciences.
We consider the approximate solution of the minimum-energy control problem for an object described by the heat equation, where the process is governed by a linear parabolic equation and the system is controlled by impulsive external influences. The optimal control problem is to find a control parameter, belonging to the class of admissible controls, that provides the desired temperature distribution in finite time with minimal energy consumption (energy consumption is described by a quadratic functional). Previous works on optimal impulse control problems have mostly used Pontryagin's maximum principle. From a practical point of view, however, this approach does not lead to satisfactory results, because the corresponding boundary value problems then have no solution in the traditional class of absolutely continuous trajectories. In this work, we propose a method based on moment relations. We seek the approximate solution of the corresponding boundary value problem in the form of a finite Fourier sum and state the optimal control problem in a finite-dimensional phase space. As a result, we obtain an optimal impulse control problem in a finite-dimensional function space. Taking into account the condition imposed at the finite time, we reduce the obtained problem to the L-problem of moments. The search for the control parameter is thus reduced to solving a system of Fredholm integral equations of the first kind, with the norm of the sought solution not exceeding a given number. By Levi's theorem, every element of a Hilbert space can be represented as the sum of elements of two orthogonal subspaces; this makes it possible to find the control parameters in analytical form. We also establish the convergence of the chosen approximation.
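The reduction described above, from the parabolic PDE to a finite-dimensional problem via a finite Fourier sum, can be illustrated on the uncontrolled heat equation u_t = u_xx on (0, π) with zero boundary values: projecting onto the first M sine modes turns the PDE state into a vector of Fourier coefficients a_n(t) satisfying a_n' = -n² a_n (control terms, which lead to the moment relations, are omitted). The truncation order M = 5 and the initial coefficients are assumptions for the sketch.

```python
# Finite Fourier reduction of the heat equation u_t = u_xx on (0, pi) with
# u(0, t) = u(pi, t) = 0: each sine mode decays independently.
import numpy as np

M, T = 5, 0.1
n = np.arange(1, M + 1)
a0 = 1.0 / n                      # initial Fourier coefficients (illustrative)
a_T = a0 * np.exp(-n**2 * T)      # mode n solves a_n' = -n^2 a_n

def u(x, a):
    """Reconstruct u(x) from the truncated sine series."""
    return sum(a_k * np.sin(k * x) for k, a_k in zip(n, a))
```

In the controlled problem, a forcing term enters each mode equation, and prescribing the terminal coefficients a_n(T) yields the moment relations on the control.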
- Research Article
391
- 10.1115/1.1483351
- Jul 1, 2002
- Applied Mechanics Reviews
Practical Methods for Optimal Control using Nonlinear Programming
- Research Article
29
- 10.1007/s12273-012-0061-z
- Mar 3, 2012
- Building Simulation
The current study investigates the optimal operation of an air-to-water heat pump system. To this end, the control problem is formulated as a classic optimal control or dynamic optimization problem. As conflicting objectives arise, namely, minimizing energy cost while maximizing thermal comfort, the optimization problem is tackled from a multi-objective optimization perspective. The adopted system model incorporates the building dynamics and the heat pump characteristics. Because of the state-dependency of the coefficient of performance (COP), the optimal control problem (OCP) is nonlinear. If the COP is approximated by a constant value, the OCP becomes convex, which is easier to solve. The current study investigates how this approximation affects the control performance. The optimal control problems are solved using the freely available Automatic Control And Dynamic Optimization toolkit ACADO. It is found that the lower the weighting factor for thermal discomfort is, the higher the discrepancy is between the nonlinear and convex OCP formulations. For a weighting factor resulting in a quadratic mean difference of 0.5°C between the zone temperature and its reference temperature, the difference in electricity cost amounts to 4% for a first scenario with fixed electricity price, and up to 6% for a second scenario with a day and night variation in electricity price.
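The comparison described above, a nonlinear OCP with state-dependent COP versus its convex approximation with constant COP, can be sketched with a single-zone model and direct single shooting. The one-state building model, the linear COP law, all parameter values, and the use of scipy in place of the ACADO toolkit are assumptions for illustration, not the paper's setup.

```python
# Sketch: solve the same discrete-time heat-pump OCP twice, once with a
# state-dependent COP (nonlinear OCP) and once with a constant COP
# (convex approximation), then compare the resulting costs.
import numpy as np
from scipy.optimize import minimize

N, dt = 24, 3600.0           # 24 hourly steps
C = 1e7                      # zone thermal capacity [J/K] (illustrative)
UA = 200.0                   # heat-loss coefficient [W/K]
T_out, T_ref = 278.0, 294.0  # outdoor / reference temperature [K]

def cop(T):                  # state-dependent COP (illustrative linear law)
    return 3.0 - 0.05 * (T - T_ref)

def simulate(P, cop_fun):
    T = np.empty(N + 1); T[0] = 290.0
    for k in range(N):
        q = cop_fun(T[k]) * P[k]                       # delivered heat [W]
        T[k + 1] = T[k] + dt * (q - UA * (T[k] - T_out)) / C
    return T

def cost(P, cop_fun):        # weighted energy use + thermal discomfort
    T = simulate(P, cop_fun)
    return 1e-7 * dt * np.sum(P) + np.sum((T[1:] - T_ref) ** 2)

bounds = [(0.0, 3000.0)] * N
P0 = np.full(N, 1000.0)
sol_nl = minimize(lambda P: cost(P, cop), P0, bounds=bounds)
sol_cv = minimize(lambda P: cost(P, lambda T: 3.0), P0, bounds=bounds)
```

Evaluating the convex solution `sol_cv.x` under the state-dependent COP model mimics the paper's measurement of the discrepancy between the two formulations.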
- Research Article
- 10.25972/opus-18217
- Jan 1, 2019
A sequential quadratic Hamiltonian scheme for solving optimal control problems with non-smooth cost functionals
- Research Article
23
- 10.3934/jimo.2017042
- Apr 1, 2017
- Journal of Industrial & Management Optimization
In this paper, we consider a class of optimal switching control problems with multiple time-delays and a cost on changing control, subject to terminal state constraints. A computational method involving three stages is developed to solve this class of optimal control problems. First, by parameterizing the control function with piecewise-constant functions, the optimal switching control problem is approximated by a sequence of finite-dimensional optimization problems, where the original switching times, the control heights, and the control switching times are the decision variables. Second, by introducing new variables, the total variation of the control variables is transformed into an equivalent smooth function. Third, we convert the constrained optimization problem into one with only box constraints by an exact penalty function method. The gradients of the cost functional are then derived, which can be combined with any gradient-based optimization method to determine the optimal solution. Finally, a numerical example is given to illustrate the effectiveness of the proposed algorithm.
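The second stage described above, smoothing the nonsmooth total variation of a piecewise-constant control, can be sketched directly: introduce auxiliary variables v_k with v_k ≥ u_{k+1} − u_k and v_k ≥ −(u_{k+1} − u_k), then replace Σ|u_{k+1} − u_k| by Σ v_k. The toy tracking objective, the weight 0.1, and the use of scipy's SLSQP are assumptions for illustration.

```python
# Smooth reformulation of a total-variation penalty on a piecewise-constant
# control u: minimize sum((u - target)^2) + 0.1 * sum(v) subject to
# v_k >= +-(u_{k+1} - u_k), which reproduces penalizing sum |u_{k+1} - u_k|.
import numpy as np
from scipy.optimize import minimize

N = 8
target = np.sin(np.linspace(0.0, np.pi, N))   # reference control to track

def objective(z):
    u, v = z[:N], z[N:]
    return np.sum((u - target) ** 2) + 0.1 * np.sum(v)

def ineq(z):
    """All components must be >= 0: v_k -+ (u_{k+1} - u_k) >= 0."""
    u, v = z[:N], z[N:]
    du = np.diff(u)
    return np.concatenate([v - du, v + du])

sol = minimize(objective, np.zeros(2 * N - 1), method="SLSQP",
               constraints=[{"type": "ineq", "fun": ineq}])
u_opt, v_opt = sol.x[:N], sol.x[N:]
```

At the optimum each v_k sits on its constraint, i.e. v_k equals |u_{k+1} − u_k|, so the smooth problem reproduces the total-variation cost while remaining amenable to gradient-based solvers.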
- Research Article
3
- 10.3934/dcdsb.2019092
- Jan 1, 2019
- Discrete & Continuous Dynamical Systems - B
We discuss and compare numerical methods to solve singular optimal control problems by the direct method. Our discussion is illustrated by an Autonomous Underwater Vehicle (AUV) problem with state constraints. For this problem, we test four different approaches to solving it numerically via the direct method. After discretizing the optimal control problem, we solve the resulting optimization problem with (ⅰ) A Mathematical Programming Language ($ \text{AMPL} $), (ⅱ) the Imperial College London Optimal Control Software ($ \text{ICLOCS} $), (ⅲ) the Gauss Pseudospectral Optimization Software ($ \text{GPOPS} $), as well as (ⅳ) a new algorithm based on mixed-binary non-linear programming reported in [7]. This algorithm consists of converting the optimal control problem to a Mixed Binary Optimal Control (MBOC) problem, which is then transcribed to a mixed-binary non-linear programming ($ \text{MBNLP} $) problem using the Legendre-Radau pseudospectral method. Our case study shows that, in contrast with the first three approaches we test (all relying on $ \text{IPOPT} $ or other numerical optimization software packages such as $ \text{KNITRO} $), the $ \text{MBOC} $ approach detects the structure of the AUV's problem without a priori information on the optimal control and computes the switching times accurately.
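The direct method used throughout the comparison above, discretize the OCP and hand the resulting NLP to an optimizer, can be sketched on a textbook problem: minimize ∫₀¹ u² dt subject to ẋ = u, x(0) = 0, x(1) = 1, whose optimal control is u ≡ 1 with cost 1. The Euler/rectangle transcription and the use of scipy's SLSQP in place of IPOPT or KNITRO are assumptions for illustration.

```python
# Direct transcription: discretize min \int_0^1 u^2 dt, x' = u, x(0) = 0,
# x(1) = 1 on a uniform grid and solve the finite-dimensional NLP.
import numpy as np
from scipy.optimize import minimize

N = 20
h = 1.0 / N

def cost(u):
    return h * np.sum(u ** 2)      # rectangle rule for the integral

def terminal(u):
    return h * np.sum(u) - 1.0     # Euler: x(1) = h * sum(u_k), must equal 1

sol = minimize(cost, np.zeros(N), method="SLSQP",
               constraints=[{"type": "eq", "fun": terminal}])
```

The discretized optimum recovers u_k = 1 on every interval, matching the analytic solution; for singular problems the same transcription applies, but the resulting NLP is harder for the solver, which is what the comparison in the paper probes.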
- Research Article
- 10.1007/s11982-008-1007-8
- Apr 5, 2008
- Russian Mathematics
This work is devoted to necessary and sufficient conditions for minimizing sequences in problems with inexact initial data. These conditions are closely connected with the classical Pontryagin maximum principle. The paper also covers the regularizing properties of such sequences and of the maximum principle itself, taking a minimizing sequence (rather than the classical optimal control) as the central theoretical notion. It is well known that Pontryagin's maximum principle [1] grew out of practical needs, above all applied studies ([2], p. 7). Nevertheless, most papers on the theory of necessary conditions in optimal control assume that the initial data of the problem are known exactly; works on necessary and sufficient conditions that account, in one way or another, for inexactly specified input data are relatively few [3, 4]. At the same time, it seems natural to develop the theory of necessary and sufficient conditions so as to tolerate inexact initial data; compare the development of solution methods for optimization and optimal control problems [5] and the theory of ill-posed problems [6]. Three arguments support this view. First, in numerous applications one inevitably has to work with inexact initial data. Second, in the analysis of solution algorithms for optimization and optimal control problems, necessary and sufficient optimality conditions play the central role. Third, optimal control problems form a class of mathematical problems in which instability with respect to perturbations of the initial data is to be expected.
- Research Article
2
- 10.2307/2153386
- Oct 1, 1995
- Mathematics of Computation
Contents:
- 1 A Survey on Computational Optimal Control: Issues in the Direct Transcription of Optimal Control Problems to Sparse Nonlinear Programs; Optimization in Control of Robots; Large-scale SQP Methods and their Application in Trajectory Optimization; Solving Optimal Control and Pursuit-Evasion Game Problems of High Complexity.
- 2 Theoretical Aspects of Optimal Control and Nonlinear Programming: Continuation Methods in Boundary Value Problems; Second Order Optimality Conditions for Singular Extremals; Synthesis of Adaptive Optimal Controls for Linear Dynamic Systems; Control Applications of Reduced SQP Methods; Time Optimal Control of Mechanical Systems.
- 3 Algorithms for Optimal Control Calculations: Second Order Algorithm for Time Optimal Control of a Linear System; An SQP-type Solution Method for Constrained Discrete-Time Optimal Control Problems; Numerical Methods for Solving Differential Games, Prospective Applications to Technical Problems; Construction of the Optimal Feedback Controller for Constrained Optimal Control Problems with Unknown Disturbances; Repetitive Optimization for Predictive Control of Dynamic Systems under Uncertainty; Optimal Control of Multistage Systems Described by High-Index Differential-Algebraic Equations; A New Class of a High Order Interior Point Method for the Solution of Convex Semiinfinite Optimization Problems; A Structured Interior Point SQP Method for Nonlinear Optimal Control Problems.
- 4 Software for Optimal Control Calculations: Automated Approach for Optimizing Dynamic Systems; ANDECS: A Computation Environment for Control Applications of Optimization; Application of Automatic Differentiation to Optimal Control Problems; OCCAL: A mixed symbolic-numeric Optimal Control CALculator.
- 5 Applications of Optimal Control: A Robotic Satellite with Simplified Design; Nonlinear Control under Constraints of a Biological System; An Object-Oriented Approach to Optimally Describe and Specify a SCADA System Applied to a Power Network; Near-Optimal Flight Trajectories Generated by Neural Networks; Performance of a Feedback Method with Respect to Changes in the Air-Density during the Ascent of a Two-Stage-To-Orbit Vehicle; Linear Optimal Control for Reentry Flight; Steady-State Modelling of Turbine Engine with Controllers; Shortest Paths for Satellite Mounted Robot Manipulators; Optimal Control of the Industrial Robot Manutec r3.
- Research Article
3
- 10.1002/oca.2974
- Jan 17, 2023
- Optimal Control Applications and Methods
Special issue on “Optimal design and operation of energy systems”
- Conference Article
- 10.23919/chicc.2018.8484142
- Jul 1, 2018
This paper studies equivalences among optimal time control problems governed by impulsive ordinary differential equations. Because an impulse acts instantaneously, integral-type optimal control problems governed by an impulsive system do not differ fundamentally from those governed by the corresponding ordinary differential equation: the instantaneous effect is averaged out, as existing studies confirm. For optimal time control problems, however, the impulses have a definite impact, since the optimal time is an instantaneous value. One purpose of studying optimal time control of impulsive differential systems is therefore to reveal, from the viewpoint of time-optimal control, how the impulses alter the properties of the underlying differential system. We propose a method for defining the optimal time and further study the equivalence relations among optimal time control problems, optimal norm control problems, and optimal control problems, establishing along the way the related results stated in the paper.
- Research Article
33
- 10.1016/j.nonrwa.2012.10.017
- Nov 7, 2012
- Nonlinear Analysis: Real World Applications
A class of optimal state-delay control problems