A topological derivative-based algorithm to solve optimal control problems with $L^0(\Omega)$ control cost
In this paper, we consider optimization problems with $L^0$-cost of the controls, taking the support of the control as an independent optimization variable. Topological derivatives of the corresponding value function with respect to variations of the support are derived. These topological derivatives are used in a novel gradient descent algorithm with Armijo line search. Under suitable assumptions, the algorithm produces a minimizing sequence.
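The descent method described above is driven by topological derivatives, but the Armijo backtracking step it relies on can be sketched generically. The following is a minimal illustration, not the paper's algorithm: the objective, step parameters, and function names are our own.

```python
import numpy as np

# Minimal sketch: gradient descent with Armijo (backtracking) line search.
# The objective and parameters are illustrative, not the paper's L^0 problem.

def armijo_step(f, grad, x, sigma=1e-4, beta=0.5, t0=1.0, max_backtracks=50):
    """Shrink the step t until f(x - t*g) <= f(x) - sigma*t*||g||^2 (Armijo condition)."""
    g = grad(x)
    fx = f(x)
    t = t0
    for _ in range(max_backtracks):
        if f(x - t * g) <= fx - sigma * t * np.dot(g, g):
            return x - t * g
        t *= beta
    return x  # no acceptable step found; keep the current iterate

def gradient_descent(f, grad, x0, iters=100):
    x = x0
    for _ in range(iters):
        x = armijo_step(f, grad, x)
    return x

# Example: a strongly convex quadratic, minimized at the origin.
x_min = gradient_descent(lambda x: 0.5 * x @ x, lambda x: x, np.array([3.0, -4.0]))
```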
- Research Article
198
- 10.1021/ie00095a010
- Nov 1, 1989
- Industrial & Engineering Chemistry Research
Accurate solution of differential-algebraic optimization problems
- Research Article
3
- 10.1002/oca.2974
- Jan 17, 2023
- Optimal Control Applications and Methods
Special issue on “Optimal design and operation of energy systems”
- Research Article
2
- 10.7498/aps.66.084501
- Jan 1, 2017
- Acta Physica Sinica
In general, optimal control problems must be solved numerically rather than analytically, owing to their nonlinearities. The direct method, one such numerical approach, transforms the optimal control problem into a finite-dimensional nonlinear optimization problem by directly discretizing the objective functional and the forced dynamical equations. However, in the direct method the classical discretizations of the forced equations can degrade the accuracy of the resulting optimization problem and hence of the discrete optimal control. In view of this fact, more accurate and efficient numerical schemes should be employed to approximate the forced dynamical equations. As has been verified, discrete variational difference schemes for forced Birkhoffian systems exhibit excellent numerical behavior in terms of high accuracy, long-time stability, and precise energy prediction. Thus, once the forced dynamical equations in an optimal control problem are represented as forced Birkhoffian equations, they can be discretized by these discrete variational difference schemes. Compared with discretizing the forced dynamical equations by traditional difference schemes, this approach yields faithful nonlinear optimization problems and consequently accurate and efficient discrete optimal controls. We then apply the proposed method to the rendezvous and docking problem of spacecraft. First, we make a reasonable simplification: the rendezvous and docking process of two spacecraft is reduced to optimally transferring the chaser spacecraft, under a continuously acting force, from one circular orbit around the Earth to another while minimizing the control effort.
Second, the dynamical equations of the chaser spacecraft are written in the form of a forced Birkhoffian equation, so the discrete variational difference scheme for forced Birkhoffian systems can be employed to discretize the chaser spacecraft's equations of motion. After further discretizing the control effort and the boundary conditions, the resulting nonlinear optimization problem is obtained. Finally, the optimization problem is solved directly by nonlinear programming, yielding the discrete optimal control. Although it is only an approximation of the continuous control, the obtained optimal control is sufficient to realize the rendezvous and docking process. Simulation results fully verify the efficiency of the proposed method for numerically solving optimal control problems, even when the time step is chosen very large to limit the dimension of the optimization problem.
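The direct method outlined above (discretize, then solve a finite-dimensional NLP) can be illustrated on a toy problem. This is a minimal sketch with a scalar integrator and `scipy.optimize`, not the Birkhoffian scheme of the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Direct transcription (our toy example): minimize ∫_0^1 u(t)^2 dt
# subject to x'(t) = u(t), x(0) = 0, x(1) = 1.
# Forward Euler turns this into a finite-dimensional NLP in u_0, ..., u_{N-1}.
N = 20
h = 1.0 / N

def cost(u):
    return h * np.sum(u**2)           # discretized control effort

def terminal_constraint(u):
    return h * np.sum(u) - 1.0        # x(1) = h * sum(u_k) must equal 1

res = minimize(cost, np.zeros(N), method="SLSQP",
               constraints={"type": "eq", "fun": terminal_constraint})
# The continuous optimum is u ≡ 1 with cost 1; the discretization recovers it.
```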
- Research Article
391
- 10.1115/1.1483351
- Jul 1, 2002
- Applied Mechanics Reviews
Practical Methods for Optimal Control using Nonlinear Programming
- Conference Article
3
- 10.7148/2009-0352-0358
- Jun 9, 2009
A neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem, which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated on the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining optimal controls under control and state constraints. INTRODUCTION Optimal control of nonlinear systems is one of the most active subjects in control theory. An analytical solution rarely exists, although several numerical computation approaches have been proposed for solving an optimal control problem (for example, see (Polak, 1997), (Kirk, 1998)). Most of the literature that deals with numerical methods for the solution of general optimal control problems focuses on algorithms for solving the discretized problems. The basic idea of these methods is to apply nonlinear programming techniques to the resulting finite-dimensional optimization problem (Buskens et al., 2000). When Euler integration methods are used, the recursive structure of the resulting discrete-time dynamics can be exploited in computing first-order necessary conditions. In recent years, multi-layer feedforward neural networks have been used for obtaining numerical solutions to the optimal control problem (Padhi et al., 2001), (Padhi et al., 2006). We have taken a hyperbolic tangent sigmoid transfer function for the hidden layer and a linear transfer function for the output layer. The paper extends the adaptive critic neural network architecture proposed by (Padhi et al., 2001) to optimal control problems with control and state constraints. The paper is organized as follows. In Section 2, the optimal control problems with control and state constraints are introduced.
We summarize necessary optimality conditions and give a short overview of basic results, including iterative numerical methods. Section 3 discusses discretization methods for the given optimal control problem and the form of the resulting nonlinear programming problems. Section 4 presents a short description of adaptive critic neural network synthesis for optimal control problems with state and control constraints. Section 5 describes a nitrogen transformation model. In Section 6, we apply the discussed methods to the nitrogen transformation cycle; the goal is to compare short-term and long-term strategies of assimilation of nitrogen compounds. Conclusions are presented in Section 7. OPTIMAL CONTROL PROBLEM We consider a nonlinear control problem subject to control and state constraints. Let x(t) ∈ R denote the state of a system and u(t) ∈ R the control on a given time interval [t0, tf]. The optimal control problem is to minimize $F(x, u) = g(x(t_f)) + \int_{t_0}^{t_f} f_0(x(t), u(t))\,dt$ (1)
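As a minimal illustration of the Bolza functional (1), the following sketch evaluates the discretized cost for hypothetical dynamics and cost terms of our own choosing, using forward Euler for the state and a rectangle rule for the integral:

```python
import numpy as np

# A minimal sketch (hypothetical system and costs): evaluate the Bolza functional
# F(x, u) = g(x(tf)) + ∫ f0(x, u) dt for x' = f(x, u) via forward Euler.
t0, tf, N = 0.0, 1.0, 100
h = (tf - t0) / N

f  = lambda x, u: -x + u          # system dynamics x' = f(x, u)  (our choice)
f0 = lambda x, u: x**2 + u**2     # running cost                  (our choice)
g  = lambda x: x**2               # terminal cost                 (our choice)

def discrete_bolza_cost(u, x0=1.0):
    """Forward-Euler state propagation plus rectangle-rule quadrature of the cost."""
    x, J = x0, 0.0
    for k in range(N):
        J += h * f0(x, u[k])
        x += h * f(x, u[k])
    return J + g(x)

J_zero = discrete_bolza_cost(np.zeros(N))   # cost of the uncontrolled trajectory
```

With u ≡ 0 the exact trajectory is x(t) = e^{-t}, so the discrete cost should approach (1 - e^{-2})/2 + e^{-2} as N grows.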
- Research Article
77
- 10.1007/s10957-011-9904-5
- Sep 1, 2011
- Journal of Optimization Theory and Applications
In this paper, we consider a class of optimal control problems subject to equality terminal state constraints and continuous state and control inequality constraints. By using the control parametrization technique and a time scaling transformation, the constrained optimal control problem is approximated by a sequence of optimal parameter selection problems with equality terminal state constraints and continuous state inequality constraints. Each of these constrained optimal parameter selection problems can be regarded as an optimization problem subject to equality constraints and continuous inequality constraints. On this basis, an exact penalty function method is used to devise a computational method to solve these optimization problems with equality constraints and continuous inequality constraints. The main idea is to augment the objective function with an exact penalty function constructed from the equality constraints and continuous inequality constraints, forming a new objective. This gives rise to a sequence of unconstrained optimization problems. It is shown that, for a sufficiently large penalty parameter value, any local minimizer of the unconstrained optimization problem is a local minimizer of the optimization problem with equality constraints and continuous inequality constraints. The convergence properties of the optimal parameter selection problems with equality constraints and continuous inequality constraints to the original optimal control problem are also discussed. For illustration, three examples are solved, showing the effectiveness and applicability of the proposed approach.
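The exact-penalty idea above can be illustrated on a toy equality-constrained problem of our own choosing; a crude grid search stands in for the unconstrained solver, since the l1 penalty is nonsmooth:

```python
import numpy as np

# Exact l1 penalty (sketch): replace  min f(x) s.t. c(x) = 0
# by the unconstrained  min f(x) + rho * |c(x)|.
# Toy problem of our own: f(x) = x1^2 + x2^2, c(x) = x1 + x2 - 1, minimizer (0.5, 0.5).
# Exactness: once rho exceeds the multiplier magnitude (here |lambda*| = 1),
# the unconstrained minimizer coincides with the constrained one.
rho = 10.0
xs = np.linspace(-1.0, 2.0, 601)                 # grid step 0.005, contains 0.5 exactly
X1, X2 = np.meshgrid(xs, xs)
penalized = X1**2 + X2**2 + rho * np.abs(X1 + X2 - 1.0)
i, j = np.unravel_index(np.argmin(penalized), penalized.shape)
x_star = np.array([X1[i, j], X2[i, j]])          # grid minimizer of the penalized problem
```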
- Research Article
- 10.2514/1.g007311
- May 9, 2023
- Journal of Guidance, Control, and Dynamics
State Transition Tensors for Continuous-Thrust Control of Three-Body Relative Motion
- Research Article
- 10.7916/d82v2ph1
- Jan 1, 2012
Sequential decision making under uncertainty is at the heart of a wide variety of practical problems. These problems can be cast as dynamic programs, and the optimal value function can be computed by solving Bellman's equation. However, this approach is limited in its applicability: as the number of state variables increases, the state space size grows exponentially, a phenomenon known as the curse of dimensionality, rendering the standard dynamic programming approach impractical. An effective way of addressing the curse of dimensionality is through parameterized value function approximation. Such an approximation is determined by a relatively small number of parameters and serves as an estimate of the optimal value function. For this approach to be effective, we need Approximate Dynamic Programming (ADP) algorithms that can deliver 'good' approximations to the optimal value function; such an approximation can then be used to derive policies for effective decision-making. From a practical standpoint, in order to assess the effectiveness of such an approximation, there is also a need for methods that give a sense of the suboptimality of a policy. This thesis is an attempt to address both issues. First, we introduce a new ADP algorithm, based on linear programming, to compute value function approximations. LP approaches to approximate DP have typically relied on a natural 'projection' of a well-studied linear program for exact dynamic programming. Such programs restrict attention to approximations that are lower bounds to the optimal cost-to-go function. Our program -- the 'smoothed approximate linear program' -- is distinct from such approaches and relaxes the restriction to lower-bounding approximations in an appropriate fashion while remaining computationally tractable. The resulting program enjoys strong approximation guarantees and is shown to perform well in numerical experiments with the game of Tetris and a queueing network control problem.
Next, we consider optimal stopping problems, with applications to the pricing of high-dimensional American options. We introduce the pathwise optimization (PO) method: a new convex optimization procedure to produce upper and lower bounds on the optimal value (the 'price') of high-dimensional optimal stopping problems. The PO method builds on a dual characterization of optimal stopping problems as optimization problems over the space of martingales, which we dub the martingale duality approach. We demonstrate via numerical experiments that the PO method produces upper and lower bounds (the latter via suboptimal exercise policies) of a quality comparable with state-of-the-art approaches. Further, we develop an approximation theory relevant to martingale duality approaches in general and the PO method in particular. Finally, we consider a broad class of MDPs and introduce a new tractable method for computing bounds by considering information relaxations and introducing penalties. The method delivers tight bounds by identifying the best penalty function among a parameterized class of penalty functions. We implement our method on a high-dimensional financial application, namely optimal execution, and demonstrate the practical value of the method vis-a-vis competing methods available in the literature. In addition, we provide theory to show that bounds generated by our method are provably tighter than some of the other available approaches.
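The Bellman-equation starting point of this abstract can be made concrete with value iteration on a tiny MDP. The transition and reward numbers below are hypothetical; the thesis's LP-based methods are not reproduced here:

```python
import numpy as np

# Value iteration on a tiny 2-state, 2-action MDP (hypothetical numbers).
P = np.array([[[0.9, 0.1], [0.2, 0.8]],   # P[s, a, s']: transition probabilities
              [[0.5, 0.5], [0.1, 0.9]]])
R = np.array([[1.0, 0.0],                  # R[s, a]: immediate rewards
              [0.0, 2.0]])
gamma = 0.9

V = np.zeros(2)
for _ in range(500):
    Q = R + gamma * (P @ V)               # Q[s, a] = R[s, a] + gamma * E[V(s')]
    V_new = Q.max(axis=1)                 # Bellman optimality operator
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new
```

Because the Bellman operator is a gamma-contraction, the loop converges geometrically; the fixed point satisfies V = max_a (R + gamma * P V).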
- Book Chapter
- 10.1007/978-981-19-6561-6_3
- Jan 1, 2022
In the last few years, the applicability of the penalty function method, initiated by Zangwill [1] for the constrained optimization problem, has grown significantly. The penalty function approach transforms the constrained optimization problem into an unconstrained optimization problem while preserving the optimality of the original one. In this way, the solution sets of the unconstrained optimization problems ideally converge to the solution sets of the constrained optimization problems. This idea (the convergence of the solution sets of a constrained optimization problem and its associated unconstrained problem) has encouraged researchers to establish the equivalence between the solution sets of constrained and unconstrained problems under suitable assumptions for different kinds of optimization problems. Antczak [2] used an exact \(l_{1}\) penalty function method in convex nondifferentiable multi-objective optimization problems and established the equivalence between the solution set of the original problem and that of its associated penalized problem. Also, Alvarez [3], Antczak [4], and Liu and Feng [5] explored the exponential penalty function method for multi-objective optimization problems and established the relationships between the constrained and unconstrained optimization problems. On the other hand, Li et al. [6] used the penalty function method to solve the continuous inequality constrained optimal control problem. Thereafter, Jayswal and Preeti [7] extended the applicability of the penalty function method to the multi-dimensional optimization problem. Moreover, Jayswal et al. [8] explored the same for uncertain optimization problems under convexity assumptions.
- Research Article
13
- 10.1080/01630563.2013.806546
- Aug 3, 2013
- Numerical Functional Analysis and Optimization
In this article, we study an abstract constrained optimization problem that appears commonly in the optimal control of linear partial differential equations. The main emphasis of the present study is on the case when the ordering cone for the optimization problem has an empty interior. To circumvent this major difficulty, we propose a new conical regularization approach in which the main idea is to replace the ordering cone by a family of dilating cones. We devise a general regularization approach and use it to give a detailed convergence analysis for the conical regularization as well as a related regularization approach. We show that the conical regularization approach leads to a family of optimization problems that admit regular multipliers. The approach remains valid in the setting of general Hilbert spaces, and it does not require any sort of compactness or positivity condition on the operators involved. One of the main advantages of the approach is that it is amenable to numerical computation. We consider four different examples, two of them elliptic control problems with state constraints, and present numerical results that fully support our theoretical results and confirm the numerical feasibility of our approach. The motivation for the conical regularization is to overcome the difficulties associated with the lack of a Slater-type constraint qualification, which is a common hurdle in numerous branches of applied mathematics, including optimal control, inverse problems, vector optimization, set-valued optimization, sensitivity analysis, and variational inequalities, among others.
- Research Article
2
- 10.3934/jimo.2018072
- Jun 4, 2018
- Journal of Industrial & Management Optimization
The global optimal solution of the optimal switching problem is considered in discrete time, where the subsystems are linear and the cost functional is quadratic. The optimal switching problem is a discrete optimization problem, and a complete enumeration search is generally required to find the global optimal solution, which is very expensive. Relaxation is an effective way to transform the discrete optimization problem into a continuous one, but the resulting optimal solution is often not a feasible solution of the discrete problem. In this paper, we propose a special class of relaxation methods to transform the optimal switching problem into a relaxed optimization problem. We prove that the optimal solution of this modified relaxed optimization problem is exactly that of the optimal switching problem, so the global optimal solution can be obtained by solving the continuous optimization problem easily. Numerical examples demonstrate that the modified relaxation method is efficient and effective in obtaining the global optimal solution.
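The relaxation idea can be sketched on a toy switching problem. The scalar system below is our own choice; in this instance the relaxed optimum happens to land on a binary point, mirroring the exactness result claimed above:

```python
import numpy as np
from scipy.optimize import minimize

# Relaxation sketch (our toy example): switch between two scalar linear modes
#   x_{k+1} = a0 * x_k  (unstable)  or  x_{k+1} = a1 * x_k  (stable),
# minimizing the sum of x_k^2. The binary switch sigma_k in {0, 1} is relaxed
# to w_k in [0, 1] via x_{k+1} = ((1 - w_k)*a0 + w_k*a1) * x_k.
a0, a1 = 1.1, 0.5
N, x0 = 5, 1.0

def relaxed_cost(w):
    x, J = x0, 0.0
    for k in range(N):
        x = ((1.0 - w[k]) * a0 + w[k] * a1) * x
        J += x**2
    return J

res = minimize(relaxed_cost, 0.5 * np.ones(N), bounds=[(0.0, 1.0)] * N)
# Here the relaxed optimum is the binary point w = (1, ..., 1): always pick
# the stable mode, so the relaxation is exact for this instance.
```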
- Research Article
5
- 10.1007/s11044-024-09965-5
- Jan 29, 2024
- Multibody System Dynamics
The optimization of multibody systems requires accurate and efficient methods for sensitivity analysis. The adjoint method is probably the most efficient way to analyze sensitivities, especially for optimization problems with numerous optimization variables. This paper discusses sensitivity analysis for dynamic systems in gradient-based optimization problems. A discrete adjoint gradient approach is presented to compute sensitivities of equality and inequality constraints in dynamic simulations. The constraints are combined with the dynamic system equations, and the sensitivities are computed straightforwardly by solving discrete adjoint algebraic equations. The computation of these discrete adjoint gradients can be easily adapted to different time integrators. This paper demonstrates discrete adjoint gradients for two different time-integration schemes and highlights their efficiency and easy applicability. The proposed approach is particularly suitable for problems involving large-scale models or high-dimensional optimization spaces, where the computational effort of computing gradients by finite differences can be enormous. Three examples are investigated to validate the proposed discrete adjoint gradient approach. The sensitivity analysis of an academic example discusses the role of the discrete adjoint variables. The energy-optimal control problem of a nonlinear spring pendulum is analyzed to discuss the efficiency of the proposed approach. In addition, a flexible multibody system is investigated in a combined optimal control and design optimization problem. The combined optimization provides the best possible mechanical structure with respect to the optimal control problem within a single optimization run.
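The discrete adjoint recursion described above can be sketched for a scalar explicit-Euler integrator. The system and cost below are hypothetical, and the adjoint gradient is checked against finite differences:

```python
import numpy as np

# Discrete adjoint sketch for a scalar explicit-Euler scheme (hypothetical system):
#   x_{k+1} = x_k + h*(-a*x_k + u_k),   J = 0.5 * x_N^2.
# Adjoint recursion: lam_N = x_N,  lam_k = (1 - h*a) * lam_{k+1},  dJ/du_k = h * lam_{k+1}.
a, h, N = 0.5, 0.1, 20
u = np.linspace(0.0, 1.0, N)

def simulate(u):
    x = np.zeros(N + 1)
    for k in range(N):
        x[k + 1] = x[k] + h * (-a * x[k] + u[k])
    return x

x = simulate(u)

lam = np.zeros(N + 1)
lam[N] = x[N]                              # terminal condition: dJ/dx_N
grad = np.zeros(N)
for k in reversed(range(N)):
    grad[k] = h * lam[k + 1]               # sensitivity of J w.r.t. u_k
    lam[k] = (1.0 - h * a) * lam[k + 1]    # backward adjoint step

# Finite-difference check of the adjoint gradient.
eps = 1e-6
fd = np.array([(0.5 * simulate(u + eps * np.eye(N)[k])[-1]**2 - 0.5 * x[-1]**2) / eps
               for k in range(N)])
```

Note that one forward and one backward sweep give the whole gradient, whereas the finite-difference check needs one extra simulation per control variable; this is the efficiency argument made in the abstract.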
- Research Article
5
- 10.2514/1.j052006
- Nov 28, 2012
- AIAA Journal
From the viewpoint of practical computation, multidisciplinary design optimization (MDO) can be considered a collection of methods for solving complex optimization problems. We think that MDO is, in some sense, a bridge between conventional optimization algorithms and complex applications. Two main strategies are used in these MDO methods: approximation and decomposition. Although the two strategies are not mutually exclusive, they are applicable to problems with different properties. Some reported flight-vehicle configuration shape-optimization design problems, integrated with complex analysis models (e.g., computational fluid dynamics or computational structural mechanics), have a small number of design variables and constraints [1,2]. Methods using approximation, such as surrogate-based methods, are applicable to these problems: the original complex analysis models are replaced by corresponding approximate and relatively simple models, such as radial basis function and Kriging models, and the optimization is then performed on these approximate models. On the other hand, a flight-vehicle trajectory optimization-design problem with constraints of differential equations, also called optimal control, can be viewed as an infinite-dimensional extension of a common nonlinear optimization problem [3]; a practical solution is to convert the infinite-dimensional problem into a finite-dimensional one. Several conversion methods, for example direct shooting, multiple direct shooting, collocation, and pseudospectral methods, have been developed [3,4]. In some cases, the conversion results in a very high-dimensional nonlinear optimization problem with a large number of design variables and constraints [5]. Optimizers for large-scale optimization, such as SNOPT [6], have been presented.
In this Note, an alternative solution using collaborative optimization (CO), an MDO method with a decomposition strategy, is discussed. Compared with approximation, decomposition is more applicable to this kind of large-scale problem: the original large-scale problem is decomposed into several reduced subproblems. That is to say, the complex computational task of one optimization is decomposed into several relatively small computational tasks of several optimizations. Although the computational difficulties in CO have not been resolved ideally, the decomposition strategy is a natural and promising way to solve optimization problems with a large number of design variables and constraints. As far as we know, the solution of this kind of large-scale optimization problem, converted from an optimal control problem, by a decomposition method has not been reported, which is the main motivation for this Note. The organization of this Note is as follows: in Sec. II, we briefly review the collocation method for converting an optimal control problem into a nonlinear optimization problem; discussions on the decomposition of the converted problem using CO are presented in Sec. III. In Sec. IV, a numerical test case illustrates the discussions in Secs. II and III. The conclusions are stated in Sec. V.
- Conference Article
2
- 10.1109/cdc.1986.267278
- Dec 1, 1986
Quasi-Newton methods play an important role in the numerical solution of unconstrained optimization problems. Optimal control problems in their discretized form can be viewed as optimization problems and can therefore be solved by quasi-Newton methods. Since the discretized problems do not solve the original infinite-dimensional control problem but only approximate it to a certain accuracy, various approximations of the control problem need to be considered. It is known that an increase in the dimension of an optimization problem can have a negative effect on the convergence rate of the quasi-Newton method used to solve it. We want to investigate this behavior and explain how this drawback can be avoided for a class of optimal control problems. We show how to use the infinite-dimensional original problem to predict the speed of convergence of the BFGS method [1, 7, 10, 22] for the finite-dimensional approximations. In several papers [6, 14, 24, 27] the DFP method [4, 8] and its application to optimal control problems were considered, but rates of convergence were given at best for quadratic problems. In [25, 26] a linear rate of convergence was proved in Hilbert spaces and applied to optimal control. All the applications to optimal control problems were carried out for finite-dimensional approximations. This fact is important because in [23] it was shown that, contrary to the finite-dimensional case [2], the BFGS method can converge very slowly when applied to an infinite-dimensional problem. Hence it is desirable to know whether this convergence behavior can also occur for fine discretizations of control problems. Sufficient ([19]) and characteristic ([12]) conditions for the superlinear rate were given in other analyses. As in the linear case for Broyden's method [28] and the conjugate gradient method [3], [9], an additional assumption on the initial approximation of the Hessian, i.e. that it approximates the true Hessian up to a compact operator, is needed to guarantee superlinear convergence; see [11]. In [9] a connection to quadratic control problems is shown. Here we consider nonlinear control problems and their discretization.
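The BFGS method discussed above can be sketched in its finite-dimensional inverse-Hessian form with a simple backtracking line search. This is a generic textbook sketch on a toy quadratic of our own, not the paper's infinite-dimensional analysis:

```python
import numpy as np

# Finite-dimensional BFGS sketch (inverse-Hessian update, backtracking line search).

def bfgs_minimize(f, grad, x0, iters=100, tol=1e-10):
    n = len(x0)
    H = np.eye(n)                     # inverse-Hessian approximation, H_0 = I
    x = x0.astype(float)
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                    # quasi-Newton search direction
        t, fx, gp = 1.0, f(x), g @ p
        while f(x + t * p) > fx + 1e-4 * t * gp and t > 1e-12:
            t *= 0.5                  # simple Armijo backtracking
        x_new = x + t * p
        g_new = grad(x_new)
        s, y = x_new - x, g_new - g
        sy = s @ y
        if sy > 1e-12:                # curvature condition; skip update otherwise
            rho = 1.0 / sy
            I = np.eye(n)
            H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
                + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Example: an ill-conditioned convex quadratic, minimized at the origin.
A = np.diag([1.0, 10.0])
x_opt = bfgs_minimize(lambda z: 0.5 * z @ A @ z, lambda z: A @ z, np.array([3.0, -4.0]))
```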
- Research Article
8
- 10.1007/s004490050424
- Jan 1, 1998
- Bioprocess Engineering
A decomposition method for solving two optimal control problems and one optimization problem in batch fermentation is proposed. The problems are formulated based on a nonstructured mathematical model with slowly varying parameters and a finite cost criterion of maximum end production. Dependence of the model parameters on one physical or chemical parameter, which could easily be used as a control input, is introduced analytically in the model equations, and three model descriptions are obtained as nonlinear difference equations. Sensitivity functions of the state trajectories with respect to the slowly varying coefficients are introduced to account for model uncertainties. Based on them, extended optimal control and optimization problems are formulated.