Optimal Control Problem of Evolution Equation Governed by Hypergraph Laplacian
3
- 10.1002/mma.9068
- Jan 27, 2023
- Mathematical Methods in the Applied Sciences
6
- 10.1007/bfb0061795
- Jan 1, 1978
93
- 10.1007/bf02411939
- Dec 1, 1979
- Annali di Matematica Pura ed Applicata
580
- 10.1007/978-1-4419-5542-5
- Jan 1, 2010
84
- 10.1007/bf02761596
- Dec 1, 1975
- Israel Journal of Mathematics
95
- 10.1016/j.aim.2019.05.025
- May 29, 2019
- Advances in Mathematics
1
- 10.2748/tmj.20211202
- Mar 1, 2023
- Tohoku Mathematical Journal
14427
- 10.1016/s0169-7552(98)00110-x
- Apr 1, 1998
- Computer Networks and ISDN Systems
32
- 10.1007/s00245-002-0739-1
- Dec 19, 2002
- Applied Mathematics and Optimization
- 10.1016/j.jmaa.2024.128675
- Jul 10, 2024
- Journal of Mathematical Analysis and Applications
- Research Article
- 10.2514/1.g007311
- May 9, 2023
- Journal of Guidance, Control, and Dynamics
State Transition Tensors for Continuous-Thrust Control of Three-Body Relative Motion
- Research Article
69
- 10.1137/s0363012901385769
- Jan 1, 2002
- SIAM Journal on Control and Optimization
This work is concerned with maximum principles for optimal control problems governed by the 3-dimensional Navier-Stokes equations. Several types of state constraints involving the time variable are considered.
- Research Article
198
- 10.1021/ie00095a010
- Nov 1, 1989
- Industrial & Engineering Chemistry Research
Accurate solution of differential-algebraic optimization problems
- Research Article
8
- 10.1016/j.amc.2011.05.093
- Jul 7, 2011
- Applied Mathematics and Computation
A numerical method for an optimal control problem with minimum sensitivity on coefficient variation
- Research Article
391
- 10.1115/1.1483351
- Jul 1, 2002
- Applied Mechanics Reviews
Practical Methods for Optimal Control using Nonlinear Programming
- Conference Article
3
- 10.7148/2009-0352-0358
- Jun 9, 2009
A neural network based optimal control synthesis is presented for solving optimal control problems with control and state constraints. The optimal control problem is transcribed into a nonlinear programming problem which is implemented with an adaptive critic neural network. The proposed simulation method is illustrated by the optimal control problem of a nitrogen transformation cycle model. Results show that the adaptive critic based systematic approach holds promise for obtaining the optimal control with control and state constraints.

INTRODUCTION. Optimal control of nonlinear systems is one of the most active subjects in control theory. There is rarely an analytical solution, although several numerical computation approaches have been proposed for solving an optimal control problem (for example, see (Polak, 1997), (Kirk, 1998)). Most of the literature that deals with numerical methods for the solution of general optimal control problems focuses on algorithms for solving discretized problems. The basic idea of these methods is to apply nonlinear programming techniques to the resulting finite-dimensional optimization problem (Buskens et al., 2000). When Euler integration methods are used, the recursive structure of the resulting discrete-time dynamics can be exploited in computing first-order necessary conditions. In recent years, multi-layer feedforward neural networks have been used for obtaining numerical solutions to the optimal control problem (Padhi et al., 2001), (Padhi et al., 2006). We have taken a hyperbolic tangent sigmoid transfer function for the hidden layer and a linear transfer function for the output layer. The paper extends the adaptive critic neural network architecture proposed by (Padhi et al., 2001) to optimal control problems with control and state constraints.

The paper is organized as follows. In Section 2, the optimal control problems with control and state constraints are introduced. We summarize necessary optimality conditions and give a short overview of basic results, including the iterative numerical methods. Section 3 discusses discretization methods for the given optimal control problem and the form of the resulting nonlinear programming problems. Section 4 presents a short description of the adaptive critic neural network synthesis for the optimal control problem with state and control constraints. Section 5 describes a nitrogen transformation model. In Section 6, we apply the discussed methods to the nitrogen transformation cycle; the goal is to compare short-term and long-term strategies of assimilation of nitrogen compounds. Conclusions are presented in Section 7.

OPTIMAL CONTROL PROBLEM. We consider a nonlinear control problem subject to control and state constraints. Let x(t) ∈ R denote the state of a system and u(t) ∈ R the control on a given time interval [t0, tf]. The optimal control problem is to minimize

F(x, u) = g(x(tf)) + ∫_{t0}^{tf} f0(x(t), u(t)) dt    (1)
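The direct transcription described in the abstract above, Euler-discretizing the dynamics and handing the finite-dimensional problem to a nonlinear programming solver, can be sketched as follows. The scalar dynamics, costs, and control bounds below are illustrative stand-ins, not the paper's nitrogen-cycle model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy Bolza problem of the form (1): minimize  g(x(tf)) + ∫ f0 dt
# with g(x) = x^2, f0(x, u) = u^2, dynamics x' = -x + u, x(0) = 1,
# and the control constraint |u| <= 2 (all hypothetical choices).
t0, tf, N = 0.0, 1.0, 50
h = (tf - t0) / N
x_init = 1.0

def cost(u):
    # Euler-integrate the state while accumulating the running cost f0 = u^2
    x, J = x_init, 0.0
    for uk in u:
        J += h * uk**2
        x += h * (-x + uk)
    return J + x**2  # add the terminal cost g(x(tf))

# Transcribed nonlinear program: the decision vector is the control on the grid
res = minimize(cost, np.zeros(N), method="SLSQP", bounds=[(-2.0, 2.0)] * N)
```

The recursive structure mentioned in the abstract is visible in the loop: each Euler step depends only on the previous state, which is what adjoint-style gradient recursions exploit.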
- Research Article
3
- 10.1002/oca.2974
- Jan 17, 2023
- Optimal Control Applications and Methods
Special issue on “Optimal design and operation of energy systems”
- Research Article
2
- 10.7498/aps.66.084501
- Jan 1, 2017
- Acta Physica Sinica
In general, optimal control problems rely on numerical rather than analytical solution methods, due to their nonlinearities. The direct method, one of these numerical approaches, transforms the optimal control problem into a finite-dimensional nonlinear optimization problem by discretizing the objective functional and the forced dynamical equations directly. However, in the direct method the classical discretizations of the forced equations can reduce the accuracy of the resulting optimization problem as well as of the discrete optimal control. In view of this fact, more accurate and efficient numerical algorithms should be employed to approximate the forced dynamical equations. As verified, the discrete variational difference schemes for forced Birkhoffian systems exhibit excellent numerical behavior in terms of high accuracy, long-time stability, and precise energy prediction. Thus the forced dynamical equations in optimal control problems, after being represented as forced Birkhoffian equations, can be discretized according to the discrete variational difference schemes for forced Birkhoffian systems. Compared with discretizing the forced dynamical equations by traditional difference schemes, this approach yields faithful nonlinear optimization problems and consequently gives accurate and efficient discrete optimal controls. In this paper we apply the proposed method to the rendezvous and docking problem of spacecraft. First, we make a reasonable simplification: the rendezvous and docking process of two spacecraft is reduced to the problem of optimally transferring the chaser spacecraft, with a continuously acting force, from one circular orbit around the Earth to another. During this transfer, the goal is to minimize the control effort.
Second, the dynamical equations of the chaser spacecraft are represented in the form of forced Birkhoffian equations, so that the discrete variational difference scheme for forced Birkhoffian systems can be employed to discretize the chaser spacecraft's equations of motion. After further discretizing the control effort and the boundary conditions, the resulting nonlinear optimization problem is obtained. Finally, the optimization problem is solved directly by nonlinear programming and the discrete optimal control is achieved. The obtained optimal control is efficient enough to realize the rendezvous and docking process, even though it is only an approximation of the continuous one. Simulation results fully verify the efficiency of the proposed method, especially considering that the time step is chosen to be very large in order to limit the dimension of the optimization problem.
- Research Article
3
- 10.3934/math.2022510
- Jan 1, 2022
- AIMS Mathematics
This paper considers an optimal feedback control problem for a class of fed-batch fermentation processes. Our main contributions are as follows. Firstly, a dynamic optimization problem for fed-batch fermentation processes is modeled as an optimal control problem of switched dynamical systems, and a general state-feedback controller is designed for this dynamic optimization problem. Unlike existing switched dynamical system optimal control problems, the state-dependent switching method is applied to design the switching rule, and the structure of this state-feedback controller is not restricted to a particular form. Then, this problem is transformed into a mixed-integer optimal control problem by introducing a discrete-valued function. Furthermore, each of these discrete variables is represented by using a set of 0-1 variables. By using a quadratic constraint, these 0-1 variables are relaxed such that they are continuous on the closed interval [0, 1]. Accordingly, the original mixed-integer optimal control problem is transformed into a nonlinear parameter optimization problem. Unlike existing works, the constraints introduced for these 0-1 variables are at most quadratic, so they do not increase the number of locally optimal solutions of the original problem. Next, an improved gradient-based algorithm is developed based on a novel search approach, and a large number of numerical experiments show that this novel search approach can effectively improve the convergence speed of the algorithm when an iteration is trapped in a curved narrow valley bottom of the objective function. Finally, numerical results illustrate the effectiveness of the method developed in this paper.
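The 0-1 relaxation idea in the abstract above can be illustrated on a toy problem. The abstract imposes a quadratic constraint; the sketch below uses the closely related quadratic penalty ρ·Σ z(1-z), which is nonnegative on [0, 1] and vanishes exactly at binary points. The objective, coefficients, and ρ are all illustrative assumptions, not the paper's fermentation model:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: choose z in {0,1}^3 to minimize c·z + 0.1*z·z.
# Relax z to the box [0,1]^3 and add rho*sum(z*(1-z)); this term is zero
# precisely at binary points, so for large enough rho the relaxed
# minimizer snaps back to a 0-1 vector.
c = np.array([0.8, -0.3, 0.4])
rho = 2.0  # hypothetical penalty weight, large enough here

def penalized(z):
    return c @ z + 0.1 * (z @ z) + rho * np.sum(z * (1 - z))

res = minimize(penalized, np.full(3, 0.5), method="L-BFGS-B",
               bounds=[(0.0, 1.0)] * 3)
```

On this instance the relaxed solution lands at the binary point [0, 1, 0]: only the second coordinate has a negative net cost, so switching it on is the only profitable choice.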
- Conference Article
2
- 10.1109/cdc.1986.267278
- Dec 1, 1986
Quasi-Newton methods play an important role in the numerical solution of problems in unconstrained optimization. Optimal control problems in their discretized form can be viewed as optimization problems and therefore be solved by quasi-Newton methods. Since the discretized problems do not solve the original infinite-dimensional control problem but rather approximate it up to a certain accuracy, various approximations of the control problem need to be considered. It is known that an increase in the dimension of an optimization problem can have a negative effect on the convergence rate of the quasi-Newton method used to solve it. We want to investigate this behavior and explain how this drawback can be avoided for a class of optimal control problems. We show how to use the infinite-dimensional original problem to predict the speed of convergence of the BFGS method [1, 7, 10, 22] for the finite-dimensional approximations. In several papers [6, 14, 24, 27] the DFP method [4, 8] and its application to optimal control problems were considered, but rates of convergence were given at best for quadratic problems. In [25, 26] a linear rate of convergence was proved in Hilbert spaces and applied to optimal control. All the applications to optimal control problems were carried out for finite-dimensional approximations. This fact is important because in [23] it was shown that, contrary to the finite-dimensional case [2], the BFGS method can converge very slowly when applied to an infinite-dimensional problem. Hence it is desirable to know whether this convergence behavior can also occur for fine discretizations of control problems. Sufficient ([19]) and characteristic ([12]) conditions for the superlinear rate were given in other analyses. As in the linear case for Broyden's method [28] and the conjugate gradient method [3], [9], an additional assumption on the initial approximation of the Hessian, namely that it approximates the true Hessian up to a compact operator, is needed to guarantee superlinear convergence; see [11]. In [9] a connection to quadratic control problems is shown. Here we want to consider nonlinear control problems and their discretization.
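The mesh-dependence question raised above can be probed numerically. The scalar LQ-type problem below is a stand-in chosen only because its discretized Hessian stays well-conditioned as N grows, so BFGS iteration counts barely change as the mesh is refined; it is not a reconstruction of the paper's problem class:

```python
import numpy as np
from scipy.optimize import minimize

# Discretize  min x(1)^2 + ∫ u(t)^2 dt,  x' = -x + u,  x(0) = 1
# by explicit Euler on N intervals and solve the resulting unconstrained
# finite-dimensional problem with BFGS for several mesh sizes.
def make_cost(N):
    h = 1.0 / N
    def cost(u):
        x = 1.0
        for uk in u:
            x += h * (-x + uk)
        return h * np.sum(np.asarray(u) ** 2) + x**2
    return cost

iters = {}
for N in (10, 20, 40):
    res = minimize(make_cost(N), np.zeros(N), method="BFGS")
    iters[N] = res.nit  # iteration count per discretization level
```

Whether iteration counts stay flat as N grows is exactly the discretization-independence behavior the paper analyzes; in infinite dimensions it can fail, as [23] showed.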
- Research Article
- 10.25972/opus-18217
- Jan 1, 2019
A sequential quadratic Hamiltonian scheme for solving optimal control problems with non-smooth cost functionals
- Research Article
29
- 10.1007/s12273-012-0061-z
- Mar 3, 2012
- Building Simulation
The current study investigates the optimal operation of an air-to-water heat pump system. To this end, the control problem is formulated as a classic optimal control or dynamic optimization problem. As conflicting objectives arise, namely, minimizing energy cost while maximizing thermal comfort, the optimization problem is tackled from a multi-objective optimization perspective. The adopted system model incorporates the building dynamics and the heat pump characteristics. Because of the state-dependency of the coefficient of performance (COP), the optimal control problem (OCP) is nonlinear. If the COP is approximated by a constant value, the OCP becomes convex, which is easier to solve. The current study investigates how this approximation affects the control performance. The optimal control problems are solved using the freely available Automatic Control And Dynamic Optimization toolkit ACADO. It is found that the lower the weighting factor for thermal discomfort is, the higher the discrepancy is between the nonlinear and convex OCP formulations. For a weighting factor resulting in a quadratic mean difference of 0.5°C between the zone temperature and its reference temperature, the difference in electricity cost amounts to 4% for a first scenario with fixed electricity price, and up to 6% for a second scenario with a day and night variation in electricity price.
- Research Article
4
- 10.1007/s11044-024-09965-5
- Jan 29, 2024
- Multibody System Dynamics
The optimization of multibody systems requires accurate and efficient methods for sensitivity analysis. The adjoint method is probably the most efficient way to analyze sensitivities, especially for optimization problems with numerous optimization variables. This paper discusses sensitivity analysis for dynamic systems in gradient-based optimization problems. A discrete adjoint gradient approach is presented to compute sensitivities of equality and inequality constraints in dynamic simulations. The constraints are combined with the dynamic system equations, and the sensitivities are computed straightforwardly by solving discrete adjoint algebraic equations. The computation of these discrete adjoint gradients can be easily adapted to deal with different time integrators. This paper demonstrates discrete adjoint gradients for two different time-integration schemes and highlights efficiency and easy applicability. The proposed approach is particularly suitable for problems involving large-scale models or high-dimensional optimization spaces, where the computational effort of computing gradients by finite differences can be enormous. Three examples are investigated to validate the proposed discrete adjoint gradient approach. The sensitivity analysis of an academic example discusses the role of discrete adjoint variables. The energy optimal control problem of a nonlinear spring pendulum is analyzed to discuss the efficiency of the proposed approach. In addition, a flexible multibody system is investigated in a combined optimal control and design optimization problem. The combined optimization provides the best possible mechanical structure regarding an optimal control problem within one optimization.
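The discrete adjoint idea summarized above, solving adjoint equations backward through the same time grid the integrator used, can be sketched for an explicit Euler scheme. The scalar dynamics and cost below are an illustrative stand-in for the paper's multibody models, with the gradient checked against a finite difference as the paper's examples do:

```python
import numpy as np

# Discrete adjoint gradient for an explicit-Euler integrator on the toy
# problem  x_{k+1} = x_k + h*(-x_k + u_k),  J = sum_k h*u_k^2 + x_N^2.
h, N, x0 = 0.02, 50, 1.0

def forward(u):
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = x[k] + h * (-x[k] + u[k])
    return x

def cost(u):
    return h * np.sum(u**2) + forward(u)[-1] ** 2

def adjoint_gradient(u):
    x = forward(u)
    lam = 2.0 * x[-1]             # terminal adjoint: ∂(x_N^2)/∂x_N
    grad = np.empty(N)
    for k in reversed(range(N)):  # one backward sweep, no extra simulations
        grad[k] = 2.0 * h * u[k] + lam * h  # ∂x_{k+1}/∂u_k = h
        lam *= 1.0 - h                      # ∂x_{k+1}/∂x_k = 1 - h
    return grad

# Validate one gradient component against a central finite difference
u = np.linspace(-1.0, 1.0, N)
g_adj = adjoint_gradient(u)
eps = 1e-6
e = np.zeros(N)
e[10] = eps
g_fd = (cost(u + e) - cost(u - e)) / (2 * eps)
```

The efficiency argument in the abstract shows up in the operation count: the backward sweep costs one extra pass regardless of the number of control variables, whereas finite differences would need 2N cost evaluations.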
- Research Article
- 10.1299/jsmec.44.374
- Jan 1, 2001
- JSME International Journal Series C
Problem to date: Similar in nature to the knapsack problem and the indivisible investment problem, there exist static optimization problems whose variables take both discrete and continuous values, i.e. mixed-integer programming problems, one class of mixed programming problems. Relaxation methods, group methods, etc. have been used for such problems hitherto. As with the dynamic indivisible investment problem and the resource allotment or personnel disposition problem over several periods, there are few optimal control methods for the mixed-quantized dynamical optimal control problem, whose variables take both discrete and continuous values. The proposed method in this paper: For the mixed-quantized optimal control problem in which the state equation is linear, the control problem is given by the formulation of Halkin's discrete-time optimal control problem. The mixed-quantized discrete maximum principle is given as the algorithm for this control problem, where, for the maximization of the Hamiltonian at each discrete time, a relaxation method (an improved branch-and-bound method) is used. Effects obtained in this paper: As applications of this control problem, the dynamic investment and allotment problems over several periods are considered. The solution to the discrete-time mixed-quantized optimal control problem is given, and the efficiency of this method (the mixed-quantized discrete maximum principle), which has rarely been applied in this field, is shown along with a numerical example.
- Conference Article
24
- 10.1109/iros.2013.6696470
- Nov 1, 2013
In this paper we investigate the use of optimal control techniques to improve Functional Electrical Stimulation (FES) for drop foot correction in hemiplegic patients. A model of the foot and the tibialis anterior muscle, whose contraction is controlled by electrical stimulation, has been established and is used in the optimal control problem. The novelty in this work is the use of the ankle accelerations and shank orientations (so-called external states) in the model, which were measured on hemiplegic patients in a previous experiment using Inertial Measurement Units (IMUs). The optimal control problem minimizes the square of muscle excitations, which serves the overall goal of reducing energy consumption in the muscle. In a first step, an offline optimal control problem is solved for test purposes and shows the efficiency of FES optimal control for drop foot correction. In a second step, a Nonlinear Model Predictive Control (NMPC) problem, i.e. an online optimal control problem, is solved in a simulated environment. While the ultimate goal is to use NMPC on the real system, i.e. directly on the patient, this test in simulation was meant to show the feasibility of NMPC for online drop foot correction. In the optimization problem, a set of fixed constraints on foot orientation was applied. Then an original adaptive constraint, taking into account the current ankle height, was introduced and tested. Comparisons between results under fixed and adaptive constraints highlight the advantage of the adaptive constraints in terms of energy consumption: the quadratic sum of controls obtained by NMPC was three times lower than with the fixed constraints. This feasibility study was a first step toward the application of NMPC on real hemiplegic patients for online FES-based drop foot correction. The adaptive constraints method presents a new and efficient approach in terms of muscular energy consumption minimization.
- Research Article
- 10.1007/s00245-025-10325-8
- Nov 4, 2025
- Applied Mathematics & Optimization
- Research Article
- 10.1007/s00245-025-10333-8
- Oct 28, 2025
- Applied Mathematics & Optimization
- Research Article
- 10.1007/s00245-025-10326-7
- Oct 28, 2025
- Applied Mathematics & Optimization
- Research Article
- 10.1007/s00245-025-10336-5
- Oct 28, 2025
- Applied Mathematics & Optimization
- Research Article
- 10.1007/s00245-025-10335-6
- Oct 25, 2025
- Applied Mathematics & Optimization
- Research Article
- 10.1007/s00245-025-10317-8
- Oct 13, 2025
- Applied Mathematics & Optimization
- Research Article
- 10.1007/s00245-025-10320-z
- Oct 13, 2025
- Applied Mathematics & Optimization
- Research Article
- 10.1007/s00245-025-10329-4
- Oct 1, 2025
- Applied Mathematics & Optimization
- Research Article
- 10.1007/s00245-025-10340-9
- Oct 1, 2025
- Applied Mathematics & Optimization
- Research Article
- 10.1007/s00245-025-10327-6
- Sep 30, 2025
- Applied Mathematics & Optimization