Abstract

For optimal control problems in Mayer form in which all controls appear only linearly in the equations of motion, this paper presents a method for determining the optimal switching structure and for calculating suboptimal approximations to the optimal control solution. The method requires no a priori knowledge about the optimal solution and no user-provided initial guesses, except in the free-final-time case, where a rough guess for the final time is needed. By discretizing the control functions of time into piecewise-constant functions on a user-chosen equidistant subdivision of the total time interval, the optimal control problem is reduced to a finite-dimensional nonlinear programming problem. Initial guesses for a gradient search method are found by employing a genetic algorithm (GA). High efficiency of the GA is achieved by representing each control value by a single-digit binary number (substring length 1), hence allowing the controls to take on only their upper and lower limits. As a numerical example, minimum-time spacecraft reorientation trajectories are generated. Comparison with the known optimal control solutions shows that the method never failed to correctly determine the optimal switching structure.

*Senior Project Engineer, Member AIAA. †Senior Project Engineer, Member AIAA. ‡Project Engineer.
Copyright © 1992 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.

Introduction

In the collocation method, which is the basis for the OTIS program, both controls and states are approximated by third-order splines. This approach dramatically eases the search for initial guesses, as it allows for state and control histories that are inconsistent with respect to the differential equations connecting them. Of course, this advantage has to be paid for by an increase in the problem dimension stemming from the additional parameters needed to represent the state histories. For a fixed machine capacity, this translates into a reduced resolution of the optimal control actions within the chosen finite-dimensional parameter space. This may be a serious restriction for problems in which the controls appear only linearly in the equations of motion. It is well known that for such problems the optimal control strategy consists of some sequence of bang-bang arcs, where the controls ride their upper or lower limits, and, possibly, singular arcs, where the controls assume intermediate values. The switching structure, i.e., the sequence in which the different arcs become active, is not known in advance and has to be determined by the analyst. Of course, a low-resolution suboptimal control function of time may smear out the crisp switching between arcs of different control logic, making it impossible to correctly identify the optimal switching structure.

The discretization proposed in this paper is as follows. First, the time interval on which the trajectory is defined is divided into, say, N subintervals. On every subinterval, each of the controls is kept constant, and the numerical values of the controls are defined as the parameters to be adjusted by the parameter-optimization software such that the boundary conditions of the trajectory are satisfied and the performance index is minimized. Hence, with m controls and N subintervals, this yields a set of m·N parameters to be optimized. Without loss of generality, all controls can be considered bounded between 0 and 1, thus implying the same constraints on each of the m·N parameters of the nonlinear programming problem. This discretization is designed such that a genetic algorithm (GA) can be employed in a very efficient way to generate initial guesses to be utilized by a gradient search method. Specifically, the GA minimizes a weighted sum of the original cost function and the absolute values of the amounts by which the boundary conditions are violated. High efficiency is achieved by representing each of the m·
N control parameters by a binary substring of length one. A theoretical justification for this approach lies in the fact that, for linearly appearing controls, rapid chattering of the control values between their upper and lower limits can produce average state rates arbitrarily close to those associated with intermediate control values.

Problem Formulation

We consider optimal control problems of the following general form:

\[
\min_{u \in (\mathrm{PWC}[t_0, t_f])^m} \; \Phi\bigl(x(t_f), t_f\bigr) \qquad (1)
\]

subject to the equations of motion

\[
\dot{x}(t) = a\bigl(x(t), t\bigr) + \sum_{i=1}^{m} b_i\bigl(x(t), t\bigr)\, u_i(t), \qquad x(t_0) = x_0.
\]

The state vector x(t) is the unique solution of this initial value problem. It is well known [1] that x(t_f) is a smooth function of the parameters u_ij, i = 1, ..., m, j = 1, ..., N, and the final time t_f. Hence, with vector U defined by
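The piecewise-constant discretization described above can be sketched in a few lines. The following is a minimal illustration, not the paper's code: the helper names (`control_from_params`, `integrate`) and the fixed-step RK4 integrator are assumptions, and the double-integrator dynamics used in the usage note below are a toy example rather than the spacecraft reorientation problem of the paper.

```python
import numpy as np

def control_from_params(p, t, t0, tf, m, N):
    """Evaluate the m piecewise-constant controls at time t.

    p is the flat parameter vector of length m*N; each control u_i is held
    constant on each of the N equidistant subintervals of [t0, tf], and all
    parameters are normalized to lie between 0 and 1.
    """
    j = min(int((t - t0) / (tf - t0) * N), N - 1)  # subinterval index
    return p.reshape(m, N)[:, j]

def integrate(p, x0, t0, tf, m, N, a, b, steps=400):
    """Fixed-step RK4 integration of xdot = a(x,t) + sum_i b_i(x,t) u_i(t),
    i.e., dynamics in which the controls appear only linearly."""
    h = (tf - t0) / steps
    x, t = np.array(x0, float), t0

    def f(x, t):
        u = control_from_params(p, t, t0, tf, m, N)
        return a(x, t) + sum(u[i] * b[i](x, t) for i in range(m))

    for _ in range(steps):
        k1 = f(x, t)
        k2 = f(x + 0.5 * h * k1, t + 0.5 * h)
        k3 = f(x + 0.5 * h * k2, t + 0.5 * h)
        k4 = f(x + h * k3, t + h)
        x = x + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
        t += h
    return x
```

For a toy double integrator (a(x,t) = [x2, 0], b_1(x,t) = [0, 1]) with m = 1, N = 4, and all control parameters at their upper bound 1, integrating over [0, 1] yields the expected final state x(1) = [0.5, 1.0]. The m·N entries of p are exactly the unknowns handed to the nonlinear programming solver.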

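The GA initial-guess strategy, with each of the m·N control parameters encoded as a single bit so that every control sits on its lower or upper bound, can be sketched as follows. This is a hedged illustration only: the operator choices (tournament selection, one-point crossover, bit-flip mutation, elitism), the population and generation counts, and the toy fitness used in the usage note are assumptions, not the paper's specific GA settings.

```python
import numpy as np

def ga_initial_guess(fitness, n_bits, pop=30, gens=40, pmut=0.02, seed=0):
    """Minimal GA over length-one binary substrings.

    Each chromosome is a 0/1 vector of length n_bits = m*N; a bit value is
    a control parameter riding its lower or upper limit, mimicking
    bang-bang/chattering arcs.  fitness(chrom) should return the weighted
    sum of cost function and boundary-condition violations (lower = better).
    """
    rng = np.random.default_rng(seed)
    P = rng.integers(0, 2, (pop, n_bits))
    f = np.array([fitness(c) for c in P])
    for _ in range(gens):
        # Binary tournament selection.
        i, j = rng.integers(0, pop, (2, pop))
        parents = P[np.where(f[i] < f[j], i, j)]
        # One-point crossover between consecutive parents.
        cut = rng.integers(1, n_bits, pop)
        kids = parents.copy()
        for k in range(0, pop - 1, 2):
            c = cut[k]
            kids[k, c:], kids[k + 1, c:] = parents[k + 1, c:].copy(), parents[k, c:].copy()
        # Bit-flip mutation.
        kids ^= (rng.random((pop, n_bits)) < pmut).astype(int)
        # Elitism: carry the best individual over unmutated.
        kids[0] = P[np.argmin(f)]
        P = kids
        f = np.array([fitness(c) for c in P])
    best = int(np.argmin(f))
    return P[best], f[best]
```

As a toy stand-in for the weighted boundary-condition penalty, one can ask the GA to match a final velocity of 0.5 for the double integrator over [0, 1] with N = 8 bang-bang controls, where x2(tf) equals the mean of the bits; the returned bit string then serves as the initial guess for the gradient search.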