Abstract

Piecewise deterministic control problems involve stochastic disturbances of a special type. In an otherwise deterministic control system, the state may jump at certain stochastic points in time; examples are sudden oil finds or sudden discoveries of metal deposits. Similarly, in seemingly deterministic processes the dynamics may suddenly change character: at certain stochastic points in time, the right-hand side of the differential equation governing the system changes form, such changes being effected by jumps in a (dummy) state variable. Examples of such phenomena are sudden inventions, ecological disasters, earthquakes, floods, storms, fires, or the sudden capture of a criminal, events that abruptly change the prospects of the firm, the society, or the agriculture concerned. Several papers have discussed such problems, often using more or less ad hoc methods. (Sometimes it is possible to rewrite the problem so that deterministic control theory applies.) A systematic method for solving such problems, based on the HJB equation (the Hamilton-Jacobi-Bellman equation) for the problem, is presented in Davis (1993), Markov Models and Optimization, and is also briefly discussed below. In this paper a related method, closer to deterministic control theory, is presented first; it is easiest to apply to problems with a bound on the number of possible jumps. Thus, the main purpose of this paper is to show how some piecewise deterministic optimal control problems can be solved by techniques similar to those used in deterministic problems. The paper includes statements of several theoretical results. Proofs are given for the results involving the HJB equation and fields of extremals (for the HJB equation, replicating the ones in Davis (1993)).
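For orientation, the HJB equation for such a process takes the following form in the stationary, discounted, infinite-horizon case; this is a sketch under assumed notation, not taken from the paper itself: f is the deterministic drift, \ell the running reward, \lambda the jump intensity, Q the post-jump distribution of the state, and \rho the discount rate.

    % HJB equation for a piecewise deterministic process (sketch; notation assumed)
    \rho V(x) = \sup_{u \in U} \Big\{ \ell(x,u) + f(x,u) \cdot \nabla V(x)
        + \lambda(x,u) \Big[ \int V(y) \, Q(dy \mid x, u) - V(x) \Big] \Big\}

Dropping the last term recovers the ordinary deterministic HJB equation; the jump term says that, at rate \lambda, the current value V(x) is exchanged for its expected post-jump value.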
