This research develops a min–max robust control strategy for a dynamic game among pursuers, evaders, and defenders in a multiple-missile scenario. The approach employs neural dynamic programming based on multiple continuous differential neural networks (DNNs). The resulting competitive controller solves the robust optimization of a joint cost function that depends on the trajectories of the pursuer–evader–defender system, accounting for an uncertain mathematical model while satisfying control restrictions. The min–max dynamic programming formulation yields robust control by accounting for bounded modeling uncertainties and external disturbances affecting each game component. A DNN approximates the value function of the Hamilton–Jacobi–Bellman (HJB) equation, enabling estimation of the closed-loop solution of the joint dynamic game with state restrictions. The controller design rests on estimating the state trajectory under the worst admissible uncertainties and perturbations, which provides the robustness of the neural controller. The class of learning laws for the time-varying DNN weights is derived by analyzing the HJB partial differential equation associated with the missile motion of each player in the dynamic game. The controller combines the solution of these learning laws with a time-varying Riccati equation, yielding an online implementation of the control law. A recurrent algorithm based on the Kiefer–Wolfowitz method adjusts the initial conditions of the weights so that the final condition of the given cost function for the dynamic game is satisfied. A numerical example validates the proposed robust control methodology, confirming the optimization solution based on the DNN approximation of Bellman's value function.
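As a rough illustration of the last step, the sketch below shows a generic Kiefer–Wolfowitz finite-difference stochastic-approximation update for tuning the initial DNN weights so that a terminal-condition residual of the cost function is driven toward zero. It is only a minimal sketch under assumed notation, not the paper's algorithm: the function terminal_residual, the step and perturbation schedules, and the flat weight vector w0 are all hypothetical placeholders.

    import numpy as np

    def kiefer_wolfowitz(terminal_residual, w0, n_iter=200):
        # terminal_residual(w): hypothetical scalar measure of how far the
        # cost-function final condition is from being satisfied for initial
        # weights w; it is assumed to be evaluable (e.g., by simulating the game).
        w = np.asarray(w0, dtype=float)
        for n in range(1, n_iter + 1):
            a_n = 1.0 / n              # step sizes: sum diverges, terms vanish
            c_n = 1.0 / n ** (1 / 3)   # finite-difference perturbation, c_n -> 0
            grad = np.zeros_like(w)
            for i in range(w.size):    # two-sided finite-difference gradient estimate
                e = np.zeros_like(w)
                e[i] = c_n
                grad[i] = (terminal_residual(w + e) - terminal_residual(w - e)) / (2 * c_n)
            w = w - a_n * grad         # recurrent adjustment of the initial weights
        return w

In this kind of scheme, the diminishing step and perturbation sequences are what give the recurrence its convergence properties; the specific schedules and residual used in the paper would replace the placeholders above.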