Abstract

The optimal control of a multi-input system can be described as a multiplayer nonzero-sum differential game. This article presents an event-based adaptive learning scheme that theoretically approximates the Nash equilibrium and practically addresses the cruise control problem for Caltech vehicle systems. The design has two aspects. On the one hand, reinforcement learning is implemented through a critic neural network architecture together with the recall of stored experience data. On the other hand, since each player's preference differs, a decentralized triggering scheme is adopted to reduce communication. Based on the continuous state, a local sampled state is defined for each player, and a static triggering mechanism is formulated first. Decentralized dynamic triggering is then developed by designing an auxiliary variable whose dynamics are constructed from the static triggering information. Next, the proposed learning scheme is examined on a four-player numerical system. Finally, the learning-based controller is tested on a single-vehicle system under different tracking commands, and it is then extended to multivehicle systems, where a novel game-in-game structure realizes cooperative optimization.
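The decentralized static and dynamic triggering rules summarized above can be sketched as follows. This is a minimal illustration, not the paper's method: the system matrices, feedback gains, thresholds `sigma`, and auxiliary-variable parameters `lam` and `theta` are all assumed for the example. Each player i keeps its own sampled state; the static rule triggers when the sampling error exceeds a state-dependent threshold, while the dynamic rule filters that same static quantity through an auxiliary variable before triggering.

```python
import numpy as np

# Hypothetical 2-player linear system (all numbers illustrative, not from the paper).
A = np.array([[0.0, 1.0], [-1.0, -2.0]])
B = [np.array([[0.0], [1.0]]), np.array([[0.0], [0.5]])]   # per-player input matrices
K = [np.array([[1.0, 1.0]]), np.array([[0.5, 1.0]])]       # per-player feedback gains
sigma = [0.05, 0.10]   # decentralized thresholds: each player picks its own
dt, T = 1e-3, 10.0

def simulate(dynamic=False, lam=5.0, theta=1.0):
    x = np.array([1.0, -1.0])
    xhat = [x.copy(), x.copy()]   # each player's locally sampled state
    eta = [0.1, 0.1]              # auxiliary variables for dynamic triggering
    events = [0, 0]
    for _ in range(int(T / dt)):
        # Each player acts on its own (possibly stale) sampled state.
        u = sum(-B[i] @ (K[i] @ xhat[i]) for i in range(2))
        x = x + dt * (A @ x + u.ravel())          # Euler step
        for i in range(2):
            # Static margin: sigma_i*||x||^2 - ||x - xhat_i||^2 (negative => static event).
            gap = sigma[i] * (x @ x) - np.sum((x - xhat[i]) ** 2)
            if dynamic:
                # Dynamic rule: trigger only once the auxiliary variable is exhausted.
                trig = eta[i] + theta * gap < 0
                eta[i] += dt * (-lam * eta[i] + gap)   # auxiliary dynamics built from static info
            else:
                trig = gap < 0
            if trig:
                xhat[i] = x.copy()   # player i resamples its local state
                events[i] += 1
    return events, x

static_events, xs = simulate(dynamic=False)
dynamic_events, xd = simulate(dynamic=True)
print("static events per player :", static_events)
print("dynamic events per player:", dynamic_events)
```

The dynamic rule typically fires fewer events than the static one, since the auxiliary variable lets the static margin go briefly negative before an update is forced, at the cost of simulating one extra scalar per player.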
