Abstract

Cooperative Perception (CP) enables Connected and Autonomous Vehicles (CAVs) to exchange objects perceived by onboard sensors (e.g., radars, lidars, and cameras) with other CAVs via CP messages (CPMs) over Vehicle-to-Vehicle (V2V) communication technologies. However, the same objects in the driving environment may simultaneously appear in the line of sight of multiple CAVs, so much irrelevant and redundant information is exchanged in the V2V network. This overloads the communication channel and lowers CPM delivery rates, thereby decreasing CP awareness. To address this issue, we mathematically formulate CP information usefulness as a maximization problem in a multi-CAV environment and introduce a distributed multi-agent deep reinforcement learning approach based on the double deep Q-learning algorithm to solve it. This approach allows each CAV to learn an optimal CPM content selection policy that maximizes the usefulness of its CPMs to surrounding CAVs while reducing redundancy in the V2V network. Simulation results highlight that the proposal effectively mitigates object redundancy and improves network reliability, ensuring increased awareness at short and medium distances of less than 200 m compared to state-of-the-art approaches.
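The double deep Q-learning idea underlying the proposed approach can be illustrated with a minimal sketch. The setting below is hypothetical (the feature dimensions, action set, and linear Q-approximator are illustrative assumptions, not the paper's architecture): each CAV treats candidate CPM content selections as actions, and the double-DQN rule uses the online network to pick the greedy next action while the target network evaluates it.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setting: a CAV chooses among N_ACTIONS candidate CPM
# content selections given N_FEATURES state features (e.g., object
# distances, detection ages). A linear Q-approximator stands in for the
# deep network purely for brevity.
N_FEATURES = 8
N_ACTIONS = 4
GAMMA = 0.95   # discount factor
ALPHA = 0.01   # learning rate

# Online and target weights of the double deep Q-learner
W_online = rng.normal(scale=0.1, size=(N_ACTIONS, N_FEATURES))
W_target = W_online.copy()

def q_values(W, s):
    """Q(s, a) for all actions a under weights W."""
    return W @ s

def double_dqn_update(s, a, r, s_next):
    """One double-DQN step: the online net *selects* the greedy next
    action, the target net *evaluates* it, decoupling selection from
    evaluation to reduce overestimation bias."""
    global W_online
    a_star = int(np.argmax(q_values(W_online, s_next)))      # selection
    target = r + GAMMA * q_values(W_target, s_next)[a_star]  # evaluation
    td_error = target - q_values(W_online, s)[a]
    W_online[a] += ALPHA * td_error * s  # gradient step on chosen action
    return td_error
```

In the paper's distributed setting, each CAV would run such a learner locally; the reward would reflect how useful its transmitted CPM content is to surrounding CAVs, which is the quantity the maximization problem formalizes.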
