Abstract

In future connected automated vehicle (CAV) networks, the joint optimization of communication, sensing, and computing resources is crucial for guaranteeing the safety of cooperative automated driving, and it is attracting increasing attention. However, existing works have not considered the low-latency requirement of raw perception data sharing under both wireless link capacity and computing efficiency constraints, which poses a serious threat to the safety of cooperative automated driving in CAV networks. In this article, a vehicle–road–base station cooperation architecture is designed, and a federated reinforcement learning (FRL)-based task offloading and resource allocation algorithm is proposed to reduce the task execution delay in the CAV network under different communication and computing constraints. The execution delay minimization problem is theoretically formulated and analyzed under three practical task offloading modes. To adapt to the dynamic topology of the CAV network, we design a deep reinforcement learning (DRL) algorithm that achieves the optimal task offloading and resource allocation. To further reduce the data transmission overhead of the centralized reinforcement learning algorithm, an FRL-enabled algorithm is proposed that minimizes the execution delay of task offloading and resource allocation across multiple CAVs. Both simulation and hardware testbed results verify that the proposed algorithms not only reduce the execution delay and communication overhead but also improve the system throughput.
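The abstract describes the FRL structure only at a high level: each CAV trains a local policy on its own observations, and only model parameters, rather than raw perception data, are exchanged with an aggregator, which is what reduces the transmission overhead of a fully centralized learner. As a rough illustration of that general pattern (not the authors' exact algorithm), a minimal sketch might look like the following; the state dimension, linear policy, toy reward, and all other names and parameters are illustrative assumptions.

```python
# Minimal sketch of the FRL pattern described in the abstract: local DRL-style
# updates on each CAV, followed by federated averaging of model parameters.
# Everything here (network shape, update rule, reward) is an assumed toy setup.

import numpy as np

STATE_DIM = 6      # e.g. task size, channel gain, local/edge CPU load (assumed)
NUM_ACTIONS = 3    # the three offloading modes: local, roadside unit, base station
NUM_CAVS = 4
LOCAL_STEPS = 20
LR = 0.01

def init_policy(rng):
    """A linear policy: logits = state @ W. A real DRL agent would use a deep net."""
    return rng.normal(scale=0.1, size=(STATE_DIM, NUM_ACTIONS))

def local_update(weights, rng):
    """One round of local training on a CAV (REINFORCE-style toy update)."""
    w = weights.copy()
    for _ in range(LOCAL_STEPS):
        state = rng.normal(size=STATE_DIM)       # locally sensed state (simulated)
        logits = state @ w
        probs = np.exp(logits - logits.max())
        probs /= probs.sum()
        action = rng.choice(NUM_ACTIONS, p=probs)
        reward = -rng.random()                   # negative execution delay (simulated)
        grad = np.outer(state, -probs)           # gradient of log pi(action|state) w.r.t. W
        grad[:, action] += state
        w += LR * reward * grad
    return w

def federated_average(local_weights):
    """FedAvg step: the aggregator averages parameters, never raw perception data."""
    return np.mean(local_weights, axis=0)

rng = np.random.default_rng(0)
global_w = init_policy(rng)
for round_idx in range(5):
    # Each CAV starts from the shared global policy and trains on local data only.
    locals_w = [local_update(global_w, rng) for _ in range(NUM_CAVS)]
    global_w = federated_average(locals_w)
    print(f"round {round_idx}: global weight norm = {np.linalg.norm(global_w):.3f}")
```

In this pattern, the per-round communication cost scales with the size of the policy parameters rather than with the volume of raw sensor data, which is the overhead-reduction argument the abstract makes for the FRL-enabled algorithm over a centralized one.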
