Abstract

Cooperative perception is an effective way for connected autonomous vehicles to extend sensing range, improve detection precision, and thus enhance perception ability by combining their own sensing information with that of other vehicles. Existing cooperative perception schemes share only raw-, feature-, or object-level data, and therefore lack the flexibility to adapt to highly dynamic vehicular network conditions; this leads to either bandwidth saturation or bandwidth underutilization, degrading detection precision in the long run. In this article, we propose ML-Cooper, a multilevel cooperative perception framework, to fully utilize the available bandwidth and hence improve detection precision. The key idea of ML-Cooper is to divide each frame of the sender vehicle's sensing data into three parts, which are transmitted as raw data, feature data, and object data, respectively, and fused at the receiver vehicle. We also develop a soft actor-critic (SAC) deep reinforcement learning algorithm that adaptively adjusts the proportions of the three parts according to the channel state information of the vehicle-to-vehicle (V2V) link. Experimental results on the KITTI dataset and on data we collected with two real vehicles show that ML-Cooper achieves higher average detection precision than existing single-level cooperative perception schemes.
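To make the partitioning idea concrete, the short Python sketch below shows one plausible way an SAC action could be mapped onto raw/feature/object proportions and checked against the V2V channel budget. All names (partition_frame, channel_rate_mbps), the per-region payload sizes, and the softmax mapping are illustrative assumptions for exposition, not the authors' implementation.

    # Hypothetical sketch of ML-Cooper-style frame partitioning. Function and
    # variable names, payload sizes, and the action-to-proportion mapping are
    # assumptions made for illustration, not the paper's actual design.
    import numpy as np

    # Rough per-region payload sizes (MB), sized so that a full raw point-cloud
    # frame of ~2 MB splits into 64 regions; feature- and object-level data are
    # progressively lighter. These numbers are illustrative only.
    MB_PER_REGION = {"raw": 0.03, "feature": 0.008, "object": 0.001}

    def action_to_proportions(action: np.ndarray) -> np.ndarray:
        """Map an unconstrained 3-D SAC action onto the probability simplex
        via softmax, giving the raw/feature/object split of the frame."""
        z = np.exp(action - action.max())
        return z / z.sum()

    def partition_frame(num_regions: int, action: np.ndarray,
                        channel_rate_mbps: float, frame_interval_s: float = 0.1):
        """Split a frame's regions into raw/feature/object shares and compare
        the resulting payload against the current V2V channel budget."""
        props = action_to_proportions(action)
        counts = np.floor(props * num_regions).astype(int)
        counts[0] += num_regions - counts.sum()  # give rounding remainder to raw
        payload_mb = sum(c * MB_PER_REGION[k]
                         for c, k in zip(counts, ("raw", "feature", "object")))
        budget_mb = channel_rate_mbps / 8 * frame_interval_s  # Mbit/s -> MB/frame
        split = dict(zip(("raw", "feature", "object"), counts))
        return split, payload_mb, budget_mb

    if __name__ == "__main__":
        # With a relatively good channel, an action favoring the first component
        # yields a raw-heavy split whose payload stays within the frame budget.
        split, payload, budget = partition_frame(
            num_regions=64, action=np.array([1.2, 0.3, -0.5]),
            channel_rate_mbps=120.0)
        print(split, f"payload={payload:.2f} MB, budget={budget:.2f} MB")

In a full system, the SAC agent would be trained so that actions keeping the payload near (but under) the budget are rewarded, which is how the framework avoids both bandwidth saturation and underutilization.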
