Abstract

Object detection methods based on multimodal fusion have recently gained widespread adoption in autonomous driving, as they are valuable for detecting objects in dynamic environments. Among the available sensors, millimetre-wave (mmWave) radar is commonly used as an effective complement to cameras because it is almost unaffected by harsh weather conditions. However, current approaches that fuse mmWave radar and camera data often overlook the correlation between the two modalities and thus fail to fully exploit their complementary features. To address this, we propose a temporal-enhanced radar and camera fusion network that explores the correlation between the two modalities and learns a comprehensive representation for object detection. In our model, a temporal fusion module fuses mmWave radar features from different moments, mitigating the point-object mismatch caused by object motion. Moreover, we propose a new correlation-based fusion strategy that uses a dedicated masked cross-attention to fuse mmWave radar and vision features more effectively. Finally, we design a gate feature pyramid network that selects shallow texture information based on deep semantic information to obtain more representative features. Experimental results on the nuScenes benchmark demonstrate the effectiveness of the proposed method.
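The abstract names two concrete mechanisms: a masked cross-attention that fuses radar and camera features, and a gate feature pyramid in which deep semantic features select shallow texture channels. Since the full text is not available here, the PyTorch sketch below shows one plausible reading of both ideas; the class names, tensor layouts, and gating formulation are our assumptions, not the paper's released code.

# Minimal PyTorch sketch, assuming flattened feature tokens and a simple
# sigmoid gate. All names here (MaskedCrossAttentionFusion, GatedLateral)
# are illustrative; the paper's actual architecture may differ.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MaskedCrossAttentionFusion(nn.Module):
    """Camera tokens attend to radar tokens; empty radar positions are masked out."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, cam_feat, radar_feat, radar_valid):
        # cam_feat:    (B, N_cam, C) flattened camera features
        # radar_feat:  (B, N_rad, C) flattened radar features
        # radar_valid: (B, N_rad) bool, True where a radar return exists
        fused, _ = self.attn(
            query=cam_feat,
            key=radar_feat,
            value=radar_feat,
            key_padding_mask=~radar_valid,  # True marks positions to ignore
        )
        # Residual connection keeps the camera stream intact where radar is sparse.
        return self.norm(cam_feat + fused)


class GatedLateral(nn.Module):
    """One level of a gate FPN: deep semantics gate shallow texture channels."""

    def __init__(self, dim: int = 256):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(dim, dim, kernel_size=1), nn.Sigmoid())

    def forward(self, shallow, deep):
        # shallow: (B, C, H, W) high-resolution texture features
        # deep:    (B, C, H/2, W/2) low-resolution semantic features
        deep_up = F.interpolate(deep, size=shallow.shape[-2:], mode="nearest")
        # Per-channel gate from deep semantics decides which texture to pass up.
        return shallow * self.gate(deep_up) + deep_up


if __name__ == "__main__":
    fusion = MaskedCrossAttentionFusion()
    cam = torch.randn(2, 100, 256)
    radar = torch.randn(2, 40, 256)
    valid = torch.rand(2, 40) > 0.5
    print(fusion(cam, radar, valid).shape)  # torch.Size([2, 100, 256])

Masking invalid radar positions in the attention, rather than zero-filling them, keeps sparse radar returns from diluting the attention weights; the residual path is one common way to ensure the fusion degrades gracefully to camera-only features.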
