Abstract

Safe autonomous driving cannot be realized without robust environment perception. However, robustness can only be guaranteed if the system functions reliably in all weather and lighting conditions while handling corner cases. Perception has relied heavily on cameras for object detection. Because cameras operate at visible-light frequencies, a camera-only perception system faces a plethora of corner cases. In this paper, radar data is constructively fused with RGB images to improve perception performance. The radar point cloud is pre-processed by converting it from the bird's-eye-view perspective into the image coordinate system. Together with the RGB images from the camera, these projections are fed into our proposed fusion network, which extracts features from each sensor independently. These features are then fused to perform joint detection. Robustness under adverse conditions such as fog is validated using synthetically fogged images at different fog densities. A channel attention module is integrated into the fusion network, which helps prevent the drop in performance up to a fog density of 25. The network is trained and tested on the NuScenes dataset [1]. Our proposed fusion network outperforms other state-of-the-art radar-camera fusion networks by at least 8%.
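
The sketch below illustrates the general idea described in the abstract: two independent feature-extraction branches (camera and radar projected into the image plane), concatenation of the resulting feature maps, and a squeeze-and-excitation style channel attention module before a detection head. All module names, channel counts, and layer choices are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch (assumed, not the paper's implementation) of radar-camera
# feature fusion with channel attention, in PyTorch.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (assumed variant)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                              # squeeze spatial dims
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),                                         # per-channel weights in [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.fc(x)                                     # re-weight feature channels


class RadarCameraFusion(nn.Module):
    """Extract features per sensor, concatenate, and apply channel attention."""
    def __init__(self, cam_in: int = 3, radar_in: int = 2, feat: int = 64):
        super().__init__()
        # Stand-in feature extractors; a real network would use full backbones.
        self.cam_branch = nn.Sequential(
            nn.Conv2d(cam_in, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.radar_branch = nn.Sequential(
            nn.Conv2d(radar_in, feat, 3, padding=1), nn.ReLU(inplace=True))
        self.attention = ChannelAttention(2 * feat)
        # Toy head; a real detector would predict boxes and classes.
        self.head = nn.Conv2d(2 * feat, feat, 3, padding=1)

    def forward(self, image: torch.Tensor, radar: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.cam_branch(image), self.radar_branch(radar)], dim=1)
        return self.head(self.attention(fused))


# Example: an RGB image and radar channels projected into the image plane
# (e.g. depth and RCS channels; the channel layout is an assumption).
net = RadarCameraFusion()
rgb = torch.randn(1, 3, 224, 224)
radar_proj = torch.randn(1, 2, 224, 224)
out = net(rgb, radar_proj)
print(out.shape)  # torch.Size([1, 64, 224, 224])
```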
