Abstract

Radar sensors are considered very robust under harsh weather and poor lighting conditions. Largely owing to this reputation, they have found broad application in driver assistance and highly automated driving systems. However, radar sensors have considerably lower precision than cameras. Low sensor precision causes ambiguity in the human interpretation of the measurement data and makes the data labeling process difficult and expensive. On the other hand, without a large amount of high-quality labeled training data, it is difficult, if not impossible, to ensure that supervised machine learning models can predict, classify, or otherwise analyze the phenomenon of interest with the required accuracy. This paper presents a method for fusing radar sensor measurements with camera images. The proposed fully unsupervised machine learning algorithm converts the radar sensor data into artificial, camera-like environmental images. Through such data fusion, the algorithm produces more consistent, accurate, and useful information than that provided by the radar or the camera alone. The core contribution is a novel Conditional Multi-Generator Generative Adversarial Network (CMGGAN) that, conditioned on the radar sensor measurements, produces visually appealing images that qualitatively and quantitatively contain all environment features detected by the radar sensor.
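To make the idea of a radar-conditioned, multi-generator GAN concrete, the following is a minimal PyTorch sketch. It is an illustrative assumption, not the authors' published architecture: the network names (Generator, Discriminator, CMGGAN), the number of generators, the radar feature dimensionality, and the averaging rule used to combine generator outputs are all hypothetical placeholders. It only shows the general pattern of several generators sharing a radar conditioning vector and producing a camera-like image.

```python
# Hypothetical sketch of a conditional multi-generator GAN.
# All sizes and the output-combination rule are assumptions for illustration.
import torch
import torch.nn as nn

NUM_GENERATORS = 3        # assumed number of parallel generators
RADAR_DIM = 64            # assumed size of a flattened radar measurement vector
NOISE_DIM = 100
IMG_CHANNELS = 3
IMG_SIZE = 64


class Generator(nn.Module):
    """One generator branch: maps (noise, radar condition) to an image."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + RADAR_DIM, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 512),
            nn.ReLU(inplace=True),
            nn.Linear(512, IMG_CHANNELS * IMG_SIZE * IMG_SIZE),
            nn.Tanh(),
        )

    def forward(self, z, radar):
        x = torch.cat([z, radar], dim=1)
        img = self.net(x)
        return img.view(-1, IMG_CHANNELS, IMG_SIZE, IMG_SIZE)


class Discriminator(nn.Module):
    """Judges whether an image is a real camera frame, given the radar condition."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(IMG_CHANNELS * IMG_SIZE * IMG_SIZE + RADAR_DIM, 512),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(512, 256),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Linear(256, 1),
        )

    def forward(self, img, radar):
        x = torch.cat([img.flatten(1), radar], dim=1)
        return self.net(x)


class CMGGAN(nn.Module):
    """Several generators share the same radar condition; their outputs are
    averaged into one camera-like image (the combination rule is an assumption)."""

    def __init__(self):
        super().__init__()
        self.generators = nn.ModuleList(Generator() for _ in range(NUM_GENERATORS))

    def forward(self, z, radar):
        imgs = torch.stack([g(z, radar) for g in self.generators], dim=0)
        return imgs.mean(dim=0)


if __name__ == "__main__":
    batch = 4
    z = torch.randn(batch, NOISE_DIM)
    radar = torch.randn(batch, RADAR_DIM)  # stands in for real radar measurements
    model = CMGGAN()
    fake_images = model(z, radar)
    print(fake_images.shape)               # torch.Size([4, 3, 64, 64])
```

In practice the generators and discriminator would be trained adversarially on paired radar and camera data, with the radar measurements supplied as the conditioning input to both networks; the fully connected layers above would typically be replaced by convolutional ones for realistic image sizes.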
