Abstract
Modern sensor networks—those used for autonomous driving, security systems, human motion tracking, or smart city/smart factory applications—are shifting toward a more centralized data-processing approach to enable efficient multimodal sensor fusion for optimal environment perception in complex dynamic situations. Alongside lidars and cameras, radars are typical sensors for these applications, but they generate huge amounts of data, which cannot be transmitted or stored effectively in current setups. Consequently, manufacturers usually have to process the data "on sensor." As a result, only a few extracted features, such as point clouds or object lists, are transmitted to a central processing unit, which usually causes a significant loss of information. With this approach, advanced processing—such as resolution enhancement by coherent combination of sensors or ghost-target removal with advanced algorithms—is hardly possible. To overcome this, we suggest an alternative method using signal-based compression with defined losses. The following topology is proposed: the sensors encode raw data without prior radar-specific processing, and after transmission, a central unit decodes and processes the radar data, thus benefiting from its more powerful heterogeneous processing system. We analyze lossless compression algorithms with rate savings of about 30% to 65%, but the focus is on lossy compression algorithms that achieve higher compression ratios by allowing negligible errors. It is shown that state-of-the-art multimedia compression algorithms can obtain rate savings of 99%, and radar-specific algorithms can add a 50-fold gain on top, reaching 99.98%. To assess the distortions of compressed data, we then present different radar-specific evaluation metrics.
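To make the rate-saving figures concrete, the following is a minimal sketch of how such savings could be measured on raw radar samples. The synthetic signal, the zlib codec, and the 6-bit requantization are illustrative assumptions, not the compression schemes evaluated in the paper.

```python
# Minimal sketch: compare lossless vs. coarsely quantized ("lossy") compression
# of synthetic raw radar ADC samples. Codec, signal parameters, and bit depths
# are illustrative assumptions, not the paper's actual setup.
import zlib
import numpy as np

rng = np.random.default_rng(0)

# Synthetic raw data: one chirp of 1024 16-bit ADC samples containing two
# target echoes (sinusoids) plus thermal noise.
n = np.arange(1024)
signal = 2000 * np.cos(2 * np.pi * 0.07 * n) + 500 * np.cos(2 * np.pi * 0.23 * n)
samples = (signal + rng.normal(0, 50, n.size)).astype(np.int16)
raw_bytes = samples.tobytes()

# Lossless: entropy-code the samples directly.
lossless = zlib.compress(raw_bytes, 9)

# Lossy: drop the 6 least significant bits before entropy coding
# (a crude stand-in for a rate-distortion-optimized quantizer).
lossy = zlib.compress((samples >> 6).tobytes(), 9)

for name, blob in [("lossless", lossless), ("lossy (6-bit requantized)", lossy)]:
    saving = 1 - len(blob) / len(raw_bytes)
    print(f"{name}: rate saving = {saving:.1%}")
```

Rate saving here is defined as one minus the ratio of compressed to original size, matching the 30%–65% (lossless) and 99%+ (lossy) figures quoted above.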
Highlights
Radar-based monitoring of the environment has gained increasing attention over the last decade because these systems provide a very robust way of detecting and mapping three-dimensional (3D) objects.
Each radar sensor processes the received data and sends features via the "slow" vehicular network to a central processing unit, which fuses the data from different radars and other sensors [6].
Radar signal model: we demonstrate that radar data may be interpreted as multidimensional sinusoidal signals.
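As a hedged illustration of this signal model: for an FMCW radar, the beat signal sampled over fast time and chirp index is well approximated by a sum of complex exponentials whose frequencies encode range and radial velocity. The target amplitudes and normalized frequencies below are invented for illustration, not taken from the paper.

```python
# Sketch of the multidimensional sinusoidal model: an FMCW beat-signal
# "data cube" (fast time x slow time) is a sum of 2D complex exponentials.
import numpy as np

N, M = 256, 64                      # fast-time samples, chirps
n = np.arange(N)[:, None]           # fast-time index (range frequency)
m = np.arange(M)[None, :]           # slow-time index (Doppler frequency)

# Each target k contributes A_k * exp(j*2*pi*(f_range_k*n + f_doppler_k*m)).
targets = [(1.0, 0.12, 0.05), (0.3, 0.31, -0.10)]  # (amplitude, f_range, f_doppler)
cube = sum(a * np.exp(2j * np.pi * (fr * n + fd * m)) for a, fr, fd in targets)
cube += 0.01 * (np.random.randn(N, M) + 1j * np.random.randn(N, M))  # noise

# A 2D FFT (range-Doppler map) concentrates each sinusoid into a single peak,
# which is what makes the raw data highly compressible.
rd_map = np.fft.fftshift(np.fft.fft2(cube), axes=1)
peak = np.unravel_index(np.abs(rd_map).argmax(), rd_map.shape)
print("strongest peak at (range bin, Doppler bin):", peak)
```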
Summary
Radar-based monitoring of the environment has gained increasing attention over the last decade because these systems provide a very robust way of detecting and mapping three-dimensional (3D) objects. Each radar sensor processes the received data and sends features via the "slow" vehicular network to a central processing unit, which fuses the data from different radars and other sensors [6]. This fusion enables a precise and enhanced understanding of the vehicle's environment and is a key factor in the development of autonomous driving. The designated processing unit (on- or off-vehicle) decompresses this data, applies the advanced or combined processing, and fuses the result with the data of other sensors to generate the output features. Through this process, the vehicular processing system has instant access to data from all radar modules. This processing unit has fewer restrictions in terms of power consumption or computational power than the radar sensors at their exposed installation position. In this step, the compressed data can simultaneously be stored. The last section confirms the theoretical results by analyzing real-world measurements.
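The radar-specific evaluation metrics referenced in the abstract are not spelled out in this summary. As an assumed example of the kind of metric one might apply, the sketch below compares the range spectra of original and requantized data and reports the resulting loss in peak signal-to-noise ratio; the signal parameters and the 8-bit requantization stand-in are hypothetical.

```python
# Illustrative distortion metric for compressed radar data: loss in peak
# signal-to-noise ratio of the range spectrum after coarse requantization.
# This is an assumed example metric, not one taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
n = np.arange(2048)
clean = 1000 * np.cos(2 * np.pi * 0.1 * n)          # single target echo
samples = (clean + rng.normal(0, 20, n.size)).astype(np.int16)

def peak_snr_db(x):
    """Peak power over median noise-floor power of the range spectrum."""
    spec = np.abs(np.fft.rfft(x.astype(np.float64))) ** 2
    return 10 * np.log10(spec.max() / np.median(spec))

# "Lossy compression" stand-in: keep only the 8 most significant bits.
degraded = (samples >> 8) << 8

print(f"peak SNR original:   {peak_snr_db(samples):6.1f} dB")
print(f"peak SNR compressed: {peak_snr_db(degraded):6.1f} dB")
print(f"SNR loss:            {peak_snr_db(samples) - peak_snr_db(degraded):6.1f} dB")
```

A metric of this kind evaluates distortion where it matters for radar, in the processed spectrum where targets are detected, rather than as raw sample-wise error.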