Abstract
Radar-based gesture recognition can play a vital role in autonomous vehicles' interaction with vulnerable road users (VRUs). However, in automotive scenarios the same gesture produces widely differing radar responses owing to variations such as position, orientation, or ego-motion. Since capturing all such variations in a measured dataset is laborious, gesture simulations alleviate the measurement effort and increase robustness against edge and corner cases. Hence, this article presents a flexible geometric human target model that allows the direct introduction of a wide range of modifications while facilitating the handling of shadowing effects and multiradar constellations. Using the proposed simulation model, a dataset recorded with a radar sensor network consisting of three chirp sequence (CS) radars is resimulated based on motion data simultaneously captured with a stereo video system. When the measured training data are completely substituted by the simulated data, a convolutional neural network (CNN) classifier still achieves 80.4% cross-validation accuracy on a challenging gesture set, compared with 89.4% for training on the measured data. Moreover, using the simulated data, the classifier is shown to generalize successfully to new scenarios not observed in measurements.
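The chirp sequence (CS) radars mentioned in the abstract are typically processed into range-Doppler maps via a 2-D FFT over the fast-time (per-chirp) and slow-time (chirp-to-chirp) axes, which then serve as classifier input. The following minimal sketch illustrates this standard processing on a synthetic point target; all parameter values (chirp count, sample rate, target bins) are hypothetical and not taken from the paper.

```python
import numpy as np

# Hypothetical CS radar frame: 64 chirps x 128 ADC samples (illustrative only)
n_chirps, n_samples = 64, 128
fs = 1e6  # assumed ADC sample rate in Hz

# Simulate one point target: the beat frequency sets the range bin,
# the chirp-to-chirp phase progression sets the Doppler bin.
f_beat = 10 * fs / n_samples  # beat frequency -> range bin 10
f_dopp = 5 / n_chirps         # normalized Doppler -> Doppler bin 5
t = np.arange(n_samples) / fs
cube = np.array([
    np.exp(2j * np.pi * (f_beat * t + f_dopp * k))
    for k in range(n_chirps)
])

# Range-Doppler map: FFT over fast time (range), then slow time (Doppler),
# with the Doppler axis shifted so zero velocity sits in the center.
rd = np.fft.fft(cube, axis=1)                          # range FFT
rd = np.fft.fftshift(np.fft.fft(rd, axis=0), axes=0)   # Doppler FFT
mag = np.abs(rd)

dopp_bin, range_bin = np.unravel_index(mag.argmax(), mag.shape)
print(range_bin, dopp_bin)  # target appears at range bin 10, shifted Doppler bin 37
```

A sequence of such range-Doppler maps (or the derived micro-Doppler spectrogram) is the kind of input a CNN gesture classifier, as evaluated in the article, would consume.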
Published in: IEEE Transactions on Microwave Theory and Techniques