Abstract

Radar-based gesture recognition can play a vital role in autonomous vehicles' interaction with vulnerable road users (VRUs). However, in automotive scenarios the same gesture produces strongly differing radar responses owing to a wide range of variations in, e.g., position, orientation, and ego-motion. Since capturing all such variations in a measured dataset is laborious, gesture simulations alleviate the measurement effort and increase robustness against edge and corner cases. Hence, this article presents a flexible geometric human target model that allows the direct introduction of a wide range of variations while also facilitating the handling of shadowing effects and multiradar constellations. Using the proposed simulation model, a dataset recorded with a radar sensor network consisting of three chirp sequence (CS) radars is resimulated based on motion data simultaneously captured with a stereo video system. When the measured training data are completely replaced by the simulated data, a convolutional neural network (CNN) classifier still achieves 80.4% cross-validation accuracy on a challenging gesture set, compared with 89.4% for training on the measured data. Moreover, when trained on the simulated data, the classifier is shown to generalize successfully to new scenarios not observed in the measurements.
