Abstract

To avoid driver distraction caused by traditional buttons or touch screens, gesture recognition technology has begun to be applied to the field of human-car interaction. An online gesture recognition system designed for vehicles must be powerful enough to satisfy the requirements of high classification accuracy, fast response time, and low graphics memory consumption. To address these challenges, we propose an online gesture recognition algorithm based on an RGB camera that identifies motion, hand, and gesture in sequence. We use the frame difference as a motion detection modality and apply a hand detection neural network to determine whether to activate the gesture classifier. In the gesture classifier, the frame difference is fused with the RGB image at the data level based on an Efficient Convolutional Network. We combined gesture recognition and a Heads-Up Display to create a simulated driving system that allows users to control auxiliary information through gestures, which is used for usability analysis and user evaluation. To find the gestures that best match the various interactive tasks, we use the entropy weight method to analyze the usability of the gestures in the JESTER dataset and derive the seven best gestures. The offline gesture classification accuracy on the JESTER dataset is 95.96%, and the online recognition algorithm runs on average at 306 fps when there is no motion and at 164 fps in the presence of a hand. According to the questionnaire results after the subjects used our system, more than 86.25% of the subjects expressed satisfaction with our gesture recognition system.
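The first stage of the pipeline described above gates the more expensive hand detector and gesture classifier on cheap frame differencing. The following is a minimal sketch of that gating idea, assuming a per-pixel mean absolute difference with a hand-picked threshold (the threshold value and function names here are illustrative assumptions, not taken from the paper):

```python
import numpy as np

def motion_detected(prev_frame, curr_frame, threshold=8.0):
    """Return True if the mean absolute pixel difference between two
    grayscale frames exceeds `threshold` (an assumed tuning parameter)."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean()) > threshold

# Synthetic example: a static frame vs. one with a bright patch
# simulating a hand entering the camera view.
still = np.zeros((120, 160), dtype=np.uint8)
moved = still.copy()
moved[40:80, 60:100] = 255

print(motion_detected(still, still))  # → False (no motion, skip detector)
print(motion_detected(still, moved))  # → True (wake the hand detector)
```

In a real deployment the hand detection network would only be invoked when this check fires, which is what allows the system to idle at a much higher frame rate when the scene is static.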
