Abstract
Human gesture recognition is a complex visual recognition task in which motion over time distinguishes the type of action. Automatic systems tackle this problem with complex machine learning architectures and training datasets. In recent years, the success of robust deep learning techniques has gone hand in hand with the availability of a large number of such datasets. This paper presents SL-Animals-DVS, an event-based action dataset captured by a Dynamic Vision Sensor (DVS). The DVS records humans performing sign language gestures of various animals as a continuous spike flow with very low latency, which is especially suited to sign language gestures, as these are usually performed at very high speed. We also benchmark the recognition performance on this data using two state-of-the-art Spiking Neural Network (SNN) recognition systems. SNNs are naturally suited to exploiting the temporal information provided by the DVS, where the information is encoded in the spike times. The dataset contains about 1100 samples of 58 subjects performing 19 sign language gestures in isolation in different scenarios, providing a challenging evaluation platform for this emerging technology.
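As a minimal illustration of how a DVS event stream of the kind described above could be prepared for an SNN, the Python sketch below bins events into a fixed number of time frames, one channel per polarity. It assumes an event format with fields t (timestamp), x, y, and p (polarity) and a 128x128 sensor resolution; neither the field names nor the resolution are specified by the abstract, so treat this as an illustrative example rather than the paper's actual pipeline.

import numpy as np

def events_to_spike_tensor(events, num_bins=20, height=128, width=128):
    # Bin a DVS event stream into a dense spike tensor of shape
    # (num_bins, 2, height, width), with one channel per event polarity.
    # 'events' is assumed to be a structured array with fields 't', 'x', 'y', 'p'.
    t = events['t'].astype(np.float64)
    # Normalize timestamps to [0, num_bins) and keep the last event in the final bin
    t_norm = (t - t.min()) / max(t.max() - t.min(), 1e-9) * num_bins
    bins = np.clip(t_norm.astype(int), 0, num_bins - 1)

    tensor = np.zeros((num_bins, 2, height, width), dtype=np.float32)
    # Accumulate event counts per (time bin, polarity, y, x) cell
    np.add.at(tensor,
              (bins,
               events['p'].astype(int),
               events['y'].astype(int),
               events['x'].astype(int)),
              1.0)
    return tensor

# Usage with synthetic events (replace with a real recording from the dataset)
rng = np.random.default_rng(0)
n = 10_000
events = np.zeros(n, dtype=[('t', 'u8'), ('x', 'u2'), ('y', 'u2'), ('p', 'u1')])
events['t'] = np.sort(rng.integers(0, 2_000_000, n))  # ~2 s of activity in microseconds
events['x'] = rng.integers(0, 128, n)
events['y'] = rng.integers(0, 128, n)
events['p'] = rng.integers(0, 2, n)

spikes = events_to_spike_tensor(events)
print(spikes.shape)  # (20, 2, 128, 128)

The time-binned tensor can then be fed frame by frame to a spiking network; finer binning preserves more of the spike-timing information that the abstract highlights, at the cost of more simulation steps.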