Abstract

With the rise of intelligent big data, a variety of gesture-based recognition systems have been developed to enable intuitive interaction using machine learning algorithms. Achieving high gesture recognition accuracy is crucial, and current systems learn extensive gesture sets in advance to improve their recognition accuracy. However, accurately recognizing gestures relies on identifying and editing the numerous gestures collected from the actual end users of the system, and this final end-user learning component remains troublesome for most existing gesture recognition systems. This paper proposes a method that facilitates end-user gesture learning and recognition by improving the editing process applied to the intelligent big data collected from end-user gestures. The proposed method enables the recognition of more complex and precise gestures by merging gestures collected from multiple sensors and processing them as a single gesture. To evaluate the proposed method, it was applied to a shadow puppet performance that interacted with on-screen animations. An average gesture recognition rate of 90% was achieved in the experimental evaluation, demonstrating the efficacy and intuitiveness of the proposed method for editing visualized learning gestures.
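To make the merging step concrete, the sketch below shows one way gestures captured by multiple sensors could be combined frame-wise into a single gesture sample before classification. This is a minimal illustration, not the paper's implementation: the sensor names, the alignment and concatenation strategy, and the k-NN classifier are all assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def merge_sensor_streams(streams):
    """Merge time-aligned streams from multiple sensors into one gesture.

    streams: list of (frames, features_i) arrays, one per sensor,
             assumed to be resampled to a comparable number of frames.
    """
    frames = min(s.shape[0] for s in streams)
    aligned = [s[:frames] for s in streams]          # crude temporal alignment
    return np.concatenate(aligned, axis=1).ravel()   # one vector per gesture

# Toy example: two hypothetical sensors observing the same 30-frame gesture.
rng = np.random.default_rng(0)
depth_cam = rng.normal(size=(30, 6))   # e.g., joint positions from a depth camera
imu = rng.normal(size=(30, 3))         # e.g., accelerometer readings

sample = merge_sensor_streams([depth_cam, imu])

# Any off-the-shelf classifier can then treat the merged data as a single
# gesture; k-NN is used here purely as a placeholder learner.
X = np.stack([sample, -sample])
y = np.array(["wave", "swipe"])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(clf.predict([sample]))           # -> ['wave']
```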

Highlights

  • The development of technologies based on intelligent big data, such as virtual reality and augmented reality, has contributed to increased research interest in natural user interfaces (NUI) and natural user experience (NUX)

  • The successful implementation of applications based on gesture recognition requires high gesture recognition accuracy, which in turn requires defining end-user gestures and learning them from a large body of gestures collected in advance

  • We develop a method wherein various gestures are precisely learned by collecting them from multiple sensors


Summary

Introduction

The development of technologies based on intelligent big data, such as virtual reality and augmented reality, has contributed to increased research interest in natural user interfaces (NUI) and natural user experience (NUX). We develop a generic gesture recognition and learning framework that utilizes heterogeneous sensors and enables end users to modify their gestures based on intelligent big data. It is difficult to accurately identify gestures without heterogeneous sensors because of the limited range and recognition ability of each individual sensor, so a suitable approach for combining heterogeneous sensors to increase gesture recognition accuracy must be identified. If the obtained results are not satisfactory, the end user must be able to re-edit the learning data. The method comprises four end-user interfaces (UIs): body selection, gesture learning, gesture editing, and gesture recording.
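As a rough illustration of the workflow these UIs imply (record, learn, evaluate, and re-edit until the result is satisfactory), the following Python sketch shows the control loop. The function names, stub callbacks, and the 90% threshold are hypothetical, chosen only to mirror the description above; they are not the paper's published API.

```python
def learn_with_reediting(dataset, train, evaluate, edit, threshold=0.90):
    """Train on end-user gestures; if accuracy is unsatisfactory,
    hand the dataset back to the gesture-editing UI and retry."""
    model = train(dataset)
    while evaluate(model) < threshold:
        dataset = edit(dataset)   # end user re-edits the learning data
        model = train(dataset)
    return model

# Stub callbacks standing in for the real learner and editing UI.
def train(dataset):
    return {"trained_on": len(dataset)}

def evaluate(model):
    return 0.95 if model["trained_on"] >= 3 else 0.5   # toy accuracy curve

def edit(dataset):
    # Pretend the user adds a corrected sample via the gesture-editing UI.
    return dataset + [{"gesture": "wave", "sensors": ["depth", "imu"]}]

model = learn_with_reediting([{"gesture": "wave", "sensors": ["depth"]}],
                             train, evaluate, edit)
print(model)   # {'trained_on': 3}
```

The loop terminates only when the evaluated accuracy clears the threshold, which mirrors the requirement that end users can keep re-editing the learning data until the results are satisfactory.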

Related Works
Generic Gesture Learning and Recognition Framework
Overview of the Proposed Generic Gesture Recognition and Learning Framework
Gesture Editing
Gesture Learning Stage
Generic Gesture Learning and Recognition Approach
Implementation of User Interface
Experiments
Performance Show
Results
Testing
Gesture Learning Stage Results
Gesture Recognition Stage Results
Findings
Conclusions