Abstract

Augmented reality-assisted assembly training (ARAAT) is an effective and affordable technique for labor training in the automobile and electronics industries. In general, most ARAAT tasks are conducted by real-time hand operations. In this paper, we propose a dynamic gesture recognition and prediction algorithm that aims to evaluate the standard and achievement of the hand operations for a given task in ARAAT. We consider that the given task can be decomposed into a series of hand operations, and furthermore each hand operation into several continuous actions. Each action is then associated with a standard gesture based on the practical assembly task, so that the standard and achievement of the actions included in the operations can be identified and predicted from the sequences of gestures instead of from the performance throughout the whole task. Based on practical industrial assembly, we specified five typical tasks, three typical operations, and six standard actions. We used Zernike moments combined with histograms of oriented gradients (HOG) and linearly interpolated motion trajectories to represent the 2D static and 3D dynamic features of standard gestures, respectively, and chose a directional pulse-coupled neural network as the classifier to recognize the gestures. In addition, we defined an action unit to reduce the feature dimensionality and computational cost. During gesture recognition, we optimized the gesture boundaries iteratively by calculating the score probability density distribution to reduce interference from invalid gestures and improve precision. The proposed algorithm was evaluated on four datasets, and the experimental results show that it increases recognition accuracy and reduces computational cost.
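The abstract mentions representing 3D dynamic gesture features by linearly interpolated motion trajectories. As a minimal sketch of that idea (an illustrative reconstruction, not the authors' implementation), one can resample a variable-length 3D fingertip trajectory onto a fixed number of uniformly spaced points, which yields a fixed-dimensional dynamic feature regardless of how fast the gesture was performed:

```python
import numpy as np

def resample_trajectory(points, n_samples=32):
    """Linearly resample a 3D trajectory to a fixed number of points.

    points: (N, 3) array-like of 3D hand/fingertip positions.
    Returns an (n_samples, 3) array sampled uniformly along the path,
    making the feature length independent of gesture speed and duration.
    """
    points = np.asarray(points, dtype=float)
    # Cumulative arc length along the trajectory serves as the
    # interpolation parameter, so resampling is speed-invariant.
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    t = np.concatenate([[0.0], np.cumsum(seg)])
    t_new = np.linspace(0.0, t[-1], n_samples)
    # Interpolate each coordinate independently along the arc length.
    return np.column_stack(
        [np.interp(t_new, t, points[:, k]) for k in range(3)]
    )

# Example: a straight-line gesture captured at 3 uneven samples,
# resampled to 5 evenly spaced points along the path.
traj = [[0, 0, 0], [1, 0, 0], [3, 0, 0]]
print(resample_trajectory(traj, n_samples=5))
```

The fixed-length output can then be flattened and concatenated with the 2D static features (Zernike moments and HOG) before classification.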

Highlights

  • Industrial assembly is performed by grouping individual parts and fitting them together to create finished products with high added value

  • We developed an augmented reality-assisted assembly training (ARAAT) system, transformed the complicated ARAAT task evaluation into a gesture recognition problem, and proposed a gesture recognition and prediction algorithm

  • We built a model of complex ARAAT tasks in which a task is decomposed into a series of hand operations and each hand operation is further decomposed into several continuous actions, each corresponding to a standard gesture



Introduction

Industrial assembly is performed by grouping individual parts and fitting them together to create finished products with high added value. Precise gesture recognition plays an important role in bare-hand ARAAT and in evaluating the standard and achievement of training tasks. However, current studies on bare-hand ARAAT mostly focus on single-gesture recognition rather than on evaluating the whole assembly task. Based on the related ARAAT studies mentioned so far, ARAAT has the following areas for improvement: (1) lack of evaluation of the whole complex assembly task, (2) a limited set of interaction gestures, (3) low recognition accuracy, and (4) unnatural interaction experiences resulting from long response times. To evaluate the standard and achievement of hand operations in ARAAT tasks, this paper proposes a gesture recognition algorithm that improves recognition accuracy and efficiency; after recognition with the optimal gesture boundaries, the gesture results are used to evaluate the standard and achievement of the hand operations. The remainder of this paper is organized as follows: Section 2 describes the modeling for ARAAT; Section 3 presents the action categories, action recognition, and operation prediction; Section 4 details the experimental results, compared with other algorithms on a homemade dataset, and the experimental analysis; Section 5 provides a short conclusion and suggestions for future research.
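The task model described above decomposes a task into ordered hand operations, and each operation into continuous actions that map to standard gestures. A minimal sketch of such a decomposition (with hypothetical task, operation, and gesture names chosen for illustration, not taken from the paper) shows how evaluating a task reduces to matching a sequence of recognized gestures:

```python
from dataclasses import dataclass, field

@dataclass
class Action:
    name: str      # e.g. a continuous action such as "grasp"
    gesture: str   # label of the standard gesture it maps to

@dataclass
class Operation:
    name: str
    actions: list = field(default_factory=list)  # ordered continuous actions

@dataclass
class Task:
    name: str
    operations: list = field(default_factory=list)  # ordered hand operations

def gesture_sequence(task):
    """Flatten a task into the ordered gesture labels to be recognized,
    so task evaluation reduces to matching a recognized gesture sequence."""
    return [a.gesture for op in task.operations for a in op.actions]

# Hypothetical example: a screw-fastening task.
task = Task("fasten_screw", [
    Operation("pick_up", [Action("grasp", "G_grasp"), Action("lift", "G_lift")]),
    Operation("tighten", [Action("rotate", "G_rotate")]),
])
print(gesture_sequence(task))  # ['G_grasp', 'G_lift', 'G_rotate']
```

Under this model, the standard and achievement of each operation can be scored by comparing the recognized gesture sequence against the expected one, rather than judging performance over the whole task at once.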

Modeling for Augmented Reality Assisted Assembly Training
Dynamic Gesture Recognition in Augmented Reality Assisted Assembly Training
Action Categories
Feature Extraction
Gesture Classification
Boundary Segmentation
Action and Operation Prediction
Experimental Design and Datasets
Method
Homemade Datasets
Result
Findings
Result of Operation Recognition and Prediction
Conclusions

