Abstract

In motion analysis, the observation, decomposition and recording of motion are usually performed manually. This approach involves a heavy workload and low efficiency. To address this problem, this paper proposes a novel method, based on machine vision, to automatically segment and recognize continuous human motion in mechanical assembly operations. First, content-based dynamic key frame extraction is used to extract key frames from the video stream, which automatically segments the actions. Next, SIFT feature points are extracted from the regions of interest (ROIs), and from these the characteristic vector of each key frame is derived. This feature vector not only represents the characteristics of the motion, but also describes the connection between the motion and its environment. Finally, a classifier based on a support vector machine (SVM) classifies the feature vectors, and the type of therblig is identified from the classification results. The approach enables robust therblig recognition in challenging situations (such as changing light intensity and dynamic backgrounds) and allows automatic segmentation of motion sequences. Experimental results demonstrate that the approach achieves a recognition rate of 96.00 % on sample videos captured on the assembly line.
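
To make the final classification step concrete, the snippet below is a minimal sketch of an SVM that maps key-frame feature vectors to therblig classes, assuming the feature vectors have already been extracted. The data shapes, the four-therblig label set and the RBF kernel settings are illustrative assumptions rather than details taken from the paper; scikit-learn and NumPy are used for brevity.

```python
# Minimal sketch of the classification step: an SVM over key-frame feature vectors.
# All data below is synthetic and only stands in for the real extracted features.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical data: 200 key frames, each described by an 8-dimensional
# displacement-based feature vector, labelled with one of four therbligs.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))
y = rng.integers(0, 4, size=200)     # e.g. 0=reach, 1=grasp, 2=move, 3=assemble

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", C=10.0, gamma="scale")   # RBF kernel is an assumption
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("recognition rate:", accuracy_score(y_test, pred))
```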

Highlights

  • Gilbreth (1917) said that the world’s largest waste is the waste of motion

  • The feature points of the regions of interest (ROIs) in key frames were obtained based on the scale-invariant feature transform (SIFT) feature points of sample images, and the feature vectors of key frames were acquired by calculating the displacement vectors between feature point sets (see the sketch after this list)

  • (1) The relationship between motion and objects was established by using the displacement vectors between SIFT points in different ROIs; (2) to improve the timeliness of the proposed method, key frame extraction was applied to reduce the number of images to be processed, and image processing techniques were applied to reduce the number of pixels to be processed; (3) the proposed motion recognition algorithm classifies the feature vectors with a support vector machine (SVM); (4) the method accomplishes motion segmentation, recognition and recording automatically, which reduces the workload of motion analysts and improves the efficiency of motion analysis
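
The displacement-based feature construction mentioned in the highlights can be sketched as follows. The snippet assumes rectangular ROI boxes for the hand and the two workpieces are already known, and it aggregates each ROI's SIFT keypoints into a centroid before taking displacements; this centroid-based aggregation and the example ROI coordinates are simplifying assumptions, not the authors' exact rule. OpenCV and NumPy are used.

```python
# Illustrative sketch: build a key-frame feature from SIFT points in the hand ROI
# and two workpiece ROIs by concatenating hand-to-workpiece displacement vectors.
import cv2
import numpy as np

def roi_keypoint_centroid(gray, roi):
    """Detect SIFT keypoints inside a rectangular ROI and return their centroid
    in full-image coordinates (falls back to the ROI centre if nothing is found)."""
    x, y, w, h = roi
    sift = cv2.SIFT_create()
    keypoints = sift.detect(gray[y:y + h, x:x + w], None)
    if not keypoints:
        return np.array([x + w / 2.0, y + h / 2.0])
    pts = np.array([kp.pt for kp in keypoints]) + np.array([x, y])
    return pts.mean(axis=0)

def keyframe_feature(gray, hand_roi, part_roi_1, part_roi_2):
    """Concatenate the two hand-to-workpiece displacement vectors into one feature."""
    hand = roi_keypoint_centroid(gray, hand_roi)
    d1 = roi_keypoint_centroid(gray, part_roi_1) - hand
    d2 = roi_keypoint_centroid(gray, part_roi_2) - hand
    return np.concatenate([d1, d2])      # 4-dimensional feature in this sketch

# Usage with hypothetical ROI boxes (x, y, width, height):
# gray = cv2.cvtColor(cv2.imread("keyframe.png"), cv2.COLOR_BGR2GRAY)
# feature = keyframe_feature(gray, (50, 60, 80, 80), (200, 60, 60, 60), (300, 60, 60, 60))
```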


Summary

Background

Gilbreth (1917) said that the world's largest waste is the waste of motion. Through motion analysis we should identify the problems in workers' actions and improve their movements, thereby eliminating wasted time, alleviating fatigue and improving work efficiency (Salvendy 2001; Florea et al. 2003). This paper presents an automated segmentation and recognition method that accomplishes the observation, decomposition and recording of human motion using SVM and machine vision in an assembly environment. To segment motion in continuous video, the proposed method uses content-based dynamic key frame extraction. The novel content-based dynamic key frame extraction algorithm is implemented as follows: assume that the video stream contains S images, and that every frame has P1 × P2 pixels.

Proposed motion recognition algorithm

First, multiple displacement vector sets are obtained by calculating the displacements between feature points of different ROIs; these displacement vector sets form the feature vectors of the key frame.

Get feature vectors from key frame

The ROIs of an image consist of the human hands and two workpieces in the mechanical product assembly process.
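
As an illustration of the content-based dynamic key frame extraction step described above, the sketch below walks through the S frames of a video (each P1 × P2 pixels) and keeps a frame whenever its content differs sufficiently from the last retained key frame. The histogram-correlation measure and the threshold value are illustrative assumptions; this summary does not spell out the paper's exact content metric.

```python
# Minimal sketch of content-based key frame extraction over a video stream.
import cv2

def extract_key_frames(video_path, threshold=0.3):
    cap = cv2.VideoCapture(video_path)
    key_frames, last_hist = [], None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        hist = cv2.calcHist([gray], [0], None, [64], [0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        # Keep the first frame, then any frame whose histogram correlation with
        # the previous key frame drops below (1 - threshold).
        if last_hist is None or cv2.compareHist(last_hist, hist, cv2.HISTCMP_CORREL) < 1 - threshold:
            key_frames.append(frame)
            last_hist = hist
    cap.release()
    return key_frames

# key_frames = extract_key_frames("assembly_line_sample.avi")
```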

