Abstract

Manipulative action recognition is one of the most important and challenging topics in the field of image processing. In this paper, three kinds of sensor modules are used to capture motion, force, and object information during manipulative actions, and two fusion methods are proposed; recognition accuracy is further improved by using the object as context. In the feature-level fusion method, significant features are selected first, and Hidden Markov Models (HMMs) are then built on these selected features to characterize the temporal sequences. In the decision-level fusion method, HMMs are built for each feature group and their decisions are then fused. On top of these two fusion methods, the object/action context is modeled with a Bayesian network. Assembly tasks are used for evaluation, and the experimental results show that the proposed approach is effective for manipulative action recognition: the recognition accuracies of the decision-level fusion method, the feature-level fusion method, and the Bayesian context model are 72%, 80%, and 90%, respectively.
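
The following is a minimal sketch of the decision-level fusion idea described above, assuming hmmlearn's GaussianHMM for the per-group sequence models. The action labels, feature groups, synthetic data, and object-conditioned prior table are illustrative assumptions, not values or code from the paper; per-group HMM log-likelihoods are summed and combined with a simple P(action | object) prior standing in for the Bayesian context model.

```python
# Sketch of decision-level fusion with an object/action context prior.
# All names and numbers below are hypothetical, for illustration only.
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(0)
ACTIONS = ["pick", "place", "screw"]          # assumed action labels
FEATURE_GROUPS = {"motion": 3, "force": 2}    # assumed feature dims per sensor module

def make_sequences(offset, dim, n_seq=20, length=30):
    """Generate toy sequences so the sketch runs end to end."""
    return [offset + rng.normal(size=(length, dim)) for _ in range(n_seq)]

# Train one HMM per (action, feature group); decision-level fusion keeps
# the groups separate until their likelihoods are combined.
models = {}
for a_idx, action in enumerate(ACTIONS):
    for group, dim in FEATURE_GROUPS.items():
        seqs = make_sequences(offset=a_idx, dim=dim)
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[(action, group)] = m

# Assumed conditional table P(action | detected object) as a stand-in
# for the Bayesian-network context model.
object_prior = {
    "screwdriver": np.array([0.2, 0.2, 0.6]),
    "block":       np.array([0.45, 0.45, 0.1]),
}

def recognize(test_seqs, detected_object):
    """Fuse per-group HMM log-likelihoods with the object-conditioned prior."""
    log_post = np.log(object_prior[detected_object])
    for a_idx, action in enumerate(ACTIONS):
        for group in FEATURE_GROUPS:
            log_post[a_idx] += models[(action, group)].score(test_seqs[group])
    return ACTIONS[int(np.argmax(log_post))]

test = {g: make_sequences(offset=2, dim=d, n_seq=1)[0] for g, d in FEATURE_GROUPS.items()}
print(recognize(test, detected_object="screwdriver"))  # expected: "screw"
```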
