Abstract

Action recognition has been gaining research interest due to its wide range of applications. The main contribution of this manuscript is a human–robot interaction framework that relies on dimensionality reduction of the system's inputs so that a smaller dataset suffices to train an Artificial Neural Network. Our motivation is the development of a Socially Assistive Robotics application. In summary, we choose nine standard actions to guide a robot and two neutral ones to represent stand-by or resting cases. For robustness, the dataset is created by people with different body shapes, using only 5 to 10 samples of each class per person. Offline and online tests validate the method's accuracy, and confusion matrices clarify the results. A Tic-Tac-Toe game using a ground robot exemplifies a real-world application, where each action represents a desired spot in the game. The results confirm a high accuracy, above 96.7%, in all tests. Based on this, we conclude that our preprocessing strategy and classifier can identify the action patterns even with a tiny dataset; the framework's simplicity also makes it suitable for educational purposes.
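The pipeline the abstract describes, reducing the input dimension so that a small neural network can be trained on only a handful of samples per class, can be sketched as follows. This is a hypothetical illustration, not the authors' code: the raw input size, reduced dimension, PCA-based reduction, and the synthetic stand-in data are all assumptions for the sake of a runnable example.

```python
# Hypothetical sketch (not the paper's implementation): reduce the
# dimensionality of pose-like features with PCA, then train a small
# one-hidden-layer neural network on a tiny dataset, mirroring the
# small-sample pipeline described in the abstract.
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 11          # 9 guiding actions + 2 neutral (stand-by/rest) cases
SAMPLES_PER_CLASS = 10  # the abstract reports 5 to 10 samples per class/person
RAW_DIM = 60            # assumed raw input size (e.g. joint coordinates)
REDUCED_DIM = 8         # assumed reduced dimension

# Synthetic stand-in data: one noisy cluster per action class.
centers = rng.normal(size=(N_CLASSES, RAW_DIM)) * 3.0
X = np.vstack([c + rng.normal(scale=0.3, size=(SAMPLES_PER_CLASS, RAW_DIM))
               for c in centers])
y = np.repeat(np.arange(N_CLASSES), SAMPLES_PER_CLASS)

# Dimensionality reduction via PCA (SVD on centered data), then rescale.
mean = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
Z = (X - mean) @ Vt[:REDUCED_DIM].T
Z /= Z.std()

# One-hidden-layer softmax classifier trained by plain gradient descent.
H = 16
W1 = rng.normal(scale=0.1, size=(REDUCED_DIM, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, N_CLASSES)); b2 = np.zeros(N_CLASSES)
Y = np.eye(N_CLASSES)[y]  # one-hot targets

for _ in range(500):
    A = np.tanh(Z @ W1 + b1)
    logits = A @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    dlogits = (P - Y) / len(Z)          # softmax cross-entropy gradient
    dA = dlogits @ W2.T * (1 - A**2)    # backprop through tanh
    W2 -= 0.5 * (A.T @ dlogits); b2 -= 0.5 * dlogits.sum(axis=0)
    W1 -= 0.5 * (Z.T @ dA);      b1 -= 0.5 * dA.sum(axis=0)

train_acc = (P.argmax(axis=1) == y).mean()
print(f"training accuracy: {train_acc:.2f}")
```

The point of the sketch is the trade-off the abstract highlights: projecting the raw input down to a few components drastically shrinks the number of network parameters, which is what makes training feasible with only 5 to 10 samples per class.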
