Abstract

In this study, we present a novel approach that derives hand poses from haptic controller input using machine learning. The values received from the haptic controller correspond to the movements of five fingers, each assigned a value between 0.0 and 1.0 according to the applied pressure. The wide range of possible finger movements requires a substantial amount of motion capture data, making manual data integration difficult; this challenge is primarily due to the need to process and incorporate large volumes of diverse movement information. To address it, our proposed method automates the process by using machine learning to convert haptic controller inputs into hand poses. A model is trained with supervised learning, in which hand poses are matched with their corresponding input values, and the trained model is then used to generate hand poses in response to user input. In our experiments, we assessed the accuracy of the generated hand poses by analyzing the angles and positions of the finger joints. As the quantity of training data increased, the margin of error decreased, yielding generated poses that closely emulated real hand movements.

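As a rough illustration of the pipeline the abstract describes, the sketch below trains a regressor that maps five per-finger pressure values in [0.0, 1.0] to a flattened hand pose and evaluates it by joint-angle error. This is not the authors' implementation: the choice of model (scikit-learn's MLPRegressor), the pose representation (45 joint-angle values, assuming 3 angles for each of 15 joints), and the synthetic placeholder data are all assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the paper's code): a supervised regressor
# mapping five haptic pressure values in [0.0, 1.0] to a flattened hand pose.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

N_FINGERS = 5            # one pressure value per finger, as in the abstract
N_JOINTS = 15            # assumed: 3 joints per finger
POSE_DIM = N_JOINTS * 3  # assumed: 3 rotation angles per joint

# Placeholder for motion-capture pairs (controller input, hand pose);
# a real dataset would pair recorded controller values with captured poses.
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(5000, N_FINGERS))
y = rng.uniform(-1.0, 1.0, size=(5000, POSE_DIM))

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# Train a small multilayer perceptron; architecture is an assumption.
model = MLPRegressor(hidden_layer_sizes=(128, 128), max_iter=500, random_state=0)
model.fit(X_train, y_train)

# Evaluate by the error in predicted joint angles, mirroring the
# abstract's evaluation of joint angles and positions.
pred = model.predict(X_test)
mean_abs_err = np.abs(pred - y_test).mean()
print(f"mean absolute joint-angle error: {mean_abs_err:.4f}")
```

At inference time, the same `model.predict` call would turn a live 5-value controller reading into a pose vector, which a rendering or animation layer would then apply to a hand rig.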