Abstract

Identifying the hand configuration is a critical step in sign language translation. In this paper, we describe our approach to recognizing hand configurations in real time, with the purpose of providing accurate predictions for automatic sign language translation. To capture the hand configuration, we rely on data gloves with 14 sensors that measure the bending of the finger joints. These inputs are sampled at a frequency of 100 Hz and fed to a classifier that predicts the current hand configuration. The classification model is trained on a previously acquired, annotated sample of hand configurations. We expect this approach to be both accurate and robust, in the sense that the performance of the classification model should not vary significantly from one user to another. The results of our experimental evaluation show very high accuracy, indicating that data gloves are a good means of capturing the descriptive features of hand configurations. However, the robustness of the approach is not as good as desired: the accuracy of the classifier depends on the user, i.e., it is high when the classifier is used by the user who trained it but decreases for other users.
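
The abstract does not name the classification algorithm, so the sketch below is only a minimal illustration of the pipeline it describes: 14-sensor bend readings sampled at 100 Hz, a model trained on an annotated sample, and per-frame prediction of the current hand configuration. The random forest classifier, the synthetic training data, and the names used (SENSORS, SAMPLE_RATE_HZ, new_frame) are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

SENSORS = 14          # bend sensors on the data glove (per the abstract)
SAMPLE_RATE_HZ = 100  # glove sampling frequency (per the abstract)

# Hypothetical annotated sample: one 14-value frame per glove reading,
# labelled with the hand configuration it shows. Synthetic data here.
rng = np.random.default_rng(0)
n_configs, frames_per_config = 5, 200
X = np.vstack([
    rng.normal(loc=c, scale=0.3, size=(frames_per_config, SENSORS))
    for c in range(n_configs)
])
y = np.repeat(np.arange(n_configs), frames_per_config)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y
)

# Any frame-level classifier fits this pipeline; a random forest is one
# plausible choice, not necessarily the model used in the paper.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {accuracy_score(y_test, clf.predict(X_test)):.3f}")

# Real-time use: classify each incoming 14-sensor frame as it arrives
# from the glove at 100 Hz.
new_frame = rng.normal(loc=2, scale=0.3, size=(1, SENSORS))
print(f"predicted configuration: {clf.predict(new_frame)[0]}")
```

Because each prediction consumes a single 14-value frame, this framing naturally supports the real-time requirement: at 100 Hz the classifier has 10 ms per frame, which is ample for a small per-frame model.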
