Abstract

Accurately recognizing gestures in real time is a complex problem. It can be addressed to some extent using colored gloves and markers, but models built this way do not generalize well. Another option is boundary-based gesture recognition, but such models are more complex to build and prediction is time-consuming. In this work we address both issues by constructing a straightforward machine learning architecture that performs gesture recognition accurately in real time and can also be used to build a generalized model. Our solution is based on the premise that an architecture which can contextually recognize gestures, and adaptively identify variations of them, can be applied to problems ranging from body-language detection and sign-language translation to devising an intuitive method for controlling UAVs (Unmanned Aerial Vehicles).
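The abstract does not specify the architecture, so the following is only an illustrative sketch of a marker-free, real-time pipeline under the assumption that per-frame hand landmarks are used as features; the landmark count, gesture labels, normalization step, and classifier choice are all hypothetical rather than the paper's method.

```python
# Illustrative sketch (not the paper's architecture): classify gestures
# per frame from hand-landmark feature vectors. Landmark extraction from
# camera frames (e.g. via a hand-tracking library) is assumed and is
# stubbed out here with synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

N_LANDMARKS = 21                            # assumed: 21 (x, y) hand keypoints per frame
GESTURES = ["open_palm", "fist", "point"]   # hypothetical gesture labels


def normalize(landmarks: np.ndarray) -> np.ndarray:
    """Make features translation- and scale-invariant so the classifier
    tolerates variations of the same gesture."""
    centered = landmarks - landmarks.mean(axis=0)
    scale = np.linalg.norm(centered) or 1.0
    return (centered / scale).ravel()


# Synthetic training data standing in for labelled landmark recordings.
rng = np.random.default_rng(0)
X = np.array([normalize(rng.random((N_LANDMARKS, 2))) for _ in range(300)])
y = rng.integers(0, len(GESTURES), size=300)

clf = RandomForestClassifier(n_estimators=100).fit(X, y)


def recognize(frame_landmarks: np.ndarray) -> str:
    """Classify one frame's landmarks; intended to run once per camera frame."""
    features = normalize(frame_landmarks).reshape(1, -1)
    return GESTURES[int(clf.predict(features)[0])]


print(recognize(rng.random((N_LANDMARKS, 2))))
```

In such a setup, generalization across users would come from the invariant feature normalization and from training on varied examples of each gesture, rather than from gloves, markers, or explicit boundary extraction.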
