Abstract

Human gestures are a fundamental mode of expression, enabling human-machine interaction and new possibilities in user interfaces. Amid ongoing technological advances, gesture recognition research has gained prominence: gesture samples are easy to acquire yet intricate to delineate, so the topic remains a significant subject of study. Existing techniques typically rely on PC-based OpenCV and the computational power of deep learning, which makes them complex and resource-hungry. This paper presents an experimental framework for gesture recognition on a mobile FPGA platform: the DE2-115 FPGA serves as the foundation for image recognition, while an 8051-family microcontroller assists with audio synthesis. The emphasis is on recognizing basic gestures precisely and accurately, particularly those in the rock-paper-scissors taxonomy. This research underscores the potential of FPGAs for efficient gesture recognition on mobile platforms. The experiments conducted in this work successfully recognize simple gestures such as the numbers 1, 2, 3, and 4, as well as rock, paper, and scissors.
