Abstract

Artificial intelligence holds promise for improving military operations, and gestures have long been among the simplest and most powerful media for communication. This paper proposes a system that identifies hand gestures and translates them into spoken words, so that soldiers on the battlefield can communicate readily with one another. We utilise computer-vision techniques including Haar cascade classifiers, a convolutional neural network (CNN), and MediaPipe. The process comprises three techniques: first, a hand-identification stage that draws bounding boxes around the detected hand and displays the magnified image in a separate window using OpenCV and Matplotlib; second, a skeletal connection model that projects the hand's joint landmarks in real time, captured at 30 fps, using MediaPipe's hand-tracking libraries; and finally, an audio element attached to each gesture class, so that a recognised gesture is rendered as speech. A dataset of around 10,000 images across five classes is built, and a CNN architecture is used to classify them. The results demonstrate the potential of AI for military applications.
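To make the first two stages concrete, the following is a minimal sketch of real-time hand detection and skeletal landmark drawing with OpenCV and MediaPipe's hand-tracking solution. This is not the authors' published code; the camera index, confidence thresholds, and window name are assumptions.

```python
# Minimal sketch of stages 1-2: capture webcam frames and overlay
# MediaPipe's 21-point hand skeleton (joint landmarks and connections)
# in real time. Camera index, thresholds, and window name are assumed.
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

cap = cv2.VideoCapture(0)  # default webcam; typically ~30 fps
with mp_hands.Hands(max_num_hands=1,
                    min_detection_confidence=0.5,
                    min_tracking_confidence=0.5) as hands:
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # MediaPipe expects RGB input; OpenCV captures BGR.
        results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
        if results.multi_hand_landmarks:
            for hand_lms in results.multi_hand_landmarks:
                # Draw the skeletal joint connections over the frame.
                mp_draw.draw_landmarks(frame, hand_lms,
                                       mp_hands.HAND_CONNECTIONS)
        cv2.imshow("Hand tracking", frame)
        if cv2.waitKey(1) & 0xFF == 27:  # Esc to quit
            break
cap.release()
cv2.destroyAllWindows()
```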
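For the classification and audio stages, one plausible shape of the approach is sketched below, assuming a small Keras CNN over 64x64 grayscale images and pyttsx3 for text-to-speech. The layer sizes, input resolution, class labels, and choice of TTS engine are all assumptions, since the abstract does not specify the architecture.

```python
# Sketch of the five-class gesture CNN and the per-class audio output.
# Layer sizes, 64x64 grayscale input, class labels, and pyttsx3 are
# assumptions; the abstract does not state the architecture or TTS engine.
from tensorflow import keras
from tensorflow.keras import layers
import pyttsx3

NUM_CLASSES = 5
CLASS_WORDS = ["advance", "halt", "retreat", "cover", "regroup"]  # hypothetical labels

model = keras.Sequential([
    layers.Input(shape=(64, 64, 1)),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

def speak_gesture(class_index: int) -> None:
    """Convert a predicted gesture class to a spoken word."""
    engine = pyttsx3.init()
    engine.say(CLASS_WORDS[class_index])
    engine.runAndWait()
```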
