Abstract

Gesture-based control has gained prominence as an intuitive and natural means of interaction with unmanned aerial vehicles (UAVs). This paper presents a real-time gesture-based control system for UAVs that leverages the multimodal fusion of Frequency Modulated Continuous Wave (FMCW) radar and vision sensors, aiming to enhance user experience through precise and responsive UAV control via hand gestures. The research focuses on developing an effective fusion framework that combines the complementary advantages of FMCW radar and vision sensors. FMCW radar provides robust range and velocity measurements, while vision sensors capture fine-grained visual information. By integrating data from these modalities, the system achieves a comprehensive understanding of hand gestures, resulting in improved gesture recognition accuracy and robustness. The proposed system comprises three main stages: data acquisition, gesture recognition, and multimodal fusion. In the data acquisition stage, synchronized data streams from FMCW radar and vision sensors are captured. Then, machine learning algorithms are employed in the gesture recognition stage to classify and interpret hand gestures. Finally, the multimodal fusion stage aligns and fuses the data, creating a unified representation that captures the spatial and temporal aspects of hand gestures, enabling real-time control commands for the UAV. Experimental results demonstrate the system's effectiveness in accurately recognizing and responding to hand gestures. The multimodal fusion of FMCW radar and vision sensors enables a robust and versatile gesture-based control interface.
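The abstract describes fusing synchronized radar and vision data into a unified per-frame representation before classification. The sketch below illustrates one common way such feature-level fusion can be done: per-modality normalization followed by concatenation. The feature dimensions, the `fuse_features` helper, and the normalization choice are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical feature dimensions -- the paper does not specify them.
RADAR_DIM = 32   # e.g. flattened range-Doppler statistics from the FMCW radar
VISION_DIM = 64  # e.g. hand-region appearance features from the vision sensor

def fuse_features(radar_feat: np.ndarray, vision_feat: np.ndarray) -> np.ndarray:
    """Feature-level fusion: z-score normalize each modality, then concatenate.

    Normalizing per modality keeps the radar and vision features on a
    comparable scale so neither dominates the fused vector, which then
    captures both range/velocity (radar) and fine-grained visual cues
    for one synchronized time step.
    """
    def zscore(x: np.ndarray) -> np.ndarray:
        return (x - x.mean()) / (x.std() + 1e-8)
    return np.concatenate([zscore(radar_feat), zscore(vision_feat)])

# Example: fuse one synchronized radar/vision frame pair.
rng = np.random.default_rng(0)
fused = fuse_features(rng.normal(size=RADAR_DIM), rng.normal(size=VISION_DIM))
print(fused.shape)  # (96,)
```

A sequence of such fused vectors could then feed a temporal classifier whose predicted gesture class is mapped to a UAV control command, matching the pipeline stages the abstract outlines.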
