Abstract
This article presents a system for controlling quadcopters with gestures recognized by a neural-network-based model. A method built on a combined deep learning model is proposed that provides real-time recognition with minimal computational cost. The implementation offers two ways of controlling the quadcopter: via gestures or via the keyboard. Functionality is also provided for adding new gestures for recognition using interactive code in the Jupyter Lab web application. A special mode allows the user to create a dataset for a new gesture directly from the quadcopter camera, simplifying data collection. The operation of the control and recognition modules is demonstrated on an example in which a DJI Tello Edu drone is controlled, and the results of tests under real conditions are presented. The developed software speeds up gesture recognition and facilitates quadcopter control. Several directions for improving the system, together with their possible technical implementation, are proposed.
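To illustrate the kind of control loop such a system implies, the following minimal sketch maps recognized gesture labels to drone command strings. The gesture names and command mapping here are purely illustrative assumptions, not the paper's actual gesture set or API; a real implementation would forward the resulting command to the DJI Tello SDK.

```python
# Hypothetical gesture-to-command dispatch for a Tello-style quadcopter.
# The gesture labels and commands below are illustrative assumptions,
# not the gesture vocabulary defined in the article.
GESTURE_COMMANDS = {
    "open_palm": "takeoff",
    "fist": "land",
    "point_left": "left 30",   # move left 30 cm
    "point_right": "right 30",  # move right 30 cm
}

def gesture_to_command(label: str):
    """Return the drone command string for a recognized gesture, or None."""
    return GESTURE_COMMANDS.get(label)

def control_step(recognized_label: str, send):
    """One iteration of the control loop: dispatch a gesture if it is known.

    `send` is a callable that would transmit the command to the drone
    (e.g. over the Tello UDP command interface); here it is injected so
    the logic can be exercised without hardware.
    """
    command = gesture_to_command(recognized_label)
    if command is not None:
        send(command)
    return command
```

In such a design, keyboard control can reuse the same `send` path, so gesture and keyboard modes differ only in how the command string is produced.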