Abstract

The task of controlling a group of multicopters that perform coordinated actions while flying at short distances from one another cannot be solved by a standard on-board autopilot relying on GPS or GLONASS signals, because these signals give large positioning errors.
The problem can be solved with additional equipment that determines the distance between the multicopters and their relative position. To this end, it is proposed to mark each multicopter with a visual label in the form of a standard geometric figure or geometric body of a given color and size, and to use a technical vision system together with image recognition algorithms.
The structure of the technical vision system for the multicopter was developed, and algorithms were proposed for image processing and for calculating the change in the coordinates of the neighboring multicopter; these coordinates are transmitted to the control system to introduce the necessary motion correction.
In this work, the reference object is identified in the scene image by its color. Compared with other approaches, this method is highly efficient because it requires only one pass over each pixel, which gives a significant speed advantage when processing video stream frames. Based on the analysis, the RGB color model with a 24-bit color depth was chosen. Since the lighting can change during flight, the color is specified by limits on the R, G, and B components.
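For illustration only, a minimal C sketch of such a single-pass color segmentation is given below; the structure names, function names, and the assumed packed 24-bit RGB frame layout are placeholders for the example and are not taken from the paper.

    #include <stdint.h>
    #include <stddef.h>

    /* Assumed threshold limits for each 8-bit component of the label color. */
    typedef struct {
        uint8_t r_min, r_max;
        uint8_t g_min, g_max;
        uint8_t b_min, b_max;
    } ColorLimits;

    /* Result of one pass over the frame: the number of matching pixels      */
    /* (the apparent area of the label) and the centroid of that region.     */
    typedef struct {
        size_t area;      /* matching pixel count            */
        double cx, cy;    /* centroid in pixel coordinates   */
    } LabelMeasurement;

    /* Single pass over a packed 24-bit RGB frame: every pixel is tested     */
    /* once against the color limits, so the cost is one check per pixel.    */
    static LabelMeasurement segment_label(const uint8_t *rgb, int width, int height,
                                          const ColorLimits *lim)
    {
        LabelMeasurement m = {0, 0.0, 0.0};
        double sx = 0.0, sy = 0.0;

        for (int y = 0; y < height; ++y) {
            for (int x = 0; x < width; ++x) {
                const uint8_t *p = rgb + 3 * (y * width + x);
                if (p[0] >= lim->r_min && p[0] <= lim->r_max &&
                    p[1] >= lim->g_min && p[1] <= lim->g_max &&
                    p[2] >= lim->b_min && p[2] <= lim->b_max) {
                    m.area++;
                    sx += x;
                    sy += y;
                }
            }
        }
        if (m.area > 0) {
            m.cx = sx / (double)m.area;
            m.cy = sy / (double)m.area;
        }
        return m;
    }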
To determine the distance between multicopters, a simple but effective method is used: the area of the recognized object (the label on the neighboring multicopter) is measured and then compared with its actual value. Since the reference object is artificial, its area can be specified with high accuracy.
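One common way to realize this comparison, assuming an ideal pinhole camera, is to use the fact that the apparent area scales with the inverse square of the distance; the sketch below follows that assumption, and its calibration constants are illustrative rather than the authors' exact formulation.

    #include <math.h>

    /* Estimated range to the neighboring multicopter from the apparent     */
    /* label area, under the pinhole assumption area ~ 1/distance^2, i.e.   */
    /* distance = cal_distance * sqrt(cal_area / area).                     */
    /* cal_distance_m and cal_area_px come from calibration at a known      */
    /* distance.                                                            */
    static double distance_from_area(double area_px,
                                     double cal_distance_m,
                                     double cal_area_px)
    {
        if (area_px <= 0.0)
            return -1.0;  /* label not found in the frame */
        return cal_distance_m * sqrt(cal_area_px / area_px);
    }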
 The offset of the center of the object from the center of the frame is used to calculate the other two coordinates.
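A corresponding sketch for the other two coordinates, again under the pinhole assumption (the focal length in pixels is a calibration constant introduced here only for the example), converts the centroid offset from the frame center into metric offsets at the estimated range:

    /* Lateral offsets of the label relative to the camera axis, computed   */
    /* from the centroid displacement with respect to the frame center.     */
    /* focal_px is the focal length expressed in pixels (from calibration). */
    static void lateral_offsets(double cx, double cy,
                                int width, int height,
                                double distance_m, double focal_px,
                                double *dx_m, double *dy_m)
    {
        double du = cx - 0.5 * width;    /* horizontal offset, pixels */
        double dv = cy - 0.5 * height;   /* vertical offset, pixels   */
        *dx_m = du * distance_m / focal_px;
        *dy_m = dv * distance_m / focal_px;
    }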
Beforehand, the specific camera instance is calibrated both for a known value of the object's area and for its displacement along the axes relative to the center of the frame.
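Such a calibration can be reduced to recording one measurement at a known distance; the helper below is purely illustrative and derives the constants used in the previous sketches (the reference area and the focal length in pixels) from that measurement.

    /* Derive calibration constants from a single reference measurement:    */
    /* the label of known physical width is placed at a known distance, and */
    /* its measured area and width in pixels are recorded.                  */
    static void calibrate(double measured_area_px, double measured_width_px,
                          double known_distance_m, double label_width_m,
                          double *cal_area_px, double *focal_px)
    {
        *cal_area_px = measured_area_px;  /* reference area at the known distance */
        /* Pinhole model: width_px = focal_px * label_width_m / distance_m. */
        *focal_px = measured_width_px * known_distance_m / label_width_m;
    }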
A model of the technical vision system was created in the Simulink environment of the Matlab system to test the proposed algorithms. Based on the Simulink model, C code can be generated for subsequent real-time implementation of the system.
A series of model studies was conducted using a Logitech C210 webcam with a 0.3-megapixel sensor (640x480 resolution). According to the experimental results, the maximum relative error in determining the multicopter's coordinates did not exceed 6.8 %.
