Abstract
Rotor-craft are a class of VTOL UAV widely used across many fields. Within this research area, vision-guided autonomous landing has been a hot topic, and visual positioning is its most crucial component: the core of the algorithm is to compute position and attitude from changes in the visual image of the same object across time or across frames. Building on real-time self-calibration of an airborne monocular camera, this paper proposes a method that uses a designed landing mark composed of black and white squares as a cooperative target and solves the relative pose between the camera and the target to localize the UAV during landing. Fully automatic sub-pixel straight-edge detection is implemented to ensure the accuracy of visual positioning. A series of simulation experiments shows that, for a 768 × 576 pixel image with the camera about 12 m from the target and a noise deviation of up to 3 pixels, edge extraction and calibration together take 0.511 s and the position and attitude estimates remain satisfactory, indicating that the proposed algorithm can effectively support autonomous UAV landing.
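The abstract's central step, solving the relative pose between the camera and a planar black-and-white landing mark, can be sketched with a standard homography-based approach. The code below is a minimal illustration, not the paper's implementation: it assumes the four outer corners of the mark have been detected, estimates the plane-to-image homography by DLT, and decomposes it (given the intrinsic matrix K, which the paper obtains by self-calibration) into a rotation and translation. All function names here are illustrative.

```python
import numpy as np

def homography_from_points(obj_pts, img_pts):
    # DLT: solve for the 3x3 homography H mapping planar target
    # coordinates (x, y) to pixel coordinates (u, v).
    A = []
    for (x, y), (u, v) in zip(obj_pts, img_pts):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of the smallest
    # singular value (the null space of A for exact correspondences).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def pose_from_homography(H, K):
    # For a plane at z = 0, H ~ K [r1 r2 t]; undo the intrinsics and
    # recover the rotation columns and the translation.
    M = np.linalg.inv(K) @ H
    M /= np.linalg.norm(M[:, 0])   # fix scale: rotation columns are unit
    if M[2, 2] < 0:                # target must lie in front of the camera
        M = -M
    r1, r2, t = M[:, 0], M[:, 1], M[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)    # project onto the nearest rotation
    return U @ Vt, t
```

With the abstract's scenario (camera roughly 12 m above a square mark), feeding the four projected corners through these two functions recovers a translation whose z component is the camera-to-target distance.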
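The sub-pixel edge detection the abstract mentions is commonly realized by refining integer-pixel gradient peaks with a local fit. The sketch below shows one standard variant on a 1-D intensity profile (a scanline crossing a black/white boundary of the mark): differentiate, find local maxima of the gradient magnitude, and refine each by parabolic interpolation of the three samples around the peak. This is a generic technique for illustration; the paper's own detector operates on straight edges in the full image.

```python
import numpy as np

def subpixel_peak(g_prev, g_peak, g_next):
    # Fit a parabola through three gradient-magnitude samples and
    # return the sub-pixel offset of its vertex relative to the peak.
    denom = g_prev - 2.0 * g_peak + g_next
    if denom == 0.0:
        return 0.0
    return 0.5 * (g_prev - g_next) / denom

def subpixel_edges_1d(profile):
    # Locate edges in a 1-D intensity profile with sub-pixel precision.
    # g[i] is the gradient between samples i and i+1, centered at i+0.5.
    g = np.abs(np.diff(np.asarray(profile, dtype=float)))
    edges = []
    for i in range(1, len(g) - 1):
        if g[i] >= g[i - 1] and g[i] > g[i + 1] and g[i] > 0:
            edges.append(i + 0.5 + subpixel_peak(g[i - 1], g[i], g[i + 1]))
    return edges
```

In 2-D, the same parabolic refinement is applied along the gradient direction at each edge pixel, and a line is then fitted to the refined points, which is what makes the straight edges of a black-and-white square a convenient cooperative target.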