Abstract

The Unmanned Aerial Vehicle (UAV) is one of the most remarkable inventions of the last 100 years, and much research has been invested in the development of this flying robot. The landing system is one of the more challenging aspects of its development. Artificial Intelligence (AI) techniques, including reinforcement learning, have become preferred for landing system development; however, current research focuses more on system development based on image processing and advanced geometry. A novel calibration based on our previous research has been used to improve the accuracy of AprilTag pose estimation. With the help of advanced geometry applied to camera and range sensor data, a process we call Inverse Homography Range Camera Fusion (IHRCF), a pose estimation that outperforms our previous work is now possible. The range sensor used here is a Time of Flight (ToF) sensor, but the algorithm can be used with any range sensor. First, images are captured by the image acquisition device, a monocular camera. Next, the corners of the landing landmark are detected by the AprilTag detection algorithm (ATDA). The pixel correspondence between the image and the range sensor is then calculated from the calibration data. In the subsequent phase, the planar homography between the real-world locations of the range sensor data and their pixel coordinates is calculated. Next, the four AprilTag-detected corners are transformed by the inverse planar homography from pixel coordinates to world coordinates in the camera frame. Finally, knowing the world-frame corner points of the AprilTag, a rigid-body transformation can be used to estimate the pose. The IHRCF algorithm was evaluated in a CoppeliaSim simulation environment, and the test was implemented in real-time Software-in-the-Loop (SIL). IHRCF significantly outperformed the AprilTag-only detection approach in both translational and rotational terms.
To conclude, the conventional landmark detection algorithm can be improved by incorporating sensor fusion for cameras with lower radial distortion.
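The pipeline outlined in the abstract (calibration correspondences → planar homography → inverse mapping of the tag corners → rigid-body pose) can be sketched in NumPy. This is a minimal illustrative sketch, not the authors' implementation: the DLT homography fit and the Kabsch rigid-body alignment are standard techniques substituted for the paper's unspecified internals, and all function names here are our own.

```python
import numpy as np

def fit_homography(world_xy, pixels):
    """Estimate the 3x3 planar homography H mapping plane points (X, Y) to
    pixels (u, v) via the Direct Linear Transform (DLT)."""
    A = []
    for (X, Y), (u, v) in zip(world_xy, pixels):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    # The homography is the right null vector (smallest singular value) of A.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def pixels_to_plane(H, pixels):
    """Map pixel coordinates back onto the plane with the inverse homography."""
    Hinv = np.linalg.inv(H)
    pts = np.column_stack([pixels, np.ones(len(pixels))]) @ Hinv.T
    return pts[:, :2] / pts[:, 2:3]  # dehomogenize

def rigid_transform(src, dst):
    """Kabsch alignment: rotation R and translation t minimizing
    ||R @ src_i + t - dst_i|| over corresponding 3-D point sets."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    U, _, Vt = np.linalg.svd((dst - cd).T @ (src - cs))
    # Guard against a reflection solution (det = -1).
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    return R, cd - R @ cs
```

With four exact range-sensor/pixel correspondences the DLT system has an exact null-space solution; with more correspondences the same SVD yields a least-squares fit, which is why the homography step generalizes to denser range data.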

Highlights

  • Landing system design and Unmanned Aerial Vehicle (UAV) descending landmark detection have been the focus of many studies

  • We developed a sensor fusion technique that improves AprilTag detection algorithm (ATDA) pose estimation in both rotational and translational terms

  • The proposed IHRCF algorithm addresses the problem of pose estimation



Introduction

Landing system design and UAV descending landmark detection have been the focus of many studies. The prime aim of these efforts is to develop highly accurate and computationally lightweight algorithms that meet the needs of businesses and emergency services. The main drawback of landing system development research thus far is that it has centered on less complex landing platforms, such as static or minimally fluctuating ship decks. Landing surfaces with higher levels of motion complexity, such as a Stewart table, still need to be investigated for pose estimation. The work presented here is a new range and camera sensor fusion technique, applicable to complex landing tasks, that addresses the pose estimation problem for a descending surface by utilizing inverse planar homography between range sensor measurements and their corresponding pixel coordinates.
