Abstract

In this paper, the relative navigation problem for a large-scale, freely tumbling, non-cooperative target is investigated. Because the target cannot be fully captured by navigation sensors at ultra-short range due to its large size, a hybrid relative navigation algorithm is proposed based on the fusion of monocular camera and Lidar data. First, a vision-aided point cloud segmentation algorithm is developed to improve the data processing efficiency of the Lidar. Next, an attitude determination method for the target is proposed via a purely geometric approach that is robust to sensor errors. Then, geometric recognition of the target is performed based on a simultaneous localization and mapping (SLAM) approach. Finally, the feasibility and effectiveness of the proposed methods are verified by numerical simulations, which indicate that the new relative navigation framework achieves more accurate attitude tracking of the target, with simultaneous geometric recognition, than classic attitude tracking methods.
