Abstract

Missions requiring autonomous, close-proximity operations of spacecraft, such as On-Orbit Servicing, On-Orbit Assembly and Active Debris Removal, have become a thriving topic in the aerospace research community over the last decades, not only from economic, operational, and scientific perspectives, but also as a means of ensuring the sustainability of the space environment. These operations involve a variety of technological challenges, most of which are related to the need for autonomous and safe Guidance, Navigation and Control systems. Since the future of these mission scenarios is closely tied to spacecraft standardisation and modularity, relative navigation employing monocular cameras on servicing platforms to approach targets equipped with artificial markers for pose estimation has drawn considerable attention. Following this trend, this paper presents an original vision-based pose estimation architecture for relative navigation with respect to passively cooperative targets equipped with ArUco markers. The proposed architecture provides two operating modes, namely Acquisition and Tracking. The first performs ArUco marker detection in the hue-saturation-value image representation, marker identification by decoding the built-in binary code, and pose computation without a-priori knowledge. The second takes advantage of prior pose estimates to speed up the entire processing pipeline. Performance is assessed through an extensive numerical simulation campaign whose test scenario is the final approach phase of a rendezvous manoeuvre towards a satellite belonging to a large constellation in Low Earth Orbit, with realistic synthetic images generated by the Planet and Asteroid Natural scene Generation Utility tool. Dedicated tests on the Acquisition mode show that successful marker detection and pose initialisation are achieved for up to 99.76% of the possible relative position and attitude states of the chaser with respect to the target at the beginning of the final approach trajectory. As the chaser gets closer to the target, results highlight significant robustness of both operating modes against illumination conditions and uncertainties in the knowledge of the camera intrinsic parameters. Overall, the architecture achieves pose estimation accuracies down to millimetre and sub-degree levels.
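
To make the two-mode idea concrete, the following Python sketch approximates the pipeline with OpenCV's contrib ArUco module (4.7+ API). It is not the paper's implementation: the marker dictionary, camera matrix, marker size, and the use of the HSV value channel as detection input are illustrative assumptions, and the Tracking mode is emulated simply by seeding an iterative PnP solver with the previous pose rather than by reproducing the paper's full pipeline acceleration.

    import cv2
    import numpy as np

    # Placeholder intrinsics and marker geometry -- not taken from the paper.
    K = np.array([[800.0, 0.0, 512.0],
                  [0.0, 800.0, 512.0],
                  [0.0, 0.0, 1.0]])
    DIST = np.zeros(5)        # lens distortion assumed negligible
    MARKER_SIDE = 0.10        # marker side length in metres (assumed)

    # 3-D marker corners in the marker frame (z = 0 plane), in ArUco corner order:
    # top-left, top-right, bottom-right, bottom-left.
    OBJ_PTS = np.array([[-MARKER_SIDE / 2,  MARKER_SIDE / 2, 0.0],
                        [ MARKER_SIDE / 2,  MARKER_SIDE / 2, 0.0],
                        [ MARKER_SIDE / 2, -MARKER_SIDE / 2, 0.0],
                        [-MARKER_SIDE / 2, -MARKER_SIDE / 2, 0.0]], dtype=np.float32)

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())

    def estimate_pose(bgr_image, prior_rvec=None, prior_tvec=None):
        """Acquisition when no prior pose is given; Tracking when one is."""
        # Work on the HSV value channel, mirroring the abstract's HSV-based detection.
        value = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)[:, :, 2]
        corners, ids, _ = detector.detectMarkers(value)  # detection + ID decoding
        if ids is None:
            return None
        # For brevity, use only the first detected marker (the paper handles several).
        img_pts = corners[0].reshape(-1, 2).astype(np.float32)
        tracking = prior_rvec is not None and prior_tvec is not None
        ok, rvec, tvec = cv2.solvePnP(
            OBJ_PTS, img_pts, K, DIST,
            prior_rvec if tracking else None,
            prior_tvec if tracking else None,
            useExtrinsicGuess=tracking,        # Tracking: seed with the prior pose
            flags=cv2.SOLVEPNP_ITERATIVE)
        return (rvec, tvec) if ok else None

A first call with only the image plays the role of Acquisition; feeding the returned rvec and tvec back in on the next frame plays the role of Tracking.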
