Abstract

Spacecraft pose estimation (SPE) plays a vital role in relative navigation systems for on-orbit servicing and active debris removal. Deep learning-based methods have achieved great success in object pose estimation. However, in challenging onboard SPE missions, most existing convolutional neural network (CNN) methods fail to capture long-range visual attention, reducing accuracy and robustness. In this paper, we present an end-to-end multi-task Pyramid Transformer SPE network (PVSPE) consisting of two novel feature extraction modules: EnhancedPVT (EnPVT) and SlimGFPN. The EnPVT module combines global spatial and channel attention, while the SlimGFPN module fuses features more effectively. Matrix Fisher and multivariate Gaussian distributions are further employed to model the uncertainty of pose regression and improve its accuracy. Extensive experiments are carried out on the challenging SPEED+ and SHIRT datasets to validate performance on pose estimation and vision-based navigation, respectively. The results show that the proposed PVSPE model achieves high SPE accuracy on the SPEED+ dataset even under varying scales and severe illumination, demonstrating its robustness and strong generalization. Leveraging the uncertainty model of PVSPE, the vision-based navigation pipeline, combined with Kalman filters, accurately estimates the satellite pose in challenging rendezvous scenarios on the SHIRT dataset, achieving degree-level attitude errors and centimeter-level translation accuracy at steady state.
