This paper presents a deep learning-based pipeline for estimating the pose of an uncooperative target spacecraft from a single grayscale monocular image. Enabling autonomous vision-based relative navigation in close proximity to a noncooperative resident space object would be especially appealing for mission scenarios such as on-orbit servicing and active debris removal. The relative pose estimation pipeline proposed in this work leverages state-of-the-art convolutional neural network (CNN) architectures to detect features of the target spacecraft using monocular vision. Specifically, the overall pipeline is composed of three main subsystems. The input image is first processed by an object-detection CNN that localizes the bounding box enclosing the target. A second CNN then regresses the locations of semantic keypoints of the spacecraft. Finally, a geometric optimization algorithm exploits the detected keypoint locations to solve for the relative pose. The proposed pipeline demonstrated centimeter-/degree-level pose accuracy on the Spacecraft Pose Estimation Dataset (SPEED), along with considerable robustness to changes in illumination and background conditions. In addition, the architecture was shown to generalize well to real images, despite the CNNs having been trained exclusively on synthetic data from SPEED.
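To make the three-stage structure concrete, the sketch below shows one plausible arrangement of such a pipeline. It is illustrative only: `detect_bbox` and `regress_keypoints` are hypothetical stand-ins for the two CNNs, and OpenCV's `solvePnP` is used as a generic Perspective-n-Point solver in place of the paper's geometric optimization step, whose exact formulation is not specified here.

```python
# Illustrative sketch of the three-stage pipeline (not the authors' code).
# Assumptions: `detect_bbox` and `regress_keypoints` are hypothetical
# placeholders for the two CNNs; cv2.solvePnP stands in for the geometric
# optimization stage described in the abstract.
import cv2
import numpy as np

def estimate_pose(image, model_points_3d, camera_matrix, dist_coeffs,
                  detect_bbox, regress_keypoints):
    """Estimate the relative pose of the target from one grayscale image.

    model_points_3d  : (N, 3) known 3-D keypoint coordinates in the target frame.
    detect_bbox      : callable, image -> (x, y, w, h) bounding box (hypothetical).
    regress_keypoints: callable, image crop -> (N, 2) pixel keypoints (hypothetical).
    """
    # Stage 1: localize the target with the object-detection CNN.
    x, y, w, h = detect_bbox(image)
    crop = image[y:y + h, x:x + w]

    # Stage 2: regress semantic keypoints inside the crop, then map them
    # back to full-image pixel coordinates.
    keypoints_2d = regress_keypoints(crop) + np.array([x, y], dtype=np.float64)

    # Stage 3: solve the Perspective-n-Point problem for the relative pose.
    ok, rvec, tvec = cv2.solvePnP(
        model_points_3d.astype(np.float64),
        keypoints_2d.astype(np.float64),
        camera_matrix, dist_coeffs)
    if not ok:
        raise RuntimeError("PnP failed to converge")
    rotation, _ = cv2.Rodrigues(rvec)  # 3x3 rotation matrix
    return rotation, tvec              # target pose relative to the camera
```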