Abstract

Imaging cameras are cost-effective sensors for spacecraft navigation. Image-driven techniques that extract the target spacecraft from its background are efficient and require no pretraining. In this paper, we introduce several image-driven foreground extraction methods, including one that combines difference-of-Gaussians-based scene detection with graph manifold ranking-based foreground saliency generation. We successfully apply our foreground extraction method to infrared images from the STS-135 flight mission captured by the space shuttle’s Triangulation and LIDAR Automated Rendezvous and Docking System (TriDAR) thermal camera. Our saliency approach achieves state-of-the-art performance while reducing processing time by an order of magnitude relative to traditional methods. Furthermore, we develop a new uncooperative spacecraft pose estimation method by combining our foreground extraction technique with level-set region-based pose estimation, enhanced with novel initialization and gradient descent schemes. Our method is validated on synthetically generated motion sequences of the Envisat, Radarsat, and International Space Station models, and further validated on real rendezvous flight images of the International Space Station.
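As context for the difference-of-Gaussians scene detection step mentioned above, the sketch below illustrates the general DoG band-pass operation: the image is blurred at a fine and a coarse scale, and the difference highlights compact bright structures (such as a warm spacecraft against the cold background of space in a thermal image). This is a minimal, generic illustration only, not the authors' implementation; the function names and the sigma values are illustrative choices.

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    # 1-D Gaussian kernel, normalized to sum to 1.
    if radius is None:
        radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def blur(img, sigma):
    # Separable Gaussian blur: filter along rows, then along columns.
    k = gaussian_kernel(sigma)
    tmp = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, tmp)

def difference_of_gaussians(img, sigma_fine=1.0, sigma_coarse=3.0):
    # Band-pass response: fine-scale blur minus coarse-scale blur.
    # Compact bright regions produce strong positive responses,
    # while smooth background variation is suppressed.
    return blur(img, sigma_fine) - blur(img, sigma_coarse)
```

For example, applying `difference_of_gaussians` to a synthetic frame containing a single bright patch yields a response map whose maximum sits on that patch, which is the property a scene-detection stage exploits.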
