As space technology advances, an increasing number of spacecraft are being launched into orbit, making it essential to monitor and maintain satellites to ensure safe and stable operations. Acquiring 3D information of space targets enables the accurate assessment of their shape, size, and surface damage, providing critical support for on-orbit servicing activities. Existing 3D reconstruction techniques for space targets, which mainly rely on laser point cloud measurements or image sequences, cannot cope with scenarios where observation data and viewpoints are limited. We propose a novel method for high-quality 3D reconstruction of space targets under such conditions. The proposed approach begins with a preliminary 3D reconstruction using a neural radiance field (NeRF) model, guided by observed optical images of the space target and depth priors extracted from a customized monocular depth estimation (MDE) network. The NeRF is then employed to synthesize optical images from unobserved viewpoints. The corresponding depth information for these viewpoints, derived from the same MDE network, is integrated as a supervisory signal to iteratively refine the 3D reconstruction. By exploiting the MDE network and the NeRF jointly, the proposed scheme iteratively optimizes the 3D reconstruction of space targets from seen to unseen viewpoints. To suppress the excessive noise introduced by synthesized unseen viewpoints, we also incorporate a confidence modeling mechanism together with a relative depth ranking loss. Experimental results demonstrate that the proposed method achieves superior 3D reconstruction quality under sparse inputs, outperforming traditional NeRF and DS-NeRF models in terms of perceptual quality and geometric accuracy.
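Because monocular depth priors are reliable only up to an unknown scale and shift, a ranking loss of the kind the abstract mentions typically supervises the ordering of rendered depths rather than their absolute values. The following is a minimal PyTorch-style sketch of one plausible formulation; the pair-sampling scheme, the margin parameter, and the function name are illustrative assumptions, not details taken from the paper.

```python
import torch

def relative_depth_ranking_loss(rendered_depth: torch.Tensor,
                                prior_depth: torch.Tensor,
                                num_pairs: int = 4096,
                                margin: float = 1e-4) -> torch.Tensor:
    """Hinge-style ranking loss over randomly sampled pixel pairs.

    rendered_depth: (N,) depths rendered by the NeRF for a batch of rays.
    prior_depth:    (N,) depths for the same pixels from the monocular
                    depth estimation (MDE) network; only their relative
                    ordering is trusted, not their absolute scale.
    """
    n = rendered_depth.shape[0]
    device = rendered_depth.device
    idx_a = torch.randint(0, n, (num_pairs,), device=device)
    idx_b = torch.randint(0, n, (num_pairs,), device=device)

    # Ordering implied by the depth prior: +1 if pixel a is farther
    # than pixel b, -1 if nearer. Ties carry no ordering information
    # and are masked out below.
    order = torch.sign(prior_depth[idx_a] - prior_depth[idx_b])
    valid = order != 0

    # Penalize rendered-depth pairs whose ordering contradicts the
    # prior by more than the margin.
    diff = rendered_depth[idx_a] - rendered_depth[idx_b]
    loss = torch.clamp(margin - order * diff, min=0.0)
    return loss[valid].mean()
```

In the full pipeline described by the abstract, a term like this would be weighted per ray by the confidence assigned to the synthesized viewpoint, so that unreliable pseudo-views contribute less to the refinement; that weighting is likewise an assumption about the design rather than a quoted detail.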