Abstract

In this paper, a novel 3D reconstruction framework is proposed to recover the 3D structural model of a space object from multi-view images captured by a visible sensor. Given an image sequence, the framework first estimates the relative camera poses and recovers the depths of surface points with a structure-from-motion (SFM) method; the patch-based multi-view stereo (PMVS) algorithm is then used to generate a dense 3D point cloud. To resolve the incorrect matches arising from the symmetric structures and repeated textures of space objects, a new strategy is introduced in which images are added to SFM in imaging order. In addition, a refinement process is applied to the recovered point cloud that exploits the structural prior that most sub-components of artificial space objects are composed of basic geometric shapes. The proposed framework is tested on both simulated and real image datasets. Experimental results show that the recovered point cloud models of space objects are accurate and cover the surface completely. Moreover, the refinement effectively filters out outliers and severely noisy points, yielding a distinct improvement in the structure and visualization of the recovered point cloud.
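The refinement step described above relies on fitting basic geometric primitives (for example, planes) to the recovered point cloud and discarding points that fit no primitive. A minimal sketch of one such primitive fit, RANSAC plane fitting in NumPy, is shown below; the function and parameter names are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, dist_thresh=0.05, rng=None):
    """Fit a plane to a 3D point cloud with RANSAC.

    Plane model: normal . p + d = 0. Returns ((normal, d), inlier_mask).
    Hypothetical helper sketching the idea of primitive fitting; not the
    authors' refinement procedure.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    best_mask = np.zeros(len(points), dtype=bool)
    best_model = None
    for _ in range(n_iters):
        # Sample 3 distinct points and form a candidate plane from them.
        idx = rng.choice(len(points), size=3, replace=False)
        p0, p1, p2 = points[idx]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:  # degenerate (near-collinear) sample; skip it
            continue
        normal /= norm
        d = -normal.dot(p0)
        # Points within dist_thresh of the candidate plane are inliers.
        dist = np.abs(points @ normal + d)
        mask = dist < dist_thresh
        if mask.sum() > best_mask.sum():
            best_mask, best_model = mask, (normal, d)
    return best_model, best_mask
```

Points flagged as outliers by every fitted primitive can then be removed, and inliers can optionally be projected onto their plane to suppress noise.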

Highlights

  • Detecting, tracking, cataloging and identifying man-made objects orbiting Earth comprise one of the fundamental requirements of space surveillance [1]

  • We address the problem of recovering space object models by reconstruction from images captured by space-based visible sensors

  • We propose a reconstruction framework for recovering the structural models of space objects using a visible sensor


Summary

Introduction

Detecting, tracking, cataloging and identifying man-made objects orbiting Earth comprise one of the fundamental requirements of space surveillance [1]. Zhang et al. [7] proposed an improved pose estimation algorithm based on 2D-3D correspondences between the input image and the 3D model of a space object. To handle both recognition and pose estimation in a single vision-based framework, methods based on kernel regression [12] and homeomorphic manifold analysis [13] have been proposed. All of these methods require, directly or indirectly, prior 3D models of space objects. We address the problem of recovering such models by reconstruction from images captured by space-based visible sensors.
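Pose estimation from 2D-3D correspondences, as in [7], typically scores a candidate pose by the reprojection error between observed image points and the projected model points. The sketch below illustrates that projection model with a pinhole camera in NumPy; the helper names and example intrinsics are assumptions for illustration, not the authors' code.

```python
import numpy as np

def project(points_3d, K, R, t):
    """Project 3D model points into the image with a pinhole camera.

    Homogeneous model: x = K [R | t] X. Hypothetical helper illustrating
    the 2D-3D correspondence model used in pose estimation.
    """
    cam = points_3d @ R.T + t      # world -> camera coordinates
    uv = cam @ K.T                 # apply the intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]  # perspective divide to pixel coords

def reprojection_error(points_2d, points_3d, K, R, t):
    """Mean Euclidean distance between observed and projected points."""
    diff = project(points_3d, K, R, t) - points_2d
    return float(np.mean(np.linalg.norm(diff, axis=1)))
```

A pose estimator searches over (R, t) to minimize this error; the reconstruction framework in this paper supplies the 3D model that such methods take as input.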

Overview of the Framework
Feature Point Detecting and Matching
Structure from Motion
Dense Reconstruction
Point Cloud Refinement
Refinement for Planes
Refinement for Rotationally-Symmetric Structures
Experiments and Analyses
Data Collection
Parameter Determination for Point Cloud Refinement
Reconstruction of Objects with Basic Geometry
Reconstruction of Space Object
Findings
Conclusions