Abstract

Non-cooperative rendezvous, capture, and removal of large space debris require robust and fast tracking of the non-cooperative target. This paper proposes an improved algorithm for real-time visual tracking of a space non-cooperative target based on its three-dimensional model, without requiring any artificial markers. The target is assumed to have a known 3D model and to remain constantly in the field of view of a camera mounted on the chaser. Space non-cooperative targets are regarded as weakly textured man-made objects whose 3D design documents are available. Because space appears black, we assume the object sits alone in an otherwise empty scene, so only the object is visible against a dark image background. Since edge features offer good invariance to illumination changes and image noise, our method relies on monocular vision and uses 3D-2D correspondences between edges of the 3D model and their corresponding 2D edges in the image. We propose to remove sample points that are susceptible to false matches, based on geometric distances arising from the perspective projection of the 3D model. To improve robustness, we compare local region similarity to obtain better matches between sample points and image edge points. The algorithm proves efficient and shows improved accuracy without significant computational burden. The results show promising tracking performance, with mean errors of less than 3 degrees and less than 1.5% of range.
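
To make the matching step concrete, the sketch below illustrates one plausible reading of the edge-correspondence search described above: for each sample point on the projected 3D model edges, candidate image edges are collected along the point's 2D normal, then disambiguated by local region similarity. The abstract does not give the authors' implementation, so every function name, threshold, and parameter here is an assumption; the sketch uses OpenCV gradients and normalized cross-correlation purely for illustration.

```python
# Illustrative sketch only, not the paper's implementation.
# One frame of model-based edge matching: search along the 2D normal of each
# projected model sample point for strong image gradients, then disambiguate
# candidates by local region similarity (normalized cross-correlation).

import numpy as np
import cv2


def gradient_magnitude(gray):
    """Per-pixel gradient magnitude of a grayscale image (float32)."""
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    return np.hypot(gx, gy)


def search_along_normal(grad_mag, sample_pt, normal, search_len=15, grad_thresh=30.0):
    """Collect candidate edge pixels along the normal of one projected sample point.

    sample_pt and normal are 2D (x, y) arrays; normal should be unit length.
    """
    h, w = grad_mag.shape
    candidates = []
    for t in range(-search_len, search_len + 1):
        x, y = np.round(sample_pt + t * normal).astype(int)
        if 0 <= x < w and 0 <= y < h and grad_mag[y, x] > grad_thresh:
            candidates.append((x, y))
    return candidates


def best_match_by_region_similarity(gray, template_patch, candidates, patch=7):
    """Pick the candidate whose local patch best correlates with a reference patch.

    template_patch is a float32 (patch x patch) region, e.g. taken around the
    matched point in the previous frame.
    """
    half = patch // 2
    best, best_score = None, -1.0
    for (x, y) in candidates:
        y0, y1, x0, x1 = y - half, y + half + 1, x - half, x + half + 1
        if y0 < 0 or x0 < 0 or y1 > gray.shape[0] or x1 > gray.shape[1]:
            continue
        region = gray[y0:y1, x0:x1].astype(np.float32)
        # Same-size template and region: matchTemplate returns a single score.
        score = cv2.matchTemplate(region, template_patch, cv2.TM_CCOEFF_NORMED)[0, 0]
        if score > best_score:
            best, best_score = (x, y), score
    return best, best_score
```

In a complete tracker, these 3D-2D correspondences would feed a pose estimation step (for example, minimizing point-to-edge reprojection error), and sample points judged prone to false matches, such as those projecting close to another model edge, would be discarded beforehand; neither step is detailed in the abstract.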
