Abstract

In the final approach stage of spacecraft rendezvous and docking, the pose parameters of the target spacecraft must satisfy the docking or berthing capture conditions. Visible-light visual measurement systems are increasingly employed in spacecraft ground tests to extract geometric features of the spacecraft and thereby compute and verify the accuracy of the pose parameters. Most current feature-segmentation algorithms cannot cope with the scale changes caused by spacecraft motion or with the imaging noise introduced by multilayer insulation (MLI) material. To overcome these challenges, we propose a novel feature-segmentation algorithm based on the framework of deep convolutional neural networks. First, a fully convolutional model with an encoder-decoder structure is constructed from ground-test data; a feature concatenation module is combined with the network backbone to improve segmentation performance. Then, a comprehensive loss function is presented and optimized for the pose characteristics of the spacecraft in the approach phase. Furthermore, a dedicated spacecraft simulation dataset for training and testing the segmentation model is built through data augmentation. The experimental results verify that the proposed method accurately segments spacecraft at different scales, suppresses the interference caused by MLI material, and is robust to motion blur. The pixel accuracy of the proposed method reaches 96.5%, and the mean intersection over union reaches 93.0%.
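The encoder-decoder structure with a feature concatenation module can be sketched as below. This is an illustrative reconstruction rather than the authors' network: the channel widths, depth, and layer names are assumptions, and only the general pattern, encoder feature maps concatenated into the decoder before each refinement stage, reflects what the abstract describes.

```python
# Minimal sketch of an encoder-decoder segmentation network whose decoder
# concatenates encoder features (skip connections) before each conv block.
# Channel widths and depth are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions with batch norm and ReLU.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class EncoderDecoderSeg(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        self.enc1 = conv_block(3, 64)
        self.enc2 = conv_block(64, 128)
        self.pool = nn.MaxPool2d(2)
        self.bottleneck = conv_block(128, 256)
        self.up2 = nn.ConvTranspose2d(256, 128, 2, stride=2)
        self.dec2 = conv_block(256, 128)   # 128 (skip) + 128 (upsampled)
        self.up1 = nn.ConvTranspose2d(128, 64, 2, stride=2)
        self.dec1 = conv_block(128, 64)    # 64 (skip) + 64 (upsampled)
        self.head = nn.Conv2d(64, num_classes, 1)

    def forward(self, x):
        e1 = self.enc1(x)                   # full-resolution features
        e2 = self.enc2(self.pool(e1))       # 1/2 resolution
        b = self.bottleneck(self.pool(e2))  # 1/4 resolution
        # Feature concatenation: fuse upsampled decoder features with
        # same-resolution encoder features along the channel axis.
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                # per-pixel class logits
```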
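The abstract does not give the exact form of the comprehensive loss function. One common composite for segmentation targets whose scale varies strongly is cross-entropy plus a Dice term; the sketch below assumes that form and an illustrative weight, purely as a plausible reading of such a loss.

```python
# Hypothetical composite segmentation loss: cross-entropy plus Dice.
# The weighting and the choice of terms are assumptions for illustration.
import torch
import torch.nn.functional as F

def composite_loss(logits, target, dice_weight=0.5, eps=1e-6):
    """logits: (N, C, H, W) raw scores; target: (N, H, W) class indices."""
    ce = F.cross_entropy(logits, target)
    probs = F.softmax(logits, dim=1)
    one_hot = F.one_hot(target, logits.shape[1]).permute(0, 3, 1, 2).float()
    inter = (probs * one_hot).sum(dim=(0, 2, 3))
    union = probs.sum(dim=(0, 2, 3)) + one_hot.sum(dim=(0, 2, 3))
    dice = 1.0 - ((2 * inter + eps) / (union + eps)).mean()
    return ce + dice_weight * dice
```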
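The two reported metrics, pixel accuracy (PA) and mean intersection over union (mIoU), have standard definitions; the sketch below computes them from a confusion matrix. This is generic evaluation code, not taken from the paper.

```python
# Standard PA and mIoU computed from a class confusion matrix.
import numpy as np

def segmentation_metrics(pred, gt, num_classes):
    """pred, gt: integer label arrays of the same shape."""
    mask = (gt >= 0) & (gt < num_classes)
    cm = np.bincount(
        num_classes * gt[mask].astype(int) + pred[mask].astype(int),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    pa = np.diag(cm).sum() / cm.sum()                 # pixel accuracy
    iou = np.diag(cm) / (
        cm.sum(axis=1) + cm.sum(axis=0) - np.diag(cm)
    )
    return pa, np.nanmean(iou)                        # PA, mIoU
```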
