Abstract

Scientific research strength in the aerospace field has become an essential criterion for measuring a country's scientific and technological level and comprehensive national power, yet many factors in space missions remain beyond direct human control. A well-known difficulty in rendezvous and docking with a non-cooperative target is that the target cannot provide its attitude information autonomously, and existing pose estimation methods for non-cooperative targets suffer from low accuracy and high resource consumption. This paper proposes a deep-learning-based pose estimation method to address these problems. The proposed method consists of two innovative components. First, You Only Look Once v5 (YOLOv5), a lightweight detection network, is used to pre-recognize non-cooperative targets. Second, concurrent spatial and channel squeeze-and-excitation modules are introduced into a lightweight High-Resolution Network (HRNet) to extend its real-time advantages, yielding the spatial and channel Squeeze-and-Excitation Lightweight High-Resolution Network (scSE-LHRNet) for pose estimation. To verify the superiority of the proposed network, experiments were conducted on a publicly available dataset and compared against existing methods using multiple evaluation metrics. The experimental results show that the proposed method dramatically reduces model complexity, effectively decreases the amount of computation, and achieves strong pose estimation performance.
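
The abstract does not spell out the scSE block's internals. Below is a minimal PyTorch sketch of the standard concurrent spatial and channel squeeze-and-excitation block (Roy et al., 2018), which the paper's module is presumably based on; the reduction ratio, the fusion by element-wise addition, and all variable names here are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class ChannelSE(nn.Module):
    """Channel SE: global average pool, bottleneck MLP, sigmoid gate per channel."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight each channel

class SpatialSE(nn.Module):
    """Spatial SE: 1x1 conv collapses channels into a per-pixel sigmoid gate."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * torch.sigmoid(self.conv(x))  # reweight each spatial location

class scSE(nn.Module):
    """Concurrent spatial and channel SE; addition is one common fusion
    (element-wise max is another variant in the literature)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.cse = ChannelSE(channels, reduction)
        self.sse = SpatialSE(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.cse(x) + self.sse(x)

# Usage: the block is shape-preserving, so it can follow any conv stage,
# e.g. a branch of a lightweight HRNet (channel width here is arbitrary).
x = torch.randn(2, 32, 64, 48)
y = scSE(32)(x)
assert y.shape == x.shape
```

Because the block leaves the tensor shape unchanged, it can in principle be dropped after any convolutional stage of a lightweight HRNet at negligible parameter cost, which is consistent with the paper's stated goal of low complexity; where exactly the authors insert it is not specified in the abstract.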
