Abstract

Target object detection based on deep learning and image processing technology is widely used in astronaut robots. However, existing detection methods are limited in speed and accuracy by the complex space environment (such as uneven illumination and particle radiation) and by insufficient training samples. Inspired by neural network structure and transfer learning, a deep learning detection method for small samples in complex environments is proposed. A depthwise separable convolution is added to the feature fusion network to reduce the number of parameters in the output feature maps, and a linear-bottleneck inverted residual structure is introduced into the backbone feature extraction network to reduce computation and memory requirements during feature extraction; together these form a backbone feature extraction and fusion network that addresses the detection speed problem. A squeeze-and-excitation (SE) attention module is placed in front of the detection head to build an SE detector, which improves detection accuracy in the complex space environment by dynamically assigning channel weights to highlight target object features in blurred images. Learning efficiency and accuracy under small-sample conditions are improved by incorporating transfer learning and establishing an evaluation function for learning samples. Experimental results show that the proposed algorithm enables astronaut robots to detect objects rapidly and accurately in complex environments: with 2200 training samples, the average detection speed and accuracy are 45.19 frames per second and 93.14%, respectively.
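
For readers unfamiliar with the two building blocks named above, the sketch below illustrates a squeeze-and-excitation channel-attention module and a linear-bottleneck inverted residual block with a depthwise convolution, written in PyTorch. It is a minimal illustration under assumed settings: the class names, channel counts, expansion factor, and reduction ratio are placeholders, not the paper's actual implementation, and the full detector architecture is not shown.

```python
import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention: learns per-channel weights
    so that target features in blurred images can be emphasized."""
    def __init__(self, channels, reduction=4):           # reduction ratio is an assumption
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)              # squeeze: global spatial average
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                                 # excitation: weights in (0, 1)
        )

    def forward(self, x):
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                      # dynamically re-weight channels

class InvertedResidual(nn.Module):
    """Linear-bottleneck inverted residual: 1x1 expansion, depthwise 3x3,
    linear 1x1 projection; cuts parameters and computation versus a
    standard 3x3 convolution of the same width."""
    def __init__(self, in_ch, out_ch, expand=6, stride=1):
        super().__init__()
        mid = in_ch * expand
        self.use_res = stride == 1 and in_ch == out_ch
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, mid, 1, bias=False),         # 1x1 expansion
            nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
            nn.Conv2d(mid, mid, 3, stride, 1, groups=mid, bias=False),  # depthwise 3x3
            nn.BatchNorm2d(mid), nn.ReLU6(inplace=True),
            nn.Conv2d(mid, out_ch, 1, bias=False),        # linear projection (no activation)
            nn.BatchNorm2d(out_ch),
        )

    def forward(self, x):
        y = self.block(x)
        return x + y if self.use_res else y

# Toy forward pass: a backbone block feeds an SE module placed ahead of the
# detection head, mirroring the ordering described in the abstract.
feat = torch.randn(1, 32, 56, 56)
feat = InvertedResidual(32, 32)(feat)
feat = SEBlock(32)(feat)
print(feat.shape)  # torch.Size([1, 32, 56, 56])
```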
