Abstract

Visual perception provides the control system with state information about the current manipulation scene and therefore plays an important role in on-orbit service (OOS) manipulation. With the development of deep learning, deep convolutional neural networks (CNNs) have been applied successfully in the field of visual perception. However, deep CNNs are effective only when a large amount of training data drawn from the same distribution as the test data is available, and real space images are difficult to obtain at the scale required for training. Deep CNNs therefore cannot be adopted directly for image recognition in OOS manipulation tasks. To address this few-shot learning problem, this paper proposes a knowledge graph-based image recognition transfer learning method (KGTL), which learns from a training dataset containing dense source-domain data and sparse target-domain data, and transfers to a test dataset containing a large amount of target-domain data. The proposed method achieves an average recognition precision of 80.5% and an average recall of 83.5%, compared with 60.2% average precision and 67.5% average recall for ResNet50-FC. The proposed method significantly improves the training efficiency of the network and the generalization performance of the model.

Highlights

  • On-orbit service (OOS) manipulation is the terminal operation that a service spacecraft carries out on a target spacecraft after their rendezvous and docking

  • The representation learning module based on convolutional neural networks (CNNs) uses convolution operation to extract image features hierarchically, while the classifier learning module based on knowledge graph adopts Graph Convolutional Networks (GCNs) to update the classifier nodes in the knowledge graph integrated with prior semantic relations

  • The simulation environment is set as the source domain defined in the problem description, and the ground physical environment as the target domain
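The classifier-learning update described in the second highlight can be sketched as a standard graph-convolution layer: class nodes carry prior semantic features, edges encode semantic relations, and each layer mixes neighboring node features to produce updated classifier vectors. The following is a minimal NumPy sketch of one such layer, H' = ReLU(D^{-1/2}(A+I)D^{-1/2} H W); the graph, feature sizes, and function name are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    A: (n, n) adjacency matrix of the class knowledge graph (hypothetical toy graph)
    H: (n, d_in) node features, e.g., semantic embeddings of class labels
    W: (d_in, d_out) learnable weight matrix
    """
    A_hat = A + np.eye(A.shape[0])        # add self-loops
    deg = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(deg ** -0.5)     # symmetric degree normalization
    H_next = D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W
    return np.maximum(H_next, 0.0)        # ReLU

# Toy knowledge graph: 3 class nodes, semantic edges 0-1 and 1-2
A = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
rng = np.random.default_rng(0)
H = rng.normal(size=(3, 8))               # prior semantic node features
W = rng.normal(size=(8, 4))
classifiers = gcn_layer(A, H, W)          # each row: an updated classifier node
print(classifiers.shape)                  # (3, 4)
```

In the method as described, the rows of the final layer's output would serve as classifier weights applied to the CNN image features, so that classes seen only sparsely in the target domain still inherit information from semantically related classes.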


Summary

Introduction

On-orbit service (OOS) manipulation is the terminal operation that a service spacecraft carries out on a target spacecraft after their rendezvous and docking. A CNN-based visual perception method can provide the pose of the target in the current scene to a traditional control method, and it can also serve as a component of deep reinforcement learning (DRL), extracting more abstract features through agent-environment interaction. While maintaining recognition accuracy, the proposed method significantly reduces the number of real images needed during training, relying instead on images collected in a simulation environment, which are easier to obtain at low cost. It improves the training efficiency and generalization of the model, and it is of great significance for studying how to transfer a target recognition model learned on the ground to space.

Description and Definition of Image Recognition Transfer Learning
Knowledge Graph-Based Image Recognition Transfer Learning Method
Results
Conclusion
