Abstract

Actively tracking an arbitrary space noncooperative object with visual sensors remains a challenging problem. In this article, we provide an open-source benchmark for space noncooperative object visual tracking, comprising a simulated environment, an evaluation toolkit, and a position-based visual servoing (PBVS) baseline algorithm, which can facilitate research on this topic, especially for methods based on deep reinforcement learning. We also present an end-to-end active visual tracker based on deep Q-learning, named DRLAVT, which learns an approximately optimal policy taking only color or RGB-D images as input. To the best of the authors' knowledge, it is the first intelligent agent used for active visual tracking in the aerospace domain. Experimental results show that DRLAVT achieves excellent robustness and real-time performance compared with the PBVS baseline, benefiting from its carefully designed neural network and efficient reward function. In addition, the multiple-target training adopted in this article effectively guarantees the transferability of DRLAVT by forcing the agent to learn an optimal control policy with respect to the target's motion patterns.
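To make the end-to-end deep Q-learning setup concrete, the sketch below shows the general pattern of an image-based tracker of this kind: a convolutional network maps each camera frame to Q-values over a discrete set of chaser motion commands, and the agent acts greedily on those values. This is a minimal illustrative sketch, not the authors' DRLAVT implementation; the network shape, 84x84 input size, and action count are assumptions chosen for clarity.

```python
# Minimal sketch of a DQN-style policy for image-based active tracking.
# NOT the authors' DRLAVT architecture; layer sizes, action space, and
# input resolution are illustrative assumptions only.
import torch
import torch.nn as nn

class TrackerQNet(nn.Module):
    """Maps an RGB observation to Q-values over discrete chaser actions."""
    def __init__(self, n_actions: int = 11, in_channels: int = 3):
        super().__init__()
        self.features = nn.Sequential(          # standard DQN-style conv trunk
            nn.Conv2d(in_channels, 32, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512), nn.ReLU(),  # 7x7 feature map for 84x84 input
            nn.Linear(512, n_actions),              # one Q-value per motion command
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(obs))

# Greedy action selection from a (batched) 84x84 RGB frame.
net = TrackerQNet(n_actions=11)
obs = torch.rand(1, 3, 84, 84)          # placeholder camera frame
action = net(obs).argmax(dim=1)         # index of the best-valued command
```

For an RGB-D variant, the same trunk would simply take `in_channels=4`, with the depth map stacked as a fourth channel.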
