Abstract

In recent years, numerous studies have applied deep reinforcement learning (DRL) algorithms to vision-guided unmanned aerial systems. However, DRL struggles to train deep networks end to end because of its data inefficiency and the absence of direct supervision signals. This paper proposes a dimension-reduction scheme based on representation learning as the visual perception module: it compresses high-dimensional visual input while retaining the features relevant to UAV navigation. Combining such state representation learning with a DRL model effectively reduces the complexity of the neural network that DRL must train. Based on this scheme, we design three motion control models that use a monocular camera as the main sensor and train them to control UAVs in obstacle avoidance tasks in a simulated environment. Experiments show that all three models achieve strong obstacle avoidance ability after sufficient training. In addition, one of them also enables the monocular vision guidance system to avoid obstacles in the blind spots of side vision.
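The core idea above, compressing a camera frame into a compact state vector before handing it to a DRL policy, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the frame size, latent dimension, and the fixed random projection standing in for a trained representation-learning encoder are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

FRAME_SHAPE = (84, 84)   # grayscale monocular frame (assumed size)
LATENT_DIM = 32          # compact state vector fed to the DRL agent (assumed)

# Stand-in for the encoder half of a trained representation-learning model
# (e.g. an autoencoder); real weights would come from pretraining, not random
# initialization as done here for illustration.
W = rng.normal(scale=0.01, size=(LATENT_DIM, FRAME_SHAPE[0] * FRAME_SHAPE[1]))

def encode(frame: np.ndarray) -> np.ndarray:
    """Reduce a high-dimensional camera frame to a low-dimensional state."""
    flat = frame.reshape(-1).astype(np.float64)
    return np.tanh(W @ flat)  # bounded features, convenient as a DRL input

frame = rng.random(FRAME_SHAPE)  # fake camera image
state = encode(frame)
print(state.shape)  # the DRL policy now sees LATENT_DIM values, not 84*84
```

Because the policy network consumes only the 32-dimensional state instead of raw pixels, it can be far smaller, which is the complexity reduction the abstract refers to.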
