Abstract

Landing an Unmanned Aerial Vehicle (UAV) aboard a patrol boat is a challenging task due to unpredictable ship movement, making its automation essential. Automated solutions rely on UAV pose estimates, usually obtained from onboard sensors. Given the limitations and power consumption of onboard sensors, we propose an off-board pose estimation method. Relying on RGB images captured by a camera on the ship deck, our method directly estimates the UAV pose with respect to the landing site, removing the dependency on any additional sensors. We propose a model-based pose tracking method built on a Rao-Blackwellized Particle Filter (RBPF): the translational motion of the UAV is approximated by a set of hypotheses, and a distinct rotational distribution is maintained for each translation hypothesis using an autoencoder network trained on our UAV model. This reduces the sample search space from 6D to 3D. Furthermore, we propose a particle weighting process that combines contributions from the rotation likelihood distribution and a detector-based likelihood. The neural networks are trained and the proposed method is validated in a graphically realistic simulator. The results show that our weighting process improves on the baseline and other state-of-the-art approaches. Furthermore, our approach successfully handles objects with geometric symmetries.
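To make the weighting step concrete, below is a minimal sketch of how an RBPF particle weight could combine a codebook-based rotation likelihood with a detector-based likelihood. This is an illustration under assumed interfaces, not the paper's implementation: the names crop_fn, encode_fn, codebook, and detect_fn are hypothetical, and cosine similarity against rendered-view codes is one common way to score rotations with an autoencoder.

```python
# Illustrative sketch (assumed names, not the authors' code) of the RBPF
# weighting step: each particle carries a 3D translation hypothesis, its
# rotation is scored against an autoencoder codebook of rendered
# orientations, and the final weight fuses rotation and detector likelihoods.
import numpy as np

def weight_particles(translations, crop_fn, encode_fn, codebook, detect_fn):
    """Compute normalized RBPF particle weights.

    translations : (N, 3) array, one 3D translation hypothesis per particle
    crop_fn      : translation -> image crop around the projected hypothesis
    encode_fn    : image crop -> latent code from the trained autoencoder
    codebook     : (K, D) latent codes for a discretized set of rotations
    detect_fn    : image crop -> detector-based likelihood in [0, 1]
    """
    weights = np.empty(len(translations))
    cb_norms = np.linalg.norm(codebook, axis=1)
    for i, t in enumerate(translations):
        crop = crop_fn(t)
        z = encode_fn(crop)
        # Cosine similarity against every rotation code gives a per-particle
        # rotation likelihood distribution over the discretized rotations.
        sims = codebook @ z / (cb_norms * np.linalg.norm(z) + 1e-12)
        rot_lik = np.clip(sims, 0.0, None).max()   # best-matching rotation bin
        det_lik = detect_fn(crop)                  # detector-based likelihood
        weights[i] = rot_lik * det_lik             # combined particle weight
    total = weights.sum()
    # Normalize for resampling; fall back to uniform if all weights vanish.
    return weights / total if total > 0 else np.full(len(weights), 1.0 / len(weights))
```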
