To support the operation of unmanned aerial vehicles (UAVs) under visual flight rules (VFR), this article proposes a monocular approach for cloud detection using an electro-optical sensor. Cloud avoidance is motivated by several factors, including improved visibility for collision prevention and reduced risk of icing and turbulence. The described workflow is based on parallelized detection, tracking, and triangulation of image features, preceded by segmentation of clouds in the image. As output, the system generates a cloud occupancy grid of the aircraft’s vicinity, which can subsequently be used for cloud avoidance calculations. The proposed methodology was tested in simulation and in flight experiments. To develop the cloud segmentation methods, datasets were created, one of which was made publicly available and comprises 5488 labeled, augmented cloud images from a real flight experiment. The trained segmentation models, based on the YOLOv8 framework, separate clouds from the background even under challenging environmental conditions. To analyze the performance of the subsequent cloud position estimation stage, calculated cloud positions are compared against actual ones and feature evaluation metrics are applied. The investigations demonstrate the functionality of the approach, although challenges become apparent under real flight conditions.