Abstract

Safe Micro Aerial Vehicle (MAV) navigation requires detecting and avoiding obstacles, and expansion-based algorithms are effective for this purpose. However, accurate, real-time obstacle detection remains a fundamental challenge. Some traditional methods extract geometric features from images and apply geometric constraints to identify potential obstacles; others use machine-learning algorithms for object detection and classification, relying on features such as texture, shape, and context to distinguish obstacles from background clutter. The choice of approach depends on the specific requirements of the application, the complexity of the scene, and the available computational resources. Since real-world obstacles take the form of objects (e.g., persons, walls, pillars, trees, automobiles, and other structures), it is preferable to represent them as objects, in line with human comprehension. The objective of this study is therefore to build on previous research and address the issues above by extracting objects from the fisheye image using a panoptic deep-learning network. The extracted object regions are then used to identify obstacles with the novel area-based expansion rate we developed in a previous study. Compared with the existing method, the proposed method improves obstacle-detection accuracy by 10% and 18% when moving forward and to the right, respectively. In addition, because each object is handled as a single region rather than multiple regions, the obstacle-detection runtime for the forward and right directions is 15.71 and 25.5 times faster, respectively, and the number of required match points decreases by 49% and 55%.
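The exact area-based expansion rate is defined in the authors' earlier work and is not reproduced in this abstract. As a rough illustration of the general idea only (not the paper's formulation), the sketch below assumes per-object binary masks produced by the panoptic network and flags an object as an obstacle when its segmented area grows beyond a hypothetical threshold between consecutive frames; the function names and the threshold value are illustrative assumptions.

```python
import numpy as np

def area_expansion_rate(mask_prev: np.ndarray, mask_curr: np.ndarray) -> float:
    """Illustrative expansion rate: ratio of an object's segmented area in the
    current frame to its area in the previous frame (hypothetical definition)."""
    area_prev = float(np.count_nonzero(mask_prev))
    area_curr = float(np.count_nonzero(mask_curr))
    if area_prev == 0.0:
        return 0.0  # object not visible in the previous frame
    return area_curr / area_prev

def is_obstacle(mask_prev: np.ndarray, mask_curr: np.ndarray,
                threshold: float = 1.2) -> bool:
    """Treat the object as an approaching obstacle when its apparent area
    grows faster than an assumed threshold between consecutive frames."""
    return area_expansion_rate(mask_prev, mask_curr) > threshold
```

In this simplified view, an object whose mask area expands rapidly between frames is assumed to be closing in on the MAV, whereas a stable or shrinking area suggests it is not an immediate obstacle.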