Abstract

The autonomous flight of an unmanned aerial vehicle (UAV) refers to creating a new flight route after self-recognition and judgment when an unexpected situation occurs during flight. Because a UAV can fly at speeds above 60 km/h, obstacle recognition and avoidance must run in real time. In this paper, we propose a method to recognize objects quickly and accurately by making effective use of the hardware resources of the small computers mounted on industrial UAVs. Since resizing an image reduces its pixel count, filtering and object resizing were performed according to altitude, enabling fast detection and avoidance. To this end, altitudes up to 60 m were subdivided into 20 m intervals, and objects unnecessary for detection were filtered out with deep learning methods. In the 40 m to 60 m band, the average recognition speed increased by 38% without compromising object-detection accuracy.
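The altitude-band scheme described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the band boundaries (20 m intervals up to 60 m) come from the abstract, while the function names and the example resize factors are assumptions chosen only to show the shape of the logic.

```python
# Illustrative sketch of altitude-banded image resizing.
# The 20 m bands up to 60 m follow the paper's description;
# the resize factors below are hypothetical placeholders.

def altitude_band(altitude_m: float) -> int:
    """Map an altitude in metres to a band index:
    0 -> 0-20 m, 1 -> 20-40 m, 2 -> 40-60 m (and above)."""
    if altitude_m < 0:
        raise ValueError("altitude must be non-negative")
    return min(int(altitude_m // 20), 2)

# Hypothetical per-band resize factors: at higher altitude objects
# occupy fewer pixels, so the image is downscaled less aggressively
# to keep enough pixels for the detector.
RESIZE_FACTORS = {0: 0.5, 1: 0.75, 2: 1.0}

def resize_factor(altitude_m: float) -> float:
    """Choose the resize factor for the current flight altitude."""
    return RESIZE_FACTORS[altitude_band(altitude_m)]
```

For example, a frame captured at 45 m falls into band 2 and would be processed at full resolution under these placeholder factors, whereas a frame at 10 m would be halved before detection.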

Highlights

  • The autonomous flight of an Unmanned Aerial Vehicle (UAV) refers to creating a new flight route after self-recognition and judgment when an unexpected situation occurs during the flight

  • Because the images were photographed from the drone’s perspective, many objects were detected at once; in particular, objects such as trees, apartment complexes, and general housing complexes accounted for the majority of recognitions

  • When backgrounds overlapped at long distances, accuracy could suffer, but this did not prevent the drones from recognizing objects in front of them, indicating that autonomous flight was possible

Summary

Introduction

The autonomous flight of an Unmanned Aerial Vehicle (UAV) refers to creating a new flight route after self-recognition and judgment when an unexpected situation occurs during the flight. A UAV recognizes and judges its surroundings by receiving data from various sensors, such as a single camera [1–3], a depth camera [4–6], LiDAR [7,8], an ultrasonic sensor [9,10], or radar [11], in order to measure distances or avoid obstacles through object recognition. Vision-based, ultrasonic, and infrared sensors can fail under adverse temperature or illumination conditions and may then lead to collisions, so highly reliable industrial drones typically employ a complex multi-sensor strategy. As computer vision and machine learning techniques progress, vision-sensor-based autonomous flight is becoming increasingly widespread. The drone sends data from sensors such as the three-axis gyroscope, three-axis accelerometer, magnetometer, GNSS, or barometer to the Flight Control Computer (FCC), which calculates the drone’s attitude and keeps it stable in the air [12]. A separate Companion Computer (CC) is mounted to perform the computing for image processing and autonomous flight.
