Abstract

In recent years, with advances in sensors, communication networks, and deep learning, drones have been widely used for object detection, tracking, and positioning. However, task execution is often inefficient, and some complex algorithms still rely on large ground servers, which is unacceptable in rescue and traffic-scheduling tasks. Designing fast algorithms that can run on the onboard computer can effectively address this problem. In this paper, an object detection and localization system for drones is proposed. We combine ST-YOLO, an improved object detection algorithm based on YOLOX and Swin Transformer, with a visual positioning algorithm, and deploy the system on the onboard computer using TensorRT to detect and locate objects during flight. Field experiments show that the proposed system and algorithms are effective.
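The abstract does not detail the visual positioning step, so the following is only a minimal sketch of one common approach for drones: back-projecting a detected pixel onto the ground plane with a pinhole-camera model, given the camera intrinsics and the drone pose. The function name, parameters, and the flat-ground assumption are illustrative and not taken from the paper.

```python
# Hedged sketch: turning a 2D detection into a ground coordinate under a
# pinhole-camera, flat-ground assumption. The paper's actual positioning
# algorithm may differ; all names and values here are illustrative.
import numpy as np

def pixel_to_ground(u, v, K, R_wc, t_wc, ground_z=0.0):
    """Back-project pixel (u, v) onto the plane z = ground_z.

    K    : 3x3 camera intrinsic matrix.
    R_wc : 3x3 rotation of the camera frame expressed in the world frame.
    t_wc : camera position in the world frame (e.g. from GPS/IMU).
    """
    # Viewing ray in camera coordinates from the pinhole model.
    ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])
    # Rotate the ray into the world frame.
    ray_world = R_wc @ ray_cam
    # Intersect the ray t_wc + s * ray_world with the plane z = ground_z.
    s = (ground_z - t_wc[2]) / ray_world[2]
    return t_wc + s * ray_world

# Example: camera 50 m above the origin, looking straight down.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R_wc = np.array([[1.0,  0.0,  0.0],
                 [0.0, -1.0,  0.0],
                 [0.0,  0.0, -1.0]])  # camera z-axis points toward the ground
t_wc = np.array([0.0, 0.0, 50.0])
print(pixel_to_ground(400.0, 300.0, K, R_wc, t_wc))  # ~[5.0, -3.75, 0.0]
```

In a full pipeline, (u, v) would come from the center of an ST-YOLO bounding box produced by the TensorRT-deployed detector, and R_wc, t_wc from the drone's navigation state at the corresponding frame timestamp.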
