Abstract
Multiple-object detection, localization, and tracking are desirable in many areas and applications. As deep learning has matured and drawn the attention of the computer vision community, a plethora of networks now achieve excellent accuracy in detecting multiple objects in an image; tracking and localizing those objects, however, remain difficult processes that require significant effort. This work describes an optical camera-based target detection, tracking, and localization solution for Unmanned Aerial Vehicles (UAVs). A custom object detection model based on the well-known YOLOv4 network was developed, and its performance was compared with YOLOv4-Tiny, YOLOv4-608, and YOLOv7-Tiny. The target tracking algorithm we use is based on Deep SORT, providing state-of-the-art tracking. The proposed localization approach can accurately determine the position of ground targets identified by the custom object detection model. Moreover, a global tracker is implemented that fuses localization information from up to four UAV cameras at a time. Finally, a guidance approach is described that provides real-time movement commands for the UAV to follow and cover a designated target. The complete system was evaluated in Gazebo with up to four UAVs using Software-In-The-Loop (SITL) simulation.
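The abstract does not detail the localization method, but a common way to obtain the ground position of a detected target from a UAV camera is to back-project the detection's pixel through the camera model and intersect the resulting ray with a flat ground plane. The sketch below illustrates that idea; the function name, camera parameters, and flat-ground assumption are illustrative and not taken from the paper.

```python
import numpy as np

def localize_ground_target(pixel, K, R_world_cam, t_world_cam):
    """Project an image pixel onto the z = 0 ground plane (illustrative sketch).

    pixel        : (u, v) centre of the detected bounding box
    K            : 3x3 camera intrinsic matrix
    R_world_cam  : 3x3 rotation of the camera frame expressed in world coordinates
    t_world_cam  : 3-vector camera position in world coordinates
    """
    # Back-project the pixel into a viewing ray in camera coordinates.
    uv1 = np.array([pixel[0], pixel[1], 1.0])
    ray_cam = np.linalg.inv(K) @ uv1
    # Rotate the ray into world coordinates.
    ray_world = R_world_cam @ ray_cam
    # Intersect the ray with the ground plane z = 0.
    scale = -t_world_cam[2] / ray_world[2]
    return t_world_cam + scale * ray_world

# Example: camera 20 m above the ground, looking straight down.
K = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.array([[1.0,  0.0,  0.0],
              [0.0, -1.0,  0.0],
              [0.0,  0.0, -1.0]])   # camera z-axis pointing down
t = np.array([0.0, 0.0, 20.0])
print(localize_ground_target((320.0, 240.0), K, R, t))  # target roughly directly below the UAV
```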