Abstract

Autonomous drone navigation has advanced significantly through the integration of modern computer vision and reinforcement learning techniques. In this study, we propose a comprehensive framework that combines the adaptability of the TD3 reinforcement learning algorithm, the flexibility of the ROS 2 platform, and the robustness of the YOLOv3 object detection model. Leveraging YOLOv3's real-time object detection capabilities, our system reliably recognizes and reacts to changing environmental obstacles, improving the UAV's situational awareness. Combined with the TD3 network, the ROS 2-based navigation framework enables rapid decision-making, allowing the drone to move autonomously toward predetermined goal points while effectively avoiding obstructions. We demonstrate the effectiveness of the proposed approach through thorough experimentation in simulated environments, highlighting its potential for practical deployment in applications ranging from autonomous delivery services to aerial surveillance. The outcomes demonstrate the integrated system's flexibility and resilience, and its capacity to handle challenging navigation scenarios while ensuring safe and efficient aerial operation. Our study contributes to the field of autonomous drone navigation by highlighting the vital role that integrated computer vision and machine learning techniques play in enabling intelligent and adaptable unmanned aerial vehicle operations.

Keywords— Autonomous drone, YOLOv3, Object detection, ROS 2, TD3, Reinforcement learning, Simulated environment, Aerial navigation, Obstacle avoidance, Computer vision, Unmanned aerial vehicles (UAVs), Goal-point navigation, Aerial surveillance
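The perception-to-control pipeline summarized in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the detector is stubbed in place of YOLOv3, the actor is a tiny deterministic MLP of the kind TD3 trains (shown here untrained, with random weights), and all names, dimensions, and the (distance, bearing) state encoding are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def detect_obstacles(frame):
    """Stub standing in for YOLOv3: returns (distance, bearing) of the
    nearest detected obstacle. A real system would run the detector on
    the onboard camera frame."""
    return np.array([5.0, 0.3])  # fixed placeholder detection

def build_state(frame, goal_vec):
    """State = nearest-obstacle features concatenated with the vector
    from the drone to the goal point (an assumed encoding)."""
    return np.concatenate([detect_obstacles(frame), goal_vec])

class Actor:
    """Two-layer MLP with tanh-bounded output, the deterministic actor
    shape used by TD3. Weights are random here; TD3 would train them
    against twin critics with delayed policy updates."""
    def __init__(self, state_dim, action_dim, hidden=16, max_action=1.0):
        self.w1 = rng.normal(0.0, 0.1, (state_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, action_dim))
        self.max_action = max_action

    def act(self, state):
        h = np.tanh(state @ self.w1)
        return self.max_action * np.tanh(h @ self.w2)

# One control step: camera frame -> detections -> state -> velocity command.
# Action is interpreted here as (forward velocity, yaw rate), an assumption;
# in a ROS 2 system this would be published as a velocity command message.
actor = Actor(state_dim=4, action_dim=2)
state = build_state(frame=None, goal_vec=np.array([10.0, -2.0]))
action = actor.act(state)
print(action.shape)  # → (2,)
```

In a deployed system this loop would run inside a ROS 2 node at the camera frame rate, with the bounded action guaranteeing that commanded velocities stay within the platform's limits.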
