Abstract

The drone sector is seeing surging demand for advanced models tailored to critical applications such as disaster management and intelligent warehouse deliveries. Simulation-based experiments with virtual drone navigation are considered best practice before deploying physical models. However, current state-of-the-art virtual drone navigation systems lack accuracy and add considerable simulation time. To mitigate these issues, this paper introduces a deep reinforcement learning-based drone agent designed to navigate autonomously within a constrained virtual environment. The proposed agent employs realistic drone physics to ensure plausible flight within the virtual environment. The work uniquely combines and optimizes both control algorithms and physical dynamics, making the model more robust and versatile than existing alternatives. Integrating curiosity-driven learning with physics-based modeling potentially increases the model's readiness for real-world application compared to purely theoretical approaches. Extensive simulation results validate the speed and accuracy of the proposed scheme against baseline works. The trained agent is robust and versatile, handling the numerous targets and obstacles encountered in human environments.
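The abstract names curiosity-driven learning but does not detail its formulation. One common variant rewards the agent in proportion to how poorly a learned forward model predicts the next state, encouraging exploration of unfamiliar regions. The sketch below illustrates that idea with a hypothetical linear forward model and toy state/action sizes; none of these specifics come from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(state, action, W):
    # Linear forward model: predicts the next state from (state, action).
    # A real agent would learn W (or a neural network) from experience.
    return np.concatenate([state, action]) @ W

def intrinsic_reward(state, action, next_state, W, scale=0.5):
    # Curiosity bonus: squared prediction error of the forward model.
    # Poorly predicted transitions (novel states) earn a larger bonus.
    pred = forward_model(state, action, W)
    return scale * np.sum((pred - next_state) ** 2)

# Toy transition: 3-D position state, 2-D action (hypothetical sizes).
W = rng.normal(size=(5, 3))
s, a, s_next = rng.normal(size=3), rng.normal(size=2), rng.normal(size=3)
r_int = intrinsic_reward(s, a, s_next, W)
```

In training, this bonus would be added to the task reward (e.g. progress toward a target), so the agent balances goal-seeking with exploring states its model cannot yet predict.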
