Abstract

With the rapid development of Unmanned Aerial Vehicles (UAVs) in recent years, UAV-mounted Mobile Edge Computing (MEC) systems are increasingly used to provide broadband connectivity and emergency communications in areas without terrestrial network coverage. However, the onboard energy of a UAV is limited, so providing long-term computing services to ground User Equipments (UEs) remains a significant challenge. In this paper, UAVs serve as mobile base stations that offer MEC services to UEs. Unlike existing solutions that mainly address UAV path planning via convex optimization, we propose a Multi-Agent Path planning (MAP) scheme based on deep reinforcement learning. We aim to minimize the total energy consumption of the UAVs while completing the UEs' offloading tasks. Fairness is also taken into account to ensure balanced UE offloading and balanced UAV loads. To this end, we formulate the optimization problem and design the state space, action space, and reward function, modeling each UAV with a Deep Neural Network (DNN). Extensive simulation experiments show that the proposed scheme outperforms other benchmark schemes.
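The abstract does not give the concrete state, action, or reward design, so the following is only a minimal sketch of how one DNN-based agent per UAV might be set up in a value-based multi-agent scheme of this kind. The class and function names (UAVQNetwork, select_action, reward), the assumed state layout (UAV position, residual energy, per-UE task backlog), the five flight actions, and the reward weights are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed design, not the paper's exact formulation):
# one Q-network per UAV agent over a discrete set of flight actions.
import torch
import torch.nn as nn

class UAVQNetwork(nn.Module):
    """Per-UAV Q-network mapping the local state to Q-values over flight actions."""
    def __init__(self, state_dim: int, num_actions: int, hidden: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def select_action(q_net: UAVQNetwork, state: torch.Tensor, epsilon: float) -> int:
    """Epsilon-greedy action choice, as is typical for value-based DRL agents."""
    num_actions = q_net.net[-1].out_features
    if torch.rand(1).item() < epsilon:
        return int(torch.randint(0, num_actions, (1,)).item())
    with torch.no_grad():
        return int(torch.argmax(q_net(state)).item())

def reward(energy_used: float, task_bits_served: float, fairness_index: float,
           w_e: float = 1.0, w_t: float = 0.5, w_f: float = 0.5) -> float:
    """Illustrative reward: penalize UAV energy use, reward served offloading
    traffic, and reward a fairness term (e.g. Jain's index over UE service
    amounts and UAV loads). The weights are placeholders, not paper values."""
    return -w_e * energy_used + w_t * task_bits_served + w_f * fairness_index

# Example: a 2-UAV, 4-UE scenario with 5 actions (north/east/south/west/hover).
num_ues, num_actions = 4, 5
state_dim = 3 + num_ues          # x, y, residual energy, per-UE task backlog
agents = [UAVQNetwork(state_dim, num_actions) for _ in range(2)]
dummy_state = torch.zeros(state_dim)
actions = [select_action(q, dummy_state, epsilon=0.1) for q in agents]
print(actions)
```

In such a setup, each UAV would keep its own network and choose actions from its local observation, while the shared reward terms (energy, served traffic, fairness) couple the agents toward the joint objective described in the abstract.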
