Abstract

Mobile edge computing is a distributed computing paradigm that brings computation and data storage closer to where they are needed, improving response times and saving bandwidth in a dynamic mobile networking environment. Despite improvements in network technology, data centers cannot always guarantee acceptable transfer rates and response times, which can be a critical requirement for many applications. The aim of mobile edge computing is to move computation away from data centers towards the edge of the network, exploiting smart objects, mobile phones, or network gateways to perform tasks and provide services on behalf of the cloud. In this paper, we design a task offloading scheme for the mobile edge network that handles task distribution, offloading, and management by applying deep reinforcement learning. Specifically, we formulate the task offloading problem as a multi-agent reinforcement learning problem. The decision-making process of each agent is modeled as a Markov decision process, and a deep Q-learning approach is applied to cope with the large state and action spaces. To evaluate the performance of the proposed scheme, we develop a simulation environment for the mobile edge computing scenario. Our preliminary evaluation results with a simplified multi-armed bandit model indicate that the proposed solution provides lower latency for computationally intensive tasks in the mobile edge network and outperforms a naive task offloading method.
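
To illustrate the simplified multi-armed bandit formulation mentioned above, the sketch below shows an epsilon-greedy agent choosing between local execution and offloading to an edge server so as to minimize observed task latency. The action names, latency model, and parameter values are illustrative assumptions, not the authors' implementation or their full deep Q-learning scheme.

```python
import random

# Hypothetical sketch of a multi-armed bandit offloading agent (assumed setup,
# not the paper's implementation): each "arm" is an execution site, and the
# reward signal is the negative of the observed task latency.

ACTIONS = ["local", "edge_server_1", "edge_server_2"]  # candidate execution sites


class EpsilonGreedyOffloader:
    """Selects an execution site to minimize the running-mean task latency."""

    def __init__(self, actions, epsilon=0.1):
        self.actions = actions
        self.epsilon = epsilon
        self.counts = {a: 0 for a in actions}         # times each action was tried
        self.avg_latency = {a: 0.0 for a in actions}  # running mean latency per action

    def select_action(self):
        # Explore with probability epsilon; otherwise exploit the lowest-latency action.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return min(self.actions, key=lambda a: self.avg_latency[a])

    def update(self, action, latency):
        # Incremental update of the mean latency estimate for the chosen action.
        self.counts[action] += 1
        n = self.counts[action]
        self.avg_latency[action] += (latency - self.avg_latency[action]) / n


def simulated_latency(action):
    # Toy latency model (assumption): local execution is slow but stable,
    # offloading is faster on average but noisier due to the wireless link.
    if action == "local":
        return random.gauss(100.0, 5.0)  # milliseconds
    return random.gauss(60.0, 20.0)      # milliseconds


if __name__ == "__main__":
    agent = EpsilonGreedyOffloader(ACTIONS)
    for _ in range(1000):
        a = agent.select_action()
        agent.update(a, simulated_latency(a))
    print({a: round(lat, 1) for a, lat in agent.avg_latency.items()})
```

In the paper's full scheme, this bandit would be replaced by a deep Q-network whose state includes task and network conditions, but the same explore/exploit structure applies.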
