Abstract

With the rapid development of Internet of Things (IoT) techniques, IoT devices equipped with sensors have been widely deployed in smart buildings and environments, and the application scenarios of IoT in these settings continue to expand. However, due to limitations in computation, storage, and battery capacity, IoT devices cannot process all tasks locally and need to offload some of them to edge servers, typically deployed at base stations (BSs). In addition, unmanned aerial vehicles (UAVs), with their controllable mobility and flexibility, have been recognized as a promising means of assisting communication in emergency scenarios. In this paper, we investigate UAV-assisted offloading for IoT in smart buildings and environments. We formulate the offloading problem with the goal of jointly minimizing long-term energy consumption and queue length. Because the solution space is extremely large and the problem targets a long-term optimization objective, solving it poses several challenges. To address them, we reformulate it as a Markov decision process (MDP)-based offloading problem and propose the UAV-assisted task offloading (UTO) approach based on deep reinforcement learning (DRL). Our UTO approach copes well with the challenges posed by the high-dimensional and continuous state and action spaces. We carry out a series of comparison experiments against both DRL and non-DRL algorithms, and the results validate the performance of the proposed UTO approach.
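To make the MDP framing concrete, the sketch below models a single IoT device choosing, at each step, whether to process a task locally or offload it to the UAV-carried edge server, with a reward that jointly penalizes energy use and queue backlog. All numeric parameters (arrival rate, per-task energy costs, queue weight) are illustrative assumptions, not the paper's actual system model, and the environment is far simpler than the high-dimensional continuous spaces the UTO approach targets.

```python
import random

class OffloadEnv:
    """Toy MDP for UAV-assisted task offloading (illustrative sketch only).

    State: the device's queue length. Action: process locally or offload.
    Reward: negative weighted sum of energy consumption and queue backlog,
    mirroring the paper's two minimization objectives.
    """

    LOCAL, OFFLOAD = 0, 1  # assumed two-action space for illustration

    def __init__(self, seed=0):
        self.rng = random.Random(seed)
        self.queue = 0  # number of pending tasks at the IoT device

    def reset(self):
        self.queue = 0
        return self.queue

    def step(self, action):
        # Bernoulli task arrivals; the 0.6 arrival rate is an assumption.
        if self.rng.random() < 0.6:
            self.queue += 1
        energy = 0.0
        if self.queue > 0:
            # Assumed per-task energy: local computation costs more
            # than transmitting the task to the UAV edge server.
            energy = 3.0 if action == self.LOCAL else 1.0
            self.queue -= 1
        # Jointly penalize energy consumption and queue length.
        reward = -(energy + 0.5 * self.queue)
        return self.queue, reward

def run_policy(policy, episodes=200, horizon=50):
    """Average return of a stationary policy mapping state -> action."""
    total = 0.0
    for ep in range(episodes):
        env = OffloadEnv(seed=ep)
        state = env.reset()
        for _ in range(horizon):
            state, reward = env.step(policy(state))
            total += reward
    return total / episodes
```

A DRL agent such as the paper's UTO approach would replace the hand-written `policy` with a learned mapping; here one can compare the two fixed baselines, e.g. `run_policy(lambda s: OffloadEnv.OFFLOAD)` versus `run_policy(lambda s: OffloadEnv.LOCAL)`, to see the energy trade-off the reward encodes.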
