Abstract

Driven by the new generation of information technology represented by 5G, new models and businesses such as smart logistics, the industrial Internet, and intelligent transportation have emerged in rapid succession, opening the door to the intelligent interconnection of all things. However, because IoT sensor devices are mainly responsible for data acquisition and transmission, they are limited in computing and storage capability, and expanding the performance of devices at the network edge has become a focus of attention. To address the high energy consumption, long delays, and high task failure rates of traditional IoT edge computing methods, this paper introduces deep reinforcement learning to optimize computation offloading at the IoT edge. We model a single-edge-server, multi-user scenario and design an objective function that jointly considers task delay and task failure rate as the optimization goal. To cope with the state-space dimension explosion that limits traditional reinforcement learning, we further propose a computation task offloading method based on a deep Q-network (DQN). Simulation results show that the proposed method offers advantages in delay and task success rate under different numbers of IoT devices.
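As a rough illustration of the approach the abstract describes, the sketch below trains a small DQN to choose between local execution and offloading to a single edge server, with a reward that penalizes both task delay and task failure. The state layout, deadline, reward weights, and toy environment dynamics are assumptions made for the example, not the paper's exact formulation.

```python
# Minimal DQN offloading sketch (illustrative assumptions, not the paper's model).
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim

STATE_DIM = 3     # assumed state: [task size, local CPU load, channel quality]
N_ACTIONS = 2     # 0 = execute locally, 1 = offload to the edge server

class QNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, N_ACTIONS),
        )

    def forward(self, x):
        return self.net(x)

def step(state, action):
    """Toy environment: returns (reward, next_state).

    The reward jointly penalizes task delay and task failure (deadline miss),
    mirroring the combined objective described in the abstract.
    """
    task_size, local_load, channel = state
    if action == 0:                       # local execution
        delay = task_size * (1.0 + local_load)
    else:                                 # offload: transmission + edge compute
        delay = task_size / max(channel, 0.1) + 0.3 * task_size
    failed = 1.0 if delay > 1.5 else 0.0  # assumed deadline of 1.5 time units
    reward = -delay - 5.0 * failed        # failure weight of 5.0 is an assumption
    next_state = [random.random(), random.random(), random.random()]
    return reward, next_state

q_net, target_net = QNet(), QNet()
target_net.load_state_dict(q_net.state_dict())
optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
buffer = deque(maxlen=10_000)
gamma, epsilon = 0.99, 0.1

state = [random.random(), random.random(), random.random()]
for t in range(2000):
    # Epsilon-greedy action selection over the two offloading choices.
    if random.random() < epsilon:
        action = random.randrange(N_ACTIONS)
    else:
        with torch.no_grad():
            action = q_net(torch.tensor(state)).argmax().item()
    reward, next_state = step(state, action)
    buffer.append((state, action, reward, next_state))
    state = next_state

    if len(buffer) >= 64:
        batch = random.sample(buffer, 64)
        s, a, r, s2 = zip(*batch)
        s, s2 = torch.tensor(s), torch.tensor(s2)
        a, r = torch.tensor(a), torch.tensor(r)
        q = q_net(s).gather(1, a.unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            target = r + gamma * target_net(s2).max(1).values
        loss = nn.functional.mse_loss(q, target)
        optimizer.zero_grad(); loss.backward(); optimizer.step()

    if t % 200 == 0:                      # periodic target-network sync
        target_net.load_state_dict(q_net.state_dict())
```

The key design point is that a single learned Q-function replaces the tabular value function of traditional reinforcement learning, which avoids enumerating the full state space as the number of devices and task configurations grows.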
