Abstract
Computation offloading is one of the primary technological enablers of the Internet of Things (IoT), as it helps address individual devices' resource restrictions. In the past, offloading would always utilise remote cloud infrastructures, but the increasing volume of IoT data traffic and the real-time response requirements of modern and future IoT applications have led to the adoption of the edge computing paradigm, where data is processed at the edge of the network. The decision as to whether cloud or edge resources will be utilised is typically taken at the design stage, based on the type of the IoT device. Yet the conditions that determine the optimality of this decision, such as the arrival rate, nature and sizes of the tasks, and, crucially, the real-time condition of the networks involved, keep changing. At the same time, the energy consumption of IoT devices is usually a key requirement, and it is affected primarily by the time it takes to complete tasks, whether for the actual computation or for offloading them through the network. Here, we model the expected time and energy costs of the different options of offloading a task to the edge or the cloud, as well as of carrying it out on the device itself. We use this model to allow the device to take the offloading decision dynamically as each new task arrives, based on the available information on the network connections and the states of the edge and the cloud. Having extended EdgeCloudSim to support such dynamic decision making, we are able to compare this approach against IoT-first, edge-first, cloud-only, random and application-oriented probabilistic strategies. Our simulations on four different types of IoT applications show that allowing customisation and dynamic offloading decision support can drastically improve the response time of time-critical and small-size applications, as well as the energy consumption not only of the individual IoT devices but of the system as a whole. This paves the way for future IoT devices that optimise their application response times, as well as their own energy autonomy and overall energy efficiency, in a decentralised and autonomous manner.
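To make the dynamic decision concrete, the sketch below shows one possible form it could take: each arriving task is assigned to whichever of the three options (device, edge, cloud) currently minimises a weighted combination of its expected time and energy costs. This is an illustrative assumption, not the paper's exact policy or the EdgeCloudSim API; all names and numbers are hypothetical.

```python
# A minimal sketch of a per-task dynamic offloading decision.
# All names, rates, and cost figures are illustrative assumptions,
# not the paper's actual model or the EdgeCloudSim API.

from dataclasses import dataclass

@dataclass
class Option:
    name: str            # "local", "edge", or "cloud"
    exp_time_s: float    # expected task completion time (s)
    exp_energy_j: float  # expected energy cost to the IoT device (J)

def choose_offload_target(options, time_weight=0.5):
    """Pick the option minimising a weighted sum of expected time and
    energy; the weight reflects how time-critical the application is
    (an assumed scalarisation, one of several possible policies)."""
    w = time_weight
    return min(options, key=lambda o: w * o.exp_time_s + (1 - w) * o.exp_energy_j)

# Example: a task arrives; expected costs are estimated from the current
# network and server state (the values below are made up for illustration).
options = [
    Option("local", exp_time_s=2.0, exp_energy_j=1.5),  # compute on device
    Option("edge",  exp_time_s=0.6, exp_energy_j=0.4),  # offload to edge
    Option("cloud", exp_time_s=1.1, exp_energy_j=0.7),  # offload to cloud
]
print(choose_offload_target(options, time_weight=0.8).name)  # -> "edge"
```

A per-application time weight lets time-critical applications favour response time over energy, in line with the customisation discussed above.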
Highlights
As a result of their resource restrictions, Internet of Things (IoT) devices typically rely on the storage, communication, and most significantly, computation resources of remote cloud infrastructures, for example to run computationally intensive artificial intelligence algorithms
IoT applications are becoming increasingly demanding in terms of real-time response requirements, and at the same time, the volume of data they produce is growing dramatically
We model the network carrying data from the IoT device to the target device separately from the one carrying the response back, since the average input and output data sizes differ, and hence so do the M/M/1 parameters of each direction (see the worked delay terms below)
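The following is a minimal worked form of the resulting network delay, assuming each direction is an M/M/1 queue whose service rate follows from the link bandwidth and the mean data size in that direction. The notation is generic M/M/1 notation, assumed for illustration rather than lifted from the paper.

```latex
% Worked delay terms for the per-direction M/M/1 links (requires amsmath).
% Assumed notation: B = link bandwidth, \bar{s} = mean data size,
% \lambda = task arrival rate, \mu = link service rate.
\begin{align}
  \mu_{\mathrm{up}}   &= B_{\mathrm{up}} / \bar{s}_{\mathrm{in}}, &
  \mu_{\mathrm{down}} &= B_{\mathrm{down}} / \bar{s}_{\mathrm{out}}, \\
  T_{\mathrm{net}} &= \frac{1}{\mu_{\mathrm{up}} - \lambda_{\mathrm{up}}}
                    + \frac{1}{\mu_{\mathrm{down}} - \lambda_{\mathrm{down}}},
  \qquad \lambda < \mu \text{ on each link for stability.}
\end{align}
```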
Summary
As a result of their resource restrictions, Internet of Things (IoT) devices typically rely on the storage, communication, and, most significantly, computation resources of remote cloud infrastructures, for example to run computationally intensive artificial intelligence algorithms. This traditional IoT-cloud approach has worked well in the first years of IoT, but is unlikely to be able to efficiently meet the requirements of future IoT applications [1, 2]. Pushing the computations and data to the cloud from IoT devices that have limited bandwidth or are connected to the cloud through unreliable networks costs IoT services in terms of response time and availability. We therefore let each device decide dynamically, per task, whether to compute locally or offload to the edge or the cloud, based on the expected time and energy costs of each option. Our results show that both response time and energy consumption improve considerably, not only at the level of each individual application but also at the global system level, where total energy consumption is reduced as well