Abstract

In the Internet of Things (IoT), things collect, relay, and collectively process information and take automated actions. With the growing complexity of the IoT domain and its architectures, converging computation and communication technologies has become a key challenge in meeting the stringent demands of advanced IoT use cases. The exponential increase in IoT devices, together with the strict communication and network constraints (such as latency and bandwidth) of advanced use cases including autonomous vehicles, eHealth, and the smart grid, makes the current IoT infrastructure difficult to rely on. Moreover, most IoT architectures proposed so far target a single use case and address either the communication or the computation dimension alone. In emerging IoT networks, however, a single architecture must support all computation layers and multiple use cases in order to handle the exponential growth in IoT applications cost-effectively and energy-efficiently. To address this challenge, we propose an optimal node selection framework that considers all three computation layers (edge, fog, and cloud) to balance load and optimize resource allocation in an IoT architecture. The proposed approach is evaluated through simulation. The results show how the framework can allocate the most suitable node to process each request while using a minimal number of nodes in the architecture and satisfying network and application requirements.
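The abstract does not specify the selection algorithm itself, so the following is only a minimal, hypothetical sketch of what three-layer node selection with constraint checking and node-count minimization might look like: each node (edge, fog, or cloud) advertises latency, bandwidth, and capacity, and requests are greedily assigned to feasible nodes, preferring nodes already in use so the number of active nodes stays small. All names and parameters here are illustrative assumptions, not the paper's actual framework.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    layer: str            # "edge", "fog", or "cloud" (assumed labels)
    latency_ms: float     # round-trip latency to the requester
    bandwidth_mbps: float
    capacity: int         # concurrent requests the node can serve
    load: int = 0         # requests currently assigned

@dataclass
class Request:
    max_latency_ms: float
    min_bandwidth_mbps: float

def select_node(nodes, request):
    """Pick a node meeting the request's latency/bandwidth constraints,
    reusing already-loaded nodes first to keep the active-node count low."""
    feasible = [n for n in nodes
                if n.load < n.capacity
                and n.latency_ms <= request.max_latency_ms
                and n.bandwidth_mbps >= request.min_bandwidth_mbps]
    if not feasible:
        return None  # no node satisfies the application requirements
    # Prefer nodes already in use (consolidation), then lowest latency.
    best = min(feasible, key=lambda n: (n.load == 0, n.latency_ms))
    best.load += 1
    return best

nodes = [
    Node("edge-1",  "edge",   5,  100, capacity=2),
    Node("fog-1",   "fog",   20,  500, capacity=10),
    Node("cloud-1", "cloud", 80, 1000, capacity=100),
]
# A latency-critical request can only be served at the edge;
# a relaxed request is consolidated onto the already-active edge node.
r1 = select_node(nodes, Request(max_latency_ms=10,  min_bandwidth_mbps=50))
r2 = select_node(nodes, Request(max_latency_ms=100, min_bandwidth_mbps=50))
```

In this toy policy, consolidation onto already-active nodes is what keeps the number of nodes used minimal; a real framework would likely solve this as a joint optimization rather than greedily.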
