Abstract

Fog computing has emerged as an extension of the existing cloud infrastructure for providing latency-aware and highly scalable services to geographically distributed end devices. The addition of the fog layer to the cloud computing paradigm helps improve the quality of service (QoS) of time-critical and delay-sensitive applications. With the continuous increase in large-scale fog network deployments, energy efficiency has become a significant issue in fog computing, both to reduce service cost and to protect the environment. A plethora of research has been conducted to reduce energy consumption in fog computing, focusing mainly on the scheduling of incoming jobs, while node-level mechanisms have largely been neglected. Cache placement is a critical issue in fog networks for efficient content distribution to clients, and it requires the simultaneous consideration of many factors, including the quality of the network connection, the demand for content, and users’ activities. In this paper, a popularity-based caching mechanism for content delivery fog networks is proposed. In this context, two energy-aware mechanisms, i.e., content filtration and load balancing, are applied. In the proposed approach, popular contents are identified using a random distribution and categorized into three classes. After the file popularity is determined, an active fog node is selected based on its number of neighbors, energy level, and operational power. The popular content is then cached on the active node using a filtration mechanism. Moreover, a load-balancing algorithm is proposed to increase the overall efficiency of the cached fog network. The evaluation of the proposed approach shows promising results in terms of energy consumption and latency. The proposed scheme consumes 92.6% and 82.7% less energy than the no-caching and simple caching mechanisms, respectively. Similarly, improvements of 85.29% and 67.4% in delay are observed when using the advanced caching scheme against the no-caching and simple caching techniques, respectively.
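
The popularity-classification and active-node-selection steps summarized above can be pictured with a short sketch. The Python snippet below is an illustrative approximation under stated assumptions, not the authors' implementation: the `FogNode` fields, the weighted linear score with weights `w_n`, `w_e`, `w_p`, and the popularity thresholds `hot` and `warm` are all hypothetical choices made here for clarity.

```python
# Illustrative sketch only: names, weights, and thresholds are assumptions,
# not taken from the paper.
from dataclasses import dataclass
import random

@dataclass
class FogNode:
    node_id: int
    neighbors: int     # number of neighboring fog nodes
    energy: float      # remaining energy level (hypothetical units)
    op_power: float    # operational power draw (hypothetical units)

def classify_popularity(counts, hot=0.6, warm=0.2):
    """Split contents into three popularity classes relative to the
    most-requested item (thresholds are illustrative assumptions)."""
    peak = max(counts.values())
    classes = {}
    for content, n in counts.items():
        ratio = n / peak
        if ratio >= hot:
            classes[content] = "high"
        elif ratio >= warm:
            classes[content] = "medium"
        else:
            classes[content] = "low"
    return classes

def select_active_node(nodes, w_n=0.4, w_e=0.4, w_p=0.2):
    """Score each node on neighbor count and remaining energy (higher is
    better) and on operational power (lower is better); pick the best."""
    return max(nodes, key=lambda n: w_n * n.neighbors
                                    + w_e * n.energy
                                    - w_p * n.op_power)

if __name__ == "__main__":
    random.seed(1)
    # Simulate content requests drawn from a skewed (Zipf-like) distribution,
    # standing in for the paper's random-distribution popularity step.
    contents = [f"file{i}" for i in range(10)]
    weights = [1.0 / (i + 1) for i in range(10)]
    requests = random.choices(contents, weights=weights, k=1000)
    counts = {c: requests.count(c) for c in contents}
    print(classify_popularity(counts))

    nodes = [FogNode(i, random.randint(1, 8),
                     random.uniform(50, 100), random.uniform(5, 20))
             for i in range(5)]
    print("active node:", select_active_node(nodes).node_id)
```

A weighted linear score is only one plausible way to combine the three node attributes named in the abstract; the paper itself may rank candidate nodes differently.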
