Abstract
Cloud users are growing impatient with delays in loading the content of web applications over the Internet, delays usually caused by the high latency of accessing cloud datacentres that are distant from the users. This is becoming a serious obstacle to consuming services and applications over a cloud-centric network. In the cloud, the workload is distributed across multiple layers, which further increases latency. Time-sensitive Internet of Things (IoT) applications and services typically run on a cloud platform across various virtual machines (VMs) and interact in highly complex ways, and they face difficulties in consolidating applications with heterogeneous workloads. Fog computing brings cloud services to the edge of the network, where computation, communication, and storage sit in proximity to the end user's edge devices. It therefore makes better use of network bandwidth, enriches mobility, and lowers latency, making it a convenient and more reliable platform for overcoming these cloud computing issues. In this manuscript, we propose a Fog-based Spider Web Algorithm (FSWA), a heuristic approach that reduces delay time (DT) and improves response time (RT) for workflows among the edge nodes of a fog network. The main purpose is to trace and locate the nearest fog node (f-node) for computation and thereby reduce latency across the nodes of the network. Reducing latency improves quality-of-service (QoS) parameters, smooths resource distribution, and increases service availability. Latency is an important factor in resource optimization for distributed computing environments, and fog computing offers much lower latency than cloud computing.
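The abstract does not spell out the internal steps of FSWA, but its core aim of locating the nearest fog node (f-node) for a computation can be illustrated with a minimal sketch, written here in Python under our own assumptions. The names FogNode, measured_latency_ms, and select_nearest_fog_node are hypothetical and do not come from the paper; the sketch simply assumes each candidate node reports a measured round-trip latency and its free capacity.

# Illustrative sketch only: pick the lowest-latency fog node that still has
# enough free capacity; FogNode, measured_latency_ms, and required_capacity
# are hypothetical names, not identifiers from the FSWA paper.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FogNode:
    node_id: str
    measured_latency_ms: float    # round-trip latency observed from the edge device
    available_capacity: float     # free compute capacity, in arbitrary units

def select_nearest_fog_node(nodes: List[FogNode],
                            required_capacity: float) -> Optional[FogNode]:
    # Keep only nodes that can actually host the task, then take the one
    # with the smallest measured latency; return None if no node qualifies
    # (the caller would then fall back to the distant cloud datacentre).
    candidates = [n for n in nodes if n.available_capacity >= required_capacity]
    if not candidates:
        return None
    return min(candidates, key=lambda n: n.measured_latency_ms)

fog_nodes = [
    FogNode("f1", measured_latency_ms=12.0, available_capacity=4.0),
    FogNode("f2", measured_latency_ms=7.5, available_capacity=1.0),
    FogNode("f3", measured_latency_ms=9.3, available_capacity=3.0),
]
print(select_nearest_fog_node(fog_nodes, required_capacity=2.0).node_id)  # prints "f3"

In this toy run the lowest-latency node ("f2") is skipped because it lacks capacity, so the next-nearest suitable node is chosen; a real FSWA deployment would of course use its own node metrics and selection rule.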
Highlights
Cloud computing is a network-based computing paradigm that allows cloud users to access computing resources anytime, anywhere, and on the go
Workload distribution is often not smooth, because congestion over the core cloud network increases latency among the various participating computing nodes [2]
Fog computing is an ideal platform for services and applications running on the billions of edge devices connected in the Internet of Things (IoT) environment
Summary
Cloud computing is a network-based computing paradigm that allows cloud users to access computing resources anytime, anywhere, and on the go. Workload distribution is often not smooth, because congestion over the core cloud network increases latency among the various participating computing nodes [2]. Fog computing is an ideal platform for services and applications running on the billions of edge devices connected in the Internet of Things (IoT) environment. It offers the same computational functions (computing, storage, and networking) as the cloud, but with greater intent and proximity, since computation takes place nearer to the user's edge devices [3]. Workloads are temporarily offloaded to fog nano data centres to reduce the network congestion and bottlenecks that arise during data communication and transformation.
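The offloading idea mentioned above, temporarily shifting work to fog nano data centres to relieve congestion, can be sketched in the same hedged way. The function choose_offload_target and its deadline-based rule are assumptions for illustration, not the mechanism defined in the paper.

# Illustrative sketch only: a deadline-driven offloading rule; the function
# name and threshold logic are assumptions, not the paper's mechanism.
def choose_offload_target(task_deadline_ms: float,
                          fog_latency_ms: float,
                          cloud_latency_ms: float) -> str:
    # Send the task to the distant cloud whenever it can still meet the
    # deadline (keeping scarce fog capacity free), fall back to the nearby
    # fog tier for tighter deadlines, and reject tasks neither tier can serve.
    if cloud_latency_ms <= task_deadline_ms:
        return "cloud"
    if fog_latency_ms <= task_deadline_ms:
        return "fog"
    return "reject"

print(choose_offload_target(task_deadline_ms=20, fog_latency_ms=8, cloud_latency_ms=90))  # prints "fog"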