Abstract

With the increasing demand for optimal performance in cloud-hosted applications, fog computing offers an additional layer of benefit by identifying the best positions for fog nodes that carry out computational processing of data. However, fog node placement poses several challenges. A review of the existing literature shows that available approaches leave open problems with respect to computational efficiency. Hence, this paper proposes a computational framework for fog node placement and resource allocation that exploits Reinforcement Learning (RL), with an emphasis on bandwidth optimization. The framework is implemented in Python, and the proposed scheme uses Q-Learning for its predictive operation, with Markov modelling used to frame the model. Simulation results show that the proposed scheme achieves better dynamic placement of fog nodes, yielding optimal clustering and improved bandwidth efficiency.
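To illustrate the kind of Q-Learning loop the abstract describes, the sketch below casts fog node placement as a tabular RL problem: states are candidate placement sites, actions select the next site, and the reward is a stand-in for observed bandwidth efficiency. This is a minimal illustration only; the site count, reward values, and hyperparameters are hypothetical, as the abstract does not specify the authors' environment or parameters.

```python
import random

random.seed(0)

N_SITES = 5  # hypothetical number of candidate fog node placement sites
# Hypothetical per-site bandwidth-efficiency reward, unknown to the agent
SITE_REWARD = [0.2, 0.5, 0.9, 0.4, 0.1]

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2  # illustrative hyperparameters
EPISODES = 2000

# Q-table indexed by (current site, next site to place the node at)
Q = [[0.0] * N_SITES for _ in range(N_SITES)]

def choose_action(state):
    """Epsilon-greedy selection over candidate sites."""
    if random.random() < EPSILON:
        return random.randrange(N_SITES)
    row = Q[state]
    return row.index(max(row))

state = 0
for _ in range(EPISODES):
    action = choose_action(state)
    reward = SITE_REWARD[action]   # observed bandwidth efficiency at new site
    next_state = action            # the node is now placed at the chosen site
    # Standard Q-Learning temporal-difference update
    Q[state][action] += ALPHA * (
        reward + GAMMA * max(Q[next_state]) - Q[state][action]
    )
    state = next_state

# Site whose placement action has accumulated the highest learned value
best_site = max(range(N_SITES),
                key=lambda s: max(Q[r][s] for r in range(N_SITES)))
print(best_site)
```

Under this toy reward, the learned policy converges toward the site with the highest bandwidth-efficiency reward; in the paper's framework the reward signal would instead come from the simulated Markov model of the fog environment.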
