Abstract

In the fog computing paradigm, if the computing resources of an end device are insufficient, the user's tasks can be offloaded to nearby devices or the central cloud. In addition, because mobile devices have limited energy, optimal offloading is crucial. The method presented in this paper is based on auction theory, which recent studies have used to optimize computation offloading. We propose a bid prediction mechanism based on Q-learning. Nodes participating in the auction announce a bid value to the auctioneer entity, and the node with the highest bid wins the auction. Only the winning node then has the right to offload its tasks to its upstream (parent) node. The Q-learning agent used here is effectively stateless: it does not maintain a model of the environment and bases each action only on the current state. The evaluation results show that the bid values predicted by the Q-learning method are near-optimal. On average, the proposed method consumes less energy than traditional and state-of-the-art techniques. It also reduces task execution time and consumes fewer network resources.
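As a minimal sketch (not taken from the paper), the Python snippet below illustrates the general idea of stateless Q-learning bid prediction combined with a highest-bid-wins auction. The bid levels, learning rate, exploration rate, reward function, and the BidderNode/run_auction helpers are all illustrative assumptions rather than the authors' actual design.

```python
import random

# Hypothetical parameters (assumptions, not from the paper).
BID_LEVELS = [1, 2, 3, 4, 5]   # candidate bid values a node may announce
ALPHA = 0.1                     # learning rate
EPSILON = 0.2                   # exploration probability


class BidderNode:
    """Fog node that learns which bid to announce via stateless Q-learning."""

    def __init__(self, name):
        self.name = name
        # One Q-value per bid: stateless (single-state) Q-learning.
        self.q = {b: 0.0 for b in BID_LEVELS}

    def choose_bid(self):
        # Epsilon-greedy: occasionally explore a random bid, otherwise exploit.
        if random.random() < EPSILON:
            return random.choice(BID_LEVELS)
        return max(self.q, key=self.q.get)

    def update(self, bid, reward):
        # Stateless update: Q(b) <- Q(b) + alpha * (reward - Q(b)).
        self.q[bid] += ALPHA * (reward - self.q[bid])


def run_auction(nodes):
    """Auctioneer collects bids; the highest bidder wins the right to offload
    its tasks to its upstream (parent) node."""
    bids = {n: n.choose_bid() for n in nodes}
    winner = max(bids, key=bids.get)
    for node, bid in bids.items():
        # Hypothetical reward model: the winner earns a fixed offloading
        # benefit minus its bid; losing nodes earn nothing.
        reward = (10.0 - bid) if node is winner else 0.0
        node.update(bid, reward)
    return winner, bids[winner]


if __name__ == "__main__":
    nodes = [BidderNode(f"node-{i}") for i in range(3)]
    for _ in range(500):
        run_auction(nodes)
    # After training, each node's Q-values indicate its learned bidding policy.
    print({n.name: n.q for n in nodes})
```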
