Abstract

Ambient energy harvesting for Wireless Sensor Networks (WSNs) is regarded as a promising solution for long-lasting deployments across a variety of WSN applications. However, sensor nodes often lack sufficient energy to handle application, network, and housekeeping tasks, because the amount of energy harvested varies widely both spatially and temporally. Moreover, the ambient source cannot be assumed to be continuously available. When harvested energy is in surplus, it is desirable that nodes take on higher workloads; when energy is scarce, they should switch to highly energy-efficient schemes. Hence, harvesting-aware task scheduling is required. The two most important challenges for harvesting-aware scheduling are (a) determining the amount of energy to expend in a time slot, and (b) maximally utilizing this energy for task execution. To increase energy utilization for task execution, we decompose application-level tasks into subtasks, some of which can be executed concurrently. In this article, we propose a dynamic optimization model based on a Markov Decision Process (MDP) that takes into account task priorities and deadlines, as well as stored and harvested energy, to derive an optimal scheduling policy. Since solving the MDP is computationally intractable in real time, we also propose a greedy scheduling policy and compare its performance with that of the optimal policy.
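To make the greedy alternative concrete, the sketch below shows one way such a slot-by-slot policy might look. It is a minimal illustration, not the article's algorithm: the Task fields, the reserve slack fraction, and the ordering by priority and then earliest deadline are all assumptions introduced here for exposition.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    priority: int       # higher = more important (illustrative convention)
    deadline: int       # slot index by which the task must finish
    energy_cost: float  # energy needed to run the task in one slot (J)

def greedy_schedule(tasks, stored_energy, harvested_energy, slot, reserve=0.1):
    """Greedily pick tasks for the current slot under an energy budget.

    A hedged sketch of a harvesting-aware greedy policy: the budget is
    the energy on hand minus a small reserve (the `reserve` fraction is
    a hypothetical parameter), and tasks are admitted in order of
    descending priority, breaking ties by earliest deadline.
    """
    budget = max(0.0, stored_energy + harvested_energy) * (1.0 - reserve)
    ordered = sorted(tasks, key=lambda t: (-t.priority, t.deadline))
    scheduled = []
    for t in ordered:
        # Skip tasks whose deadline has passed or that exceed the budget.
        if t.deadline >= slot and t.energy_cost <= budget:
            scheduled.append(t)
            budget -= t.energy_cost
    return scheduled

tasks = [
    Task("sample_sensor", priority=3, deadline=5, energy_cost=0.2),
    Task("route_packet",  priority=2, deadline=4, energy_cost=0.5),
    Task("housekeeping",  priority=1, deadline=9, energy_cost=0.3),
]
# Prints ['sample_sensor', 'route_packet']: housekeeping is deferred
# because the remaining budget cannot cover its cost.
print([t.name for t in greedy_schedule(tasks, stored_energy=0.6,
                                       harvested_energy=0.3, slot=3)])
```

In this toy setting the policy defers the low-priority housekeeping task to a later slot rather than draining the reserve, which mirrors the abstract's idea of matching workload to the available harvested and stored energy.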
