Abstract

We consider a wireless network, consisting of several sensors and a fusion center (FC), that is tasked with solving a binary distributed detection problem. Each sensor is capable of harvesting randomly arriving energy and storing it in a finite-size battery. Modeling the channel fading process as a time-homogeneous finite-state Markov chain and assuming that each sensor knows its current battery state and its quantized channel state information (CSI), obtained via limited feedback from the FC, our goal is to find the optimal transmit power control policy that maximizes the detection performance metric of interest. We formulate the problem at hand as a finite-horizon Markov decision process (MDP) and obtain the optimal policy via finite-horizon dynamic programming. Our simulations demonstrate that the proposed policy outperforms the greedy policy, in which each sensor uses all its available energy for transmission.
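To illustrate the solution approach described above, the following is a minimal sketch of finite-horizon dynamic programming (backward induction) for a single sensor whose state is its battery level and quantized channel state. All numerical values here (horizon, battery size, channel transition matrix, harvesting probability, and the stand-in reward function) are hypothetical toy parameters, not values from the paper; the actual detection performance metric and state dynamics would come from the system model.

```python
import numpy as np

# Hypothetical toy parameters (not from the paper): 2 quantized channel
# states, a small finite battery, a short horizon, Bernoulli energy arrivals.
T = 5                       # horizon length (time slots)
B = 4                       # battery capacity (energy quanta)
channels = [0, 1]           # quantized channel states (e.g., bad, good)
P = np.array([[0.8, 0.2],   # time-homogeneous channel Markov transition matrix
              [0.3, 0.7]])
p_harvest = 0.5             # prob. of harvesting one energy quantum per slot
gain = [0.5, 1.5]           # illustrative per-channel reward weights

def reward(power, h):
    # Stand-in for the per-slot detection utility: concave in transmit
    # power and scaled by the channel state (an assumption for illustration).
    return gain[h] * np.log1p(power)

# V[t, b, h]: optimal expected reward-to-go from slot t in state (b, h)
V = np.zeros((T + 1, B + 1, len(channels)))
policy = np.zeros((T, B + 1, len(channels)), dtype=int)

for t in range(T - 1, -1, -1):          # backward induction over the horizon
    for b in range(B + 1):
        for h in channels:
            best_val, best_p = -np.inf, 0
            for p in range(b + 1):      # transmit energy cannot exceed battery
                # Expectation over channel transition and energy arrival
                nxt = 0.0
                for h2, ph in enumerate(P[h]):
                    for e, pe in ((0, 1 - p_harvest), (1, p_harvest)):
                        b_next = min(b - p + e, B)  # finite battery clips harvest
                        nxt += ph * pe * V[t + 1, b_next, h2]
                val = reward(p, h) + nxt
                if val > best_val:
                    best_val, best_p = val, p
            V[t, b, h] = best_val
            policy[t, b, h] = best_p
```

By construction, the resulting policy weakly dominates the greedy policy (spend the entire battery each slot), which is the comparison made in the simulations: the DP policy can withhold energy in a bad channel state to spend it when the channel improves.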
