Abstract
We consider a wireless network, consisting of several sensors and a fusion center (FC), that is tasked with solving a binary distributed detection problem. Each sensor is capable of harvesting randomly arriving energy and storing it in a finite-size battery. Modeling the channel fading process as a time-homogeneous finite-state Markov chain and assuming that each sensor knows its current battery state and its quantized channel state information (CSI), obtained via limited feedback from the FC, our goal is to find the optimal transmit power control policy such that the detection performance metric of interest is maximized. We formulate the problem at hand as a finite-horizon Markov decision process (MDP) problem and obtain the optimal policy via finite-horizon dynamic programming. Our simulations demonstrate that the proposed policy outperforms a greedy policy, in which each sensor uses all of its available energy for transmission.
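For illustration, the sketch below shows generic finite-horizon backward-induction dynamic programming over a (battery level, quantized channel state) state space with transmit power as the action. All quantities in it (the channel transition matrix, the harvesting distribution, and the per-slot reward `r()`) are hypothetical placeholders; it does not reproduce the paper's formulation or detection metric, only the general solution technique named in the abstract.

```python
# Minimal, generic sketch of finite-horizon backward-induction dynamic
# programming for a sensor's transmit-power policy.  State = (battery
# level, quantized channel state); action = energy quanta to spend.
# All model parameters below are hypothetical, not the paper's model.
import numpy as np

T = 10                       # horizon length (number of time slots)
B = 5                        # battery levels 0..B-1 (energy quanta)
C = 4                        # quantized channel states 0..C-1
actions = np.arange(B)       # candidate transmit powers in energy quanta

# Hypothetical Markov dynamics: channel transition matrix and
# energy-harvesting arrival distribution (quanta per slot).
rng = np.random.default_rng(0)
P_ch = rng.dirichlet(np.ones(C), size=C)   # P_ch[c, c'] = Pr{c -> c'}
p_harvest = np.array([0.5, 0.3, 0.2])      # Pr{harvest 0, 1, 2 quanta}

def r(power, ch):
    """Hypothetical per-slot reward: a concave proxy for the
    contribution of this transmission to detection performance."""
    return np.log1p(power * (ch + 1))

V = np.zeros((T + 1, B, C))          # value function; V[T] = 0 (terminal)
policy = np.zeros((T, B, C), dtype=int)

for t in range(T - 1, -1, -1):       # backward induction over time
    for b in range(B):
        for c in range(C):
            best_val, best_a = -np.inf, 0
            for a in actions[actions <= b]:   # cannot spend more than stored
                exp_future = 0.0
                for e, pe in enumerate(p_harvest):     # energy arrivals
                    b_next = min(b - a + e, B - 1)     # finite battery clips
                    exp_future += pe * (P_ch[c] @ V[t + 1, b_next])
                val = r(a, c) + exp_future
                if val > best_val:
                    best_val, best_a = val, a
            V[t, b, c] = best_val
            policy[t, b, c] = best_a

# Power to transmit at t = 0 for each (battery, channel) state.
print(policy[0])
```

The greedy baseline mentioned in the abstract would correspond to always choosing `a = b` (spend the full battery each slot) instead of maximizing over actions.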