Abstract

We investigate sensor scheduling for remote state estimation when multiple smart sensors monitor multiple stochastic dynamical systems. The sensors transmit their measurements to a remote estimator over a noisy wireless communication channel, and the estimator can receive packets from several sensors simultaneously. Sensors transmit their measurements when their Signal-to-Interference-plus-Noise Ratio (SINR) is above a threshold. We compute the optimal scheduling policy by minimizing the expected estimation error covariance subject to a constraint on the total number of transmissions from all sensors. We formulate this problem as a Markov Decision Process (MDP) with a discounted per-stage cost over a finite time horizon and solve it with stochastic dynamic programming (DP). To approximate the solution, we propose a novel algorithm based on sampling and machine learning techniques: at each stage of the DP recursion, state samples are drawn from a uniform probability distribution, and the resulting data are used to train Neural Network (NN) and Random Forest (RF) models that approximate the cost-to-go function and the policy. Simulation examples comparing RF and NN as approximate DP (ADP) methods support the proposed framework, which builds a bridge between recent advances in data science and machine learning and ADP.
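To make the sampling-based ADP idea concrete, the following is a minimal sketch of fitted backward DP on a toy one-dimensional scheduling problem. All names, dynamics, and parameters here (the covariance recursion, the packet-success probability, the transmission penalty `lam`, and the polynomial regressor standing in for the paper's NN/RF models) are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 5             # finite horizon
gamma = 0.9       # discount factor per stage
lam = 0.5         # weight on transmission cost (assumed)
actions = [0, 1]  # 0: stay idle, 1: transmit

def step_cost(p, a):
    # stage cost: error covariance plus transmission penalty
    return p + lam * a

def next_state(p, a):
    # toy covariance recursion: a transmission resets the covariance
    # if the packet gets through (SINR above threshold, prob. 0.8 here);
    # otherwise, and when idle, the open-loop covariance grows
    if a == 1 and rng.random() < 0.8:
        return 1.0
    return 1.5 * p + 1.0

def fit(X, y, deg=3):
    # least-squares polynomial regression as a simple stand-in for
    # the NN/RF function approximators used in the paper
    coeffs = np.polyfit(X, y, deg)
    return lambda x: np.polyval(coeffs, x)

V = lambda p: 0.0  # terminal cost-to-go
for t in reversed(range(T)):
    # sample states uniformly, as in the proposed algorithm
    P = rng.uniform(0.5, 10.0, size=200)
    targets = []
    for p in P:
        # Bellman backup: per-action cost plus discounted expected
        # cost-to-go, estimated by Monte Carlo over the channel outcome
        q = [step_cost(p, a)
             + gamma * np.mean([V(next_state(p, a)) for _ in range(20)])
             for a in actions]
        targets.append(min(q))
    V = fit(P, np.array(targets))  # regress cost-to-go for stage t

print(float(V(2.0)))  # approximate cost-to-go from covariance 2.0
```

At each stage the fitted regressor replaces the exact cost-to-go table, so the backward recursion never enumerates the state space; swapping `fit` for an NN or RF trainer recovers the structure of the proposed algorithm.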
