Abstract

This study presents a heterogeneous sensor placement optimization framework using deep reinforcement learning (DRL) that accounts for system parameter uncertainty. The sensor placement problem is a well-established combinatorial optimization problem characterized by inherent parameter uncertainties that affect the system responses that sensors measure. These uncertainties render deterministic solutions insufficient and necessitate a computationally tractable approach to account for them. The proposed method incorporates a Markov decision process (MDP) as a stochastic environment and a sensor placement agent trained using DRL. The agent's objective is to maximize the effectiveness of sensor placement within a system by selecting sensor types and locations. The agent's sequential decision-making is guided by a reward function based on the observability Gramian, calculated using parameter values sampled from an a priori distribution. The proposed approach is validated through simulation of a case study involving heterogeneous sensors in a shear building model, with results compared to those from an evolutionary algorithm (CMA-ES). The results show that the sensor configurations selected by the DRL method match those found by the CMA-ES algorithm, with the added advantage of providing information about the relative importance of the selected sensors.
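To make the reward design concrete, below is a minimal illustrative sketch (not the paper's implementation) of an observability-Gramian-based reward evaluated under sampled system parameters. The function names (`observability_gramian`, `expected_gramian_reward`, `sample_system`) and the simple two-state linear system are hypothetical, introduced only to show the idea: a candidate sensor layout is encoded in the output matrix `C`, and its reward is a Monte Carlo estimate of the expected log-determinant of the finite-horizon observability Gramian over draws from the parameter distribution.

```python
import numpy as np

def observability_gramian(A, C, horizon=50):
    """Finite-horizon discrete-time observability Gramian:
    W_o = sum_{k=0}^{horizon-1} (A^T)^k C^T C A^k."""
    n = A.shape[0]
    W = np.zeros((n, n))
    Ak = np.eye(n)  # A^k, built up iteratively
    for _ in range(horizon):
        W += Ak.T @ C.T @ C @ Ak
        Ak = A @ Ak
    return W

def expected_gramian_reward(sample_system, C, n_samples=100, rng=None):
    """Monte Carlo estimate of E[log det W_o] for sensor layout C,
    averaging over system matrices drawn from an a priori distribution."""
    rng = np.random.default_rng(rng)
    total = 0.0
    for _ in range(n_samples):
        A = sample_system(rng)  # one draw of the uncertain dynamics
        W = observability_gramian(A, C)
        # Small jitter keeps log det finite for rank-deficient layouts.
        _, logdet = np.linalg.slogdet(W + 1e-9 * np.eye(W.shape[0]))
        total += logdet
    return total / n_samples

# Hypothetical uncertain two-state system (e.g. one stiffness-like
# parameter varying around a nominal value); stable for all draws.
def sample_system(rng):
    return np.array([[0.85 + 0.1 * rng.uniform(), 0.1],
                     [0.0, 0.8]])

# Compare a full sensor layout against a single-sensor layout.
reward_full = expected_gramian_reward(sample_system, np.eye(2), n_samples=20, rng=0)
reward_one = expected_gramian_reward(sample_system, np.array([[1.0, 0.0]]),
                                     n_samples=20, rng=0)
```

Since adding rows to `C` adds positive-semidefinite terms to the Gramian, the fuller layout never scores worse; the DRL agent in the paper trades this kind of observability gain against the cost of each sensor type when choosing placements sequentially.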
