Abstract
Optimal sensor placement is critical for the effective monitoring of dynamical systems. Deterministic solutions do not reflect the effects of input and parameter uncertainty on sensor placement. Using a Markov decision process (MDP) and a sensor placement agent, this study proposes a stochastic approach to maximize the gain from placing a fixed number of sensors within the system. Utilizing Deep Reinforcement Learning (DRL), the agent is trained by collecting interactive samples from the environment, which uses an information-theoretic reward function: a measure, based on Shannon entropy, of the identifiability of the model parameters. The goal of the agent is to maximize its expected future reward by selecting, at each step, the action (placing a sensor) that provides the most information. This framework is validated using a synthetic model of a base-isolated structure. To account for the existing uncertainty in the parameters, a prior probability distribution is chosen (e.g., based on expert judgement or a preliminary study) for each model parameter. Further, a probabilistic model of the input is used to reflect input variability. In a Deep Q-Network (DQN), a type of DRL algorithm, the agent learns a mapping from states (i.e., sensor configurations) to the "quality" of each action in that state, called "Q-values". This network is trained on samples of states, actions, and rewards gathered by interacting with the environment. The modularity of the framework and the function approximation used in this study make it scalable to complex real-world sensor placement problems in the presence of uncertainties.
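To make the abstract's formulation concrete, the following is a minimal, illustrative sketch (not the authors' implementation) of a DQN agent for sequential sensor placement, written in Python with PyTorch. The state is a binary occupancy vector over candidate locations, each action places one sensor, and an episode ends once the fixed budget is spent. The network sizes, location count, sensor budget, and the stubbed-out reward are all assumptions made for illustration; in the paper, the reward would be the Shannon-entropy-based identifiability measure computed under the parameter priors and the probabilistic input model.

import random
import torch
import torch.nn as nn

N_LOCATIONS = 10   # hypothetical number of candidate sensor locations
N_SENSORS = 3      # fixed sensor budget, as described in the abstract

class QNetwork(nn.Module):
    """Maps a sensor configuration (binary occupancy vector) to one
    Q-value per candidate placement action."""
    def __init__(self, n_locations):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_locations, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_locations),
        )

    def forward(self, state):
        return self.net(state)

def information_gain(state, action):
    # Placeholder for the entropy-based identifiability reward; the paper's
    # version would involve the prior over model parameters and the
    # probabilistic input model. Random here purely for illustration.
    return random.random()

def select_action(q_net, state, epsilon):
    """Epsilon-greedy over locations that do not yet hold a sensor."""
    free = [i for i in range(N_LOCATIONS) if state[i] == 0]
    if random.random() < epsilon:
        return random.choice(free)
    with torch.no_grad():
        q_values = q_net(state)
    # Mask occupied locations so the agent never places a duplicate sensor.
    q_values[state == 1] = -float("inf")
    return int(q_values.argmax())

q_net = QNetwork(N_LOCATIONS)
optimizer = torch.optim.Adam(q_net.parameters(), lr=1e-3)
gamma = 0.99  # assumed discount factor

for episode in range(500):
    state = torch.zeros(N_LOCATIONS)
    for step in range(N_SENSORS):
        action = select_action(q_net, state, epsilon=0.1)
        reward = information_gain(state, action)
        next_state = state.clone()
        next_state[action] = 1.0
        done = step == N_SENSORS - 1
        # One-step temporal-difference target (no replay buffer or target
        # network here, to keep the sketch short).
        with torch.no_grad():
            next_q = q_net(next_state)
            next_q = next_q.masked_fill(next_state == 1, -float("inf"))
            target = reward + (0.0 if done else gamma * next_q.max().item())
        pred = q_net(state)[action]
        loss = (pred - target) ** 2
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        state = next_state

A full DQN, as referenced in the abstract, would typically add an experience replay buffer and a periodically updated target network for stability; those components are omitted above only for brevity.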