Abstract

Privacy has become one of the most critical concerns in cyber-physical systems (CPSs), as CPSs are vulnerable to information leakage. In particular, a passive intruder can infer secret information about the system through observations, and the system may be critically compromised or damaged once the intruder gains high confidence in certain secret states. In this paper, we investigate the planning problem of a stochastic system in the presence of a passive eavesdropping intruder. The planner is modeled as a Markov decision process (MDP) that has access to the state information and controls the system transitions. The intruder, who has only a partial observation of the system state, is modeled by a hidden Markov model. The goal of the intruder is to infer the system's secret, namely whether the system is currently in some sensitive state, and the goal of the defender is to maximize the reward while preventing the intruder from inferring the secret. Distinct from existing work that embeds privacy as part of the reward or utility function, we quantify privacy as a constraint on the planning. The problem is formulated as a constrained partially observable MDP (POMDP) planning problem, and a belief-state partition approach is proposed to solve the privacy-preserving planning problem via value iteration. Our key observation is that the defender can prevent the intruder from inferring sensitive information through belief manipulation. However, introducing the privacy constraint may sacrifice system performance or even render the problem infeasible. A necessary and sufficient condition is given for checking the feasibility of the planning problem, and examples are provided to illustrate the proposed algorithm.
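To make the intruder's inference and the belief-based privacy constraint concrete, the following minimal sketch (not taken from the paper; the transition matrix `T`, observation model `O`, secret-state set, and threshold `epsilon` are all illustrative assumptions) shows how a passive observer's belief over the system state could be updated with a standard HMM filter, and how a constraint of the form "the intruder's belief mass on secret states stays below a threshold" could be checked at each step.

```python
import numpy as np

# Hypothetical problem data (illustrative only): a 4-state system where
# state 3 is the sensitive state, a state-transition kernel under the chosen
# action, and the intruder's observation likelihoods P(obs | state).
T = np.array([[0.7, 0.2, 0.1, 0.0],
              [0.1, 0.6, 0.2, 0.1],
              [0.0, 0.2, 0.6, 0.2],
              [0.1, 0.1, 0.2, 0.6]])   # T[s, s'] = P(s' | s, a)
O = np.array([[0.8, 0.2],
              [0.6, 0.4],
              [0.4, 0.6],
              [0.2, 0.8]])             # O[s, o] = P(o | s)
secret_states = [3]                    # indices of sensitive states (assumed)
epsilon = 0.3                          # assumed privacy threshold on intruder belief


def belief_update(b, obs):
    """One step of the intruder's HMM belief filter: predict with T, correct with O."""
    predicted = b @ T                  # prior over next states
    posterior = predicted * O[:, obs]  # weight by observation likelihood
    return posterior / posterior.sum()  # normalize to a probability vector


def privacy_satisfied(b):
    """Belief-based privacy constraint: belief mass on secret states below epsilon."""
    return b[secret_states].sum() < epsilon


# Usage: start from a uniform intruder belief and process an observation sequence.
belief = np.full(4, 0.25)
for obs in [0, 1, 1]:
    belief = belief_update(belief, obs)
    print(belief, "privacy OK" if privacy_satisfied(belief) else "privacy violated")
```

In this framing, the defender's belief manipulation amounts to choosing actions whose induced observations keep every reachable intruder belief inside the region where `privacy_satisfied` holds, while value iteration over the (partitioned) belief space optimizes the accumulated reward subject to that constraint.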
