Abstract

Motivated by security and privacy considerations in applications of discrete event systems, various notions of opacity have been introduced. Specifically, a system is said to be current-state opaque if the entrance of the system state into a set of secret states remains opaque (uncertain) to an intruder, at least until the system leaves the set of secret states. This notion, which has been studied in non-deterministic finite automaton settings where the intruder observes only a subset of the events, has been shown to be useful in characterizing security requirements in many applications (including encryption using pseudo-random generators and trajectory coverage of a mobile agent in sensor networks). One limitation of these existing approaches is that they fail to provide a quantifiable measure of the degree of opacity of a given system. In this paper, we partially address this limitation by extending the framework to systems that can be modeled as probabilistic finite automata, characterizing in the process the probability of observing a violation of current-state opacity. We introduce the notion of step-based almost current-state opacity, which provides such a measure of opacity for a given system. We also propose a verification method for this probabilistic notion of opacity and characterize its computational complexity.
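As a minimal sketch of the qualitative notion the abstract builds on, the Python fragment below checks current-state opacity of a non-deterministic finite automaton using the standard observer (subset) construction, assuming the intruder sees only the observable events. The toy automaton, the transition table delta, and the function names are illustrative assumptions, not the paper's construction.

from itertools import chain

def unobservable_reach(states, delta, unobservable):
    # Close a state estimate under transitions the intruder cannot observe.
    stack, reach = list(states), set(states)
    while stack:
        q = stack.pop()
        for e in unobservable:
            for q2 in delta.get((q, e), ()):
                if q2 not in reach:
                    reach.add(q2)
                    stack.append(q2)
    return frozenset(reach)

def is_current_state_opaque(initial, delta, observable, unobservable, secret):
    # Explore the observer: each reachable set is the intruder's estimate of
    # the current state after some string of observations. Opacity fails iff
    # some nonempty estimate lies entirely inside the secret set.
    start = unobservable_reach(initial, delta, unobservable)
    frontier, seen = [start], {start}
    while frontier:
        estimate = frontier.pop()
        if estimate and estimate <= secret:
            return False  # the intruder is certain the state is secret
        for e in observable:
            successors = set(chain.from_iterable(
                delta.get((q, e), ()) for q in estimate))
            if successors:
                nxt = unobservable_reach(successors, delta, unobservable)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return True

# Toy automaton: states 0-2, secret set {2}, event 'u' unobservable.
delta = {(0, 'a'): {1, 2}, (1, 'b'): {0}, (2, 'b'): {2}, (0, 'u'): {1}}
print(is_current_state_opaque({0}, delta, {'a', 'b'}, {'u'}, {2}))  # True

Since the observer can have up to 2^n states for an n-state automaton, this qualitative check is exponential in the worst case. Roughly speaking, the probabilistic notion studied in the paper instead bounds, step by step, the probability of reaching such a fully secret estimate.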
