Abstract

A system is said to be current-state opaque if the entrance of the system state into a set of secret states remains opaque (uncertain) to an intruder, at least until the system leaves the set of secret states. This notion of opacity has been studied in nondeterministic finite automata settings (where the intruder observes a subset of events, for example, via some natural projection mapping) and has been shown to be useful in characterizing security requirements in many applications (including encryption using pseudorandom generators and coverage properties in sensor networks). One limitation of the majority of existing analyses is that they fail to provide a quantifiable measure of opacity for a given system; instead, they provide only a binary characterization of the system (opaque or not opaque). In this paper, we address this limitation by extending current-state opacity formulations to systems that can be modeled as probabilistic finite automata under partial observation. We introduce three notions of opacity, namely: 1) step-based almost current-state opacity; 2) almost current-state opacity; and 3) probabilistic current-state opacity, all of which can be used to provide a measure of a given system's opacity. We also propose verification methods for these probabilistic notions of opacity and characterize their corresponding computational complexities.
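To make the underlying (non-probabilistic) notion concrete, below is a minimal sketch of the standard observer-based check for current-state opacity of a nondeterministic finite automaton under a natural projection: the system is opaque iff no reachable intruder state estimate is contained entirely in the secret set. This is an illustration of the classical setting the abstract builds on, not the paper's probabilistic verification methods; all names, the transition encoding, and the example automaton are hypothetical.

```python
from collections import deque

def current_state_opaque(init_states, delta, observable, secret):
    """Check current-state opacity of an NFA under natural projection.

    delta: dict mapping (state, event) -> set of successor states.
    observable: events the intruder sees; all other events are
    silently erased by the projection.
    Opaque iff no reachable state estimate is a subset of `secret`.
    """
    def uo_closure(estimate):
        # Extend an estimate with states reachable via unobservable events.
        stack, closure = list(estimate), set(estimate)
        while stack:
            q = stack.pop()
            for (p, e), succs in delta.items():
                if p == q and e not in observable:
                    for s in succs - closure:
                        closure.add(s)
                        stack.append(s)
        return frozenset(closure)

    start = uo_closure(init_states)
    queue, seen = deque([start]), {start}
    while queue:
        est = queue.popleft()
        if est <= secret:  # intruder is certain the state is secret
            return False
        for e in observable:
            nxt = set().union(*(delta.get((q, e), set()) for q in est))
            nxt = uo_closure(nxt)
            if nxt and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return True

# Hypothetical two-state example: unobservable event 'u' may move the
# system into the secret state 1, but the intruder (seeing only 'a')
# can never be certain, so the system is current-state opaque.
delta = {(0, 'u'): {1}, (0, 'a'): {0}, (1, 'a'): {1}}
print(current_state_opaque({0}, delta, observable={'a'}, secret={1}))  # True
```

The observer construction above is exponential in the number of system states, which is consistent with opacity verification being a computationally demanding problem; the paper's probabilistic notions additionally attach likelihoods to the intruder's estimates rather than the binary verdict returned here.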
