Abstract

The Markov Decision Process (MDP) is a well-known mathematical formalism for studying systems with unknown scheduling mechanisms or with transitions whose next-state probability distribution is not known with precision. Analysis methods for MDPs are generally based on identifying the strategies that maximize (or minimize) a target function defined on the MDP's rewards (or costs). Alternatively, formal languages can be defined to express the quantitative properties an MDP must ensure, including languages that extend classical temporal logics with probabilistic operators.

The MDP formalism is low level: higher-level languages have been proposed to facilitate the representation of complex real-life distributed systems. In this chapter we consider Markov Decision Well-formed Nets (MDWN), probabilistic extensions of Petri nets that allow one to describe complex nondeterministic (probabilistic) behavior as a composition of simpler nondeterministic (probabilistic) steps, and that inherit the efficient analysis algorithms originally devised for well-formed Petri nets. The features of the formalism and the type of properties that can be studied are illustrated with an example of a peer-to-peer illegal botnet.
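
The identification of reward-maximizing strategies mentioned above is commonly carried out with dynamic-programming methods such as value iteration; the chapter does not prescribe a particular algorithm, so the sketch below is only a minimal illustration on a toy MDP whose states, actions, transition probabilities, rewards and discount factor are all invented for the example.

```python
# Minimal value-iteration sketch for a toy MDP (illustrative only; the
# states, actions, transitions and rewards are assumptions, not taken
# from the chapter). Each entry maps (state, action) to a list of
# (probability, next_state, reward) triples.
mdp = {
    ("idle", "wait"):   [(1.0, "idle", 0.0)],
    ("idle", "start"):  [(0.8, "busy", 2.0), (0.2, "idle", 0.0)],
    ("busy", "finish"): [(0.6, "idle", 5.0), (0.4, "busy", 1.0)],
    ("busy", "abort"):  [(1.0, "idle", -1.0)],
}
states = {"idle", "busy"}
gamma = 0.9  # discount factor (assumed)


def actions(state):
    return [a for (s, a) in mdp if s == state]


def value_iteration(eps=1e-6):
    # Iterate the Bellman optimality operator until the value
    # function changes by less than eps in every state.
    V = {s: 0.0 for s in states}
    while True:
        delta = 0.0
        for s in states:
            best = max(
                sum(p * (r + gamma * V[s2]) for p, s2, r in mdp[(s, a)])
                for a in actions(s)
            )
            delta = max(delta, abs(best - V[s]))
            V[s] = best
        if delta < eps:
            break
    # Extract a memoryless strategy that attains the fixed point.
    policy = {
        s: max(
            actions(s),
            key=lambda a: sum(p * (r + gamma * V[s2]) for p, s2, r in mdp[(s, a)]),
        )
        for s in states
    }
    return V, policy


if __name__ == "__main__":
    values, strategy = value_iteration()
    print("optimal expected discounted rewards:", values)
    print("maximizing strategy:", strategy)
```

The same fixed-point structure underlies the analysis of higher-level models such as MDWN, whose semantics is ultimately an MDP over the reachable markings.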
