Articles published on Statistical model checking
185 Search results
- Research Article
- 10.3390/computers14120507
- Nov 23, 2025
- Computers
- Libero Nigro
Lamport’s Bakery algorithm (LBA) represents a general and elegant solution to the mutual exclusion (ME) problem posed by Dijkstra in 1965. Its correctness is usually argued by intuitive reasoning. LBA rests on an unbounded number of tickets, which prevents correctness assessment by model checking. Several variants have been proposed in the literature to bound the number of tickets used. This paper is based on a formal method centered on Uppaal for reasoning about general shared-memory ME algorithms. A model can (hopefully) be verified by the exhaustive model checker (MC), and/or by the statistical model checker (SMC) through stochastic simulations. To overcome the scalability problems of SMC, a model can be reduced to actors and simulated in Java. The paper formalizes LBA and demonstrates, through simulations, that it is correct with both atomic and non-atomic memory registers. Then, some representative variants with bounded tickets are studied; these prove correct only with atomic registers, or are confirmed correct under both atomic and non-atomic registers.
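Many of the abstracts in this list lean on the same core SMC loop: sample independent runs of a stochastic model and estimate the probability that a property holds, with the sample count fixed in advance by a Chernoff-Hoeffding bound. A minimal sketch in Python (the coin-flip "system" and all names are illustrative assumptions, not code from Uppaal or any listed tool):

```python
import math
import random

def required_samples(epsilon: float, delta: float) -> int:
    # Chernoff-Hoeffding bound: n >= ln(2/delta) / (2 * epsilon^2) samples
    # suffice to estimate a probability within +/-epsilon at confidence 1-delta.
    return math.ceil(math.log(2 / delta) / (2 * epsilon ** 2))

def smc_estimate(simulate_run, holds, epsilon=0.05, delta=0.01, rng=None):
    # Draw independent runs and return the fraction satisfying the property.
    rng = rng or random.Random(0)
    n = required_samples(epsilon, delta)
    successes = sum(1 for _ in range(n) if holds(simulate_run(rng)))
    return successes / n

# Toy "system": a biased coin; the checked property is simply "heads".
estimate = smc_estimate(lambda rng: rng.random() < 0.7, lambda outcome: outcome)
```

With epsilon=0.05 and delta=0.01 the bound asks for 1060 runs, so the estimate lies within 0.05 of the true probability 0.7 with at least 99% confidence.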
- Research Article
- 10.4204/eptcs.418.2
- May 4, 2025
- Electronic Proceedings in Theoretical Computer Science
- Raghavendran Gunasekaran + 1 more
Verification of Digital Twins using Classical and Statistical Model Checking
- Research Article
- 10.1007/s10009-025-00791-4
- Apr 22, 2025
- International Journal on Software Tools for Technology Transfer
- Reza Soltani + 4 more
Systematic spare management is important to optimize the twin goals of high reliability and low costs. However, existing approaches to spare management do not incorporate a detailed analysis of the effect of the absence of spares on the system’s reliability. In this work, we combine fault tree analysis with statistical model checking to model spare part management as a stochastic priced timed game automaton (SPTGA). We use Uppaal Stratego to find the number of spares that minimizes the total costs due to downtime and spare purchasing. The resulting SPTGA model can then additionally be analyzed according to a wide range of other metrics, including expected availability. We apply these techniques to the emergency shutdown system of a research nuclear reactor. In this case study, the failure probability is low, so we adjust the settings of Uppaal Stratego to obtain reliable results about rare events. We consider both a single subsystem and the combination of two subsystems. In both situations, our methods find the optimal number of spares, minimizing cost while ensuring an expected availability of 99.96% and 99.93%, respectively.
- Research Article
- 10.46298/theoretics.25.10
- Apr 1, 2025
- TheoretiCS
- Tomáš Brázdil + 8 more
We present a general framework for applying learning algorithms and heuristic guidance to the verification of Markov decision processes (MDPs). The primary goal of our techniques is to improve performance by avoiding an exhaustive exploration of the state space, instead focusing on particularly relevant areas of the system, guided by heuristics. Our work builds on the previous results of Brázdil et al., significantly extending them as well as refining several details and fixing errors. The presented framework focuses on probabilistic reachability, which is a core problem in verification, and is instantiated in two distinct scenarios. The first assumes that full knowledge of the MDP is available, in particular precise transition probabilities. It performs a heuristic-driven partial exploration of the model, yielding precise lower and upper bounds on the required probability. The second tackles the case where we may only sample the MDP without knowing the exact transition dynamics. Here, we obtain probabilistic guarantees, again in terms of both the lower and upper bounds, which provides efficient stopping criteria for the approximation. In particular, the latter is an extension of statistical model checking (SMC) for unbounded properties in MDPs. In contrast to other related approaches, we do not restrict our attention to time-bounded (finite-horizon) or discounted properties, nor assume any particular structural properties of the MDP.
- Research Article
- 10.1111/sapm.70014
- Feb 1, 2025
- Studies in Applied Mathematics
- Lu Li + 1 more
In this paper, we study the asymptotic behavior of the global solutions to a degenerate forest kinematic model, under the action of a perturbation modeling the impact of climate change. In the case where the main nonlinear term of the model is monotone, we prove that the global solutions converge to a stationary solution, by showing that the Lyapunov function derived from the system satisfies a Łojasiewicz–Simon gradient inequality. We also present an original algorithm, based on the Statistical Model Checking framework, to estimate the probability of convergence toward nonconstant equilibria. Furthermore, under suitable assumptions on the parameters, we prove the continuity of the flow and of the stationary solutions with respect to the perturbation parameter. Then, we succeed in proving the robustness of the weak attractors, by considering a weak topology phase space and establishing the existence of a family of positively invariant regions. At last, we present numerical simulations of the model and explore the behavior of the solutions under the effect of several types of perturbations. We also show that the forest kinematic model can lead to the emergence of chaotic patterns.
- Research Article
- 10.3390/sym17010132
- Jan 17, 2025
- Symmetry
- Yi Zhu + 4 more
As an emerging mode of transportation, autonomous vehicles are increasingly attracting widespread attention. To address the limitations of traditional reinforcement learning algorithms, which consider only discrete actions and cannot ensure the safety of decision-making, this paper proposes a behavior decision-making method based on the deep deterministic policy gradient. Firstly, to enable autonomous vehicles to drive as close to the center of the road as possible while sensitively avoiding surrounding obstacles, the reward function for reinforcement learning is constructed by comprehensively considering road boundaries and nearby vehicles. We account for the symmetry of the road by calculating the distances between the vehicle and both the left and right road boundaries, ensuring that the vehicle remains centered within the road. Secondly, to ensure the safety of decision-making, the safety constraints in autonomous driving scenarios are described using probabilistic computation tree logic, and the scenario is modeled as a stochastic hybrid automaton. Finally, the model is verified by the statistical model checker UPPAAL. The above method enables autonomous vehicles not only to independently acquire driving skills across diverse driving environments but also significantly enhances their obstacle avoidance capabilities, thereby ensuring driving safety.
- Research Article
- 10.1016/j.peva.2024.102449
- Nov 2, 2024
- Performance Evaluation
- Mathis Niehage + 1 more
Symbolic state-space exploration meets statistical model checking
- Research Article
- 10.1016/j.jss.2024.112238
- Oct 17, 2024
- The Journal of Systems & Software
- Leonardo Picchiami + 4 more
Scaling up statistical model checking of cyber-physical systems via algorithm ensemble and parallel simulations over HPC infrastructures
- Research Article
- 10.1145/3689731
- Oct 8, 2024
- Proceedings of the ACM on Programming Languages
- Seungmin Jeon + 5 more
Probabilistic model checking (PMC) is a verification technique for analyzing the properties of probabilistic systems. However, existing techniques face challenges in verifying large systems with high accuracy. PMC struggles with state explosion, where the number of states grows exponentially with the size of the system, making large system verification infeasible. While statistical model checking (SMC) avoids PMC’s state explosion problem by using a simulation approach, it suffers from runtime explosion, requiring numerous samples for high accuracy. To address these limitations in verifying large systems with high accuracy, we present quantum probabilistic model checking (QPMC), the first method leveraging quantum computing for PMC with respect to time-bounded properties. QPMC addresses state explosion by encoding PMC problems into quantum circuits that superpose states within qubits. Additionally, QPMC resolves runtime explosion through Quantum Amplitude Estimation, efficiently estimating the probabilities of specified properties. We prove that QPMC correctly solves PMC problems and achieves a quadratic speedup in time complexity compared to SMC.
- Research Article
- 10.1007/s10703-024-00463-0
- Aug 17, 2024
- Formal Methods in System Design
- Chaitanya Agarwal + 3 more
Markov decision processes (MDPs) and continuous-time MDPs (CTMDPs) are the fundamental models for non-deterministic systems with probabilistic uncertainty. Mean payoff (a.k.a. long-run average reward) is one of the most classic objectives considered in their context. We provide the first practical algorithm to compute mean payoff probably approximately correctly in unknown MDPs. Our algorithm is anytime in the sense that if terminated prematurely, it returns an approximate value with the required confidence. Further, we extend it to unknown CTMDPs. We do not require any knowledge of the states or of the number of successors of a state, but only a lower bound on the minimum transition probability, which has been advocated in the literature. Our algorithm learns the unknown MDP/CTMDP through repeated, directed sampling, thus spending less time on learning components with smaller impact on the mean payoff. In addition to providing probably approximately correct (PAC) bounds for our algorithm, we also demonstrate its practical nature by running experiments on standard benchmarks.
- Research Article
- 10.1177/00375497241264815
- Aug 8, 2024
- SIMULATION
- Yenda Ramesh + 1 more
Statistical model checking (SMC) for the analysis of multi-agent systems has been studied in the recent past. A feature peculiar to multi-agent systems in the context of statistical model checking is that of aggregate queries: temporal logic formulas that involve a large number of agents. To answer such queries through Monte Carlo sampling, the statistical approach to model checking simulates the entire agent population and evaluates the query. This makes the simulation overhead significantly higher than the query evaluation overhead. This problem becomes particularly challenging when the model checking queries involve multiple attributes of the agents. To alleviate this problem, we propose a population sampling algorithm that simulates only a subset of all the agents and scales to multiple attributes, thus making the solution generic. The population sampling approach results in increased efficiency (a gain in running time of 50%–100%) for a marginal loss in accuracy (between 1% and 5%) when compared with the exhaustive approach (which simulates the entire agent population to evaluate the query), especially for queries that involve limited time horizons. Finally, we report parallel versions of the above algorithms. We explore different strategies of core allocation, both for exhaustive simulations of all agents and the sampling approach. This yields further gains in running time, as expected. The parallel approach, when combined with the sampling idea, results in improved efficiency (a gain in running time of 100%–150%) with a minor loss in accuracy (between 1% and 5%) when compared with the exhaustive approach.
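The population-sampling idea is easy to picture: instead of simulating every agent to evaluate an aggregate query, simulate a random subset and accept a small, quantifiable accuracy loss. A hedged sketch (function names and the mean-valued query are illustrative, not the paper's algorithm):

```python
import random
import statistics

def aggregate_query_exhaustive(agents, attribute):
    # Baseline: simulate the entire population, then evaluate the query.
    return statistics.mean(attribute(a) for a in agents)

def aggregate_query_sampled(agents, attribute, sample_size, rng=None):
    # Population sampling: simulate only a random subset of the agents,
    # trading a marginal loss of accuracy for a shorter running time.
    rng = rng or random.Random(0)
    subset = rng.sample(agents, sample_size)
    return statistics.mean(attribute(a) for a in subset)
```

Simulating, say, 200 of 1000 agents cuts the simulation cost fivefold while the sampled mean typically stays within a few percent of the exhaustive one, mirroring the 1%–5% accuracy loss reported above.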
- Research Article
- 10.1007/s10270-024-01195-9
- Aug 5, 2024
- Software and Systems Modeling
- Hendrik Göttmann + 5 more
In many recent application domains, software systems must repeatedly reconfigure themselves at runtime to satisfy changing contextual requirements. Deciding which next configuration is presumably best suited is a very challenging task as it involves not only functional requirements but also non-functional properties (NFP). NFP include multiple, potentially contradicting, criteria like real-time constraints and cost measures like energy consumption. Effectiveness of context-aware reconfiguration decisions further depends on mostly uncertain future contexts, which makes greedy one-step decision heuristics potentially misleading. Moreover, the computational runtime overhead for reconfiguration planning should not nullify the benefits. Nevertheless, entirely pre-planning reconfiguration decisions during design time is also not feasible due to missing knowledge about runtime contexts. In this article, we propose a model-based technique for precomputing context-aware reconfiguration decisions under partially uncertain real-time constraints and cost measures. We employ a game-theoretic approach based on stochastic priced timed game automata as the reconfiguration model. This formal model allows us to automatically synthesize winning strategies for the first player (the system) which efficiently deliver presumably best-fitting reconfiguration decisions as reactions to moves of the second player (the context) at runtime. Our tool implementation copes with the high computational complexity of strategy synthesis by utilizing the statistical model checker Uppaal Stratego to approximate near-optimal solutions. We applied our tool to a real-world example consisting of a reconfigurable robot support system for the construction of aircraft fuselages. Our evaluation results show that Uppaal Stratego is indeed able to precompute effective reconfiguration strategies within a reasonable amount of time.
- Research Article
- 10.1016/j.ecolmodel.2024.110812
- Jul 30, 2024
- Ecological Modelling
- Gilles Ardourel + 5 more
Computational assessment of Amazon forest plots regrowth capacity under strong spatial variability for simulating logging scenarios
- Research Article
- 10.1145/3649438
- Jul 10, 2024
- ACM Transactions on Modeling and Computer Simulation
- David Julien + 3 more
We propose a simulation-based technique for the parameterization and the stability analysis of parametric Ordinary Differential Equations (ODEs). This technique is an adaptation of Statistical Model Checking, often used to verify the validity of biological models, to the setting of ODE systems. The aim of our technique is to estimate the probability of satisfying a given property under the variability of the parameter or initial condition of the ODE, with any metric of choice. To do so, we discretize the value space and use statistical model checking to evaluate each individual value w.r.t. provided data. Contrary to other existing methods, we provide statistical guarantees regarding our results that take into account the unavoidable approximation errors introduced through the numerical integration of the ODE system performed while simulating. In order to show the potential of our technique, we present its application to two case studies taken from the literature, one concerning the growth of a jellyfish population and the other a well-known oscillator model.
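The workflow this abstract describes (sample the parameter space, numerically integrate the ODE, estimate the probability that a property holds) can be sketched in a few lines. The logistic ODE and every name below are illustrative assumptions, not the paper's case studies:

```python
import random

def euler(f, y0, t_end, dt=0.01):
    # Forward-Euler integration of dy/dt = f(y); crude, but enough to show
    # that every simulated run carries a numerical-integration error.
    y, t = y0, 0.0
    while t < t_end:
        y += dt * f(y)
        t += dt
    return y

def grows_past_half(r):
    # Property: a logistic population dy/dt = r*y*(1-y), started at 0.1,
    # exceeds 0.5 by time t = 5.
    return euler(lambda y: r * y * (1 - y), 0.1, 5.0) > 0.5

def satisfaction_probability(property_holds, sample_param, n=500, rng=None):
    # SMC over parameter variability: sample the parameter, simulate, count.
    rng = rng or random.Random(0)
    hits = sum(1 for _ in range(n) if property_holds(sample_param(rng)))
    return hits / n

p = satisfaction_probability(grows_past_half, lambda rng: rng.uniform(0.0, 1.0))
```

Analytically the property holds exactly when r > ln(9)/5 ≈ 0.44, so with r uniform on [0, 1] the estimate converges to about 0.56; the statistical guarantees discussed in the abstract must additionally absorb the Euler integration error.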
- Research Article
- 10.3390/modelling5030037
- Jun 28, 2024
- Modelling
- Libero Nigro + 1 more
Mutual exclusion algorithms are at the heart of concurrent/parallel and distributed systems. It is well known that such algorithms are very difficult to analyze, and in the literature, different conjectures about starvation freedom and the number of by-passes (also called the overtaking factor) exist. The overtaking factor affects the (hopefully) bounded waiting time that a process competing for entering the critical section has to suffer before accessing the shared resource. This paper proposes a novel modeling approach based on Timed Automata and the Uppaal toolset, which proves effective for studying all the properties of a mutual exclusion algorithm for N≥2 processes, by exhaustive model checking. Although the approach, as already confirmed by similar experiments reported in the literature, is not scalable due to state explosion problems and can be practically applied only for N≤5, it is of great value for revealing the true properties of analyzed algorithms. For dimensions N>5, the Statistical Model Checker of Uppaal can be used, which, although based on simulations, can confirm properties through estimations and probabilities. This paper describes the proposed modeling and verification method and applies it to several mutual exclusion algorithms, thus retrieving known properties but also showing new results about properties often studied by informal reasoning.
- Research Article
- 10.1145/3635160
- Apr 30, 2024
- ACM Transactions on Cyber-Physical Systems
- Xin Qin + 4 more
Uncertainty in safety-critical cyber-physical systems can be modeled using a finite number of parameters or parameterized input signals. Given a system specification in Signal Temporal Logic (STL), we would like to verify that for all (infinite) values of the model parameters/input signals, the system satisfies its specification. Unfortunately, this problem is undecidable in general. Statistical model checking (SMC) offers a solution by providing guarantees on the correctness of CPS models by statistically reasoning on model simulations. We propose a new approach for statistical verification of CPS models for a user-provided distribution on the model parameters. Our technique uses model simulations to learn surrogate models, and uses conformal inference to provide probabilistic guarantees on the satisfaction of a given STL property. Additionally, we can provide prediction intervals containing the quantitative satisfaction values of the given STL property for any user-specified confidence level. We compare this prediction interval with the interval we get using risk estimation procedures. We also propose a refinement procedure based on Gaussian Process (GP)-based surrogate models for obtaining fine-grained probabilistic guarantees over sub-regions in the parameter space. This in turn enables the CPS designer to choose assured validity domains in the parameter space for safety-critical applications. Finally, we demonstrate the efficacy of our technique on several CPS models.
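In its simplest split-conformal form, the conformal-inference step mentioned above reduces to an empirical quantile of calibration scores: the (1-alpha) quantile of held-out residuals yields a margin whose coverage guarantee holds for any surrogate model. A minimal sketch (names are illustrative, not the paper's implementation):

```python
import math

def conformal_margin(calibration_scores, alpha=0.1):
    # Split conformal prediction: with n calibration residuals, the
    # ceil((n+1)*(1-alpha))-th smallest score is a margin that a fresh
    # residual exceeds with probability at most alpha.
    n = len(calibration_scores)
    k = math.ceil((n + 1) * (1 - alpha))
    return sorted(calibration_scores)[min(k, n) - 1]

def prediction_interval(point_prediction, calibration_scores, alpha=0.1):
    # Interval for the quantitative STL satisfaction value of a new sample.
    m = conformal_margin(calibration_scores, alpha)
    return point_prediction - m, point_prediction + m
```

For instance, with 99 calibration scores 1..99 and alpha=0.1 the margin is the 90th smallest score, i.e. 90, so a surrogate's point prediction widens to [prediction - 90, prediction + 90] with at least 90% coverage.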
- Research Article
- 10.3390/math12060812
- Mar 10, 2024
- Mathematics
- Libero Nigro + 1 more
Modeling and verification of the correct behavior of embedded real-time systems with strict timing constraints is a well-known and important problem. Failing to fulfill a deadline in system operation can have severe consequences in the practical case. This paper proposes an approach to formal modeling and schedulability analysis. A novel extension of Petri Nets named Constraint Time Petri Nets (C-TPN) is developed, which enables the modeling of a collection of interdependent real-time tasks whose execution is constrained by the use of priority and shared resources like processors and memory data. A C-TPN model is reduced to a network of Timed Automata in the context of the popular Uppaal toolbox. Both functional and, most importantly, temporal properties can be assessed by exhaustive model checking and/or statistical model checking based on simulations. This paper first describes and motivates the proposed C-TPN modeling language and its formal semantics. Then, a Uppaal translation is shown. Finally, three models of embedded real-time systems are considered, and their properties are thoroughly verified.
- Research Article
- 10.1016/j.jss.2024.111983
- Jan 23, 2024
- Journal of Systems and Software
- Roberto Casaluce + 4 more
We propose a novel methodology to validate software product line (PL) models by integrating Statistical Model Checking (SMC) with Process Mining (PM). We consider the feature-oriented language QFLan from the PL engineering domain. QFLan allows modeling PLs equipped with rich cross-tree and quantitative constraints, as well as aspects of dynamic PLs such as staged configurations. This richness makes it easy to obtain models with an infinite state space, calling for simulation-based analysis techniques like SMC; our running example is one such infinite-state model. SMC is a family of analysis techniques based on generating samples of the dynamics of a system. SMC aims at estimating properties of a system such as the probability of a given event (e.g., installing a feature) or the expected value of quantities in it (e.g., the average price of products from the studied family). PM, instead, is a family of data-driven techniques that uses logs collected on the execution of an information system to identify and reason about its underlying execution process, often to identify process patterns, bottlenecks, and possibilities for improvement. In this paper, to the best of our knowledge, we propose for the first time the application of PM techniques to the byproducts of SMC simulations, aiming to enhance the utility of SMC analyses. Typically, if SMC gives unexpected results, the modeler has to discover whether these come from actual characteristics of the system or from bugs in the model. This is done in a black-box manner, based only on the obtained numerical values. We improve on this by using PM to get a white-box perspective on the dynamics of the system observed by SMC. Roughly speaking, we feed the samples generated by SMC to PM tools, obtaining a compact graphical representation of the observed dynamics. This mined PM model is then transformed into a mined QFLan model, making it accessible to PL engineers. Using two well-known PL models, we show that our methodology is effective (it helps pinpoint issues in models and suggest fixes) and that it scales to complex models. We also show that it is general, by applying it to the security domain.
- Research Article
- 10.3390/fi15120378
- Nov 26, 2023
- Future Internet
- Fawad Ali Mangi + 2 more
The study of business process analysis and optimisation has attracted significant scholarly interest in the recent past, due to its integral role in boosting organisational performance. A specific area of focus within this broader research field is process mining (PM). Its purpose is to extract knowledge and insights from event logs maintained by information systems, thereby discovering process models and identifying process-related issues. On the other hand, statistical model checking (SMC) is a verification technique used to analyse and validate properties of stochastic systems that employs statistical methods and random sampling to estimate the likelihood of a property being satisfied. In a seamless business setting, it is essential to validate and verify process models. The objective of this paper is to apply the SMC technique in process mining for the verification and validation of process models with stochastic behaviour and large state spaces, where probabilistic model checking is not feasible. We propose a novel methodology in this research direction that integrates SMC and PM by formally modelling discovered and replayed process models and applying statistical methods to estimate the results. The methodology facilitates an automated and proficient evaluation of the extent to which a process model aligns with user requirements and assists in selecting the optimal model. We demonstrate the effectiveness of our methodology with a case study of a loan application process performed in a financial institution that deals with loan applications submitted by customers. The case study highlights our methodology’s capability to identify the performance constraints of various process models and aid enhancement efforts.
- Research Article
- 10.1145/3607198
- Oct 26, 2023
- ACM Transactions on Modeling and Computer Simulation
- Timo P Gros + 8 more
Neural networks (NNs) are gaining importance in sequential decision-making. Deep reinforcement learning (DRL), in particular, is extremely successful in learning action policies in complex and dynamic environments. Despite this success, however, DRL technology is not without its failures, especially in safety-critical applications: (i) the training objective maximizes average rewards, which may disregard rare but critical situations and hence lack local robustness; (ii) optimization objectives targeting safety typically yield degenerated reward structures, which, for DRL to work, must be replaced with proxy objectives. Here, we introduce a methodology that can help to address both deficiencies. We incorporate evaluation stages (ES) into DRL, leveraging recent work on deep statistical model checking (DSMC), which verifies NN policies in Markov decision processes. Our ES apply DSMC at regular intervals to determine state space regions with weak performance. We adapt the subsequent DRL training priorities based on the outcome, (i) focusing DRL on critical situations and (ii) making it possible to foster arbitrary objectives. We run case studies on two benchmarks. One of them is Racetrack, an abstraction of autonomous driving that requires navigating a map without crashing into a wall. The other is MiniGrid, a widely used benchmark in the AI community. Our results show that DSMC-based ES can significantly improve both (i) and (ii).