Articles published on Formal Verification
- Research Article
- 10.1002/spy2.70131
- Nov 1, 2025
- SECURITY AND PRIVACY
- Suprith Kumar K S + 3 more
Lightweight and secure authentication is a fundamental requirement for mobile roaming in edge‐assisted networks, particularly in the presence of resource constraints and the emerging threat of quantum‐capable adversaries. This paper proposes a blockchain‐assisted authentication protocol that employs post‐quantum cryptographic primitives to generate and validate device‐bound tokens. During registration, a Home Agent (HA) issues blockchain‐anchored tokens containing signed security metadata and a freshness counter to prevent replay attacks. In roaming scenarios, the Mobile User (MU) selectively discloses token metadata to the Foreign Agent (FA), which verifies its authenticity with the HA to enable efficient and trustworthy authentication. A hybrid key establishment using post‐quantum key encapsulation ensures forward secrecy and quantum‐resistant confidentiality. Formal verification through BAN logic reasoning and automated analysis using the Scyther tool confirm that the protocol withstands impersonation, replay, and man‐in‐the‐middle attacks. Experimental evaluation on mobile devices demonstrates low computational and communication overhead, showing that the protocol is practical and well‐suited for real‐world deployment in edge‐assisted mobility environments.
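The replay defense the abstract describes, a signed token carrying a monotonically increasing freshness counter, can be sketched as follows. This is a minimal illustration, not the paper's protocol: an HMAC stands in for the post-quantum signature, and all keys and names are invented.

```python
import hmac, hashlib, json

HA_KEY = b"demo-home-agent-key"  # placeholder; the paper uses post-quantum signatures

def issue_token(device_id: str, counter: int) -> dict:
    """HA side: bind device metadata and a freshness counter, then sign
    (HMAC stands in for the paper's post-quantum signature scheme)."""
    payload = json.dumps({"dev": device_id, "ctr": counter}, sort_keys=True)
    tag = hmac.new(HA_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verify_token(token: dict, last_seen_ctr: int) -> bool:
    """Verifier side: check the tag, then reject any counter that does not
    strictly increase, which defeats replayed tokens."""
    expected = hmac.new(HA_KEY, token["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["tag"]):
        return False
    return json.loads(token["payload"])["ctr"] > last_seen_ctr

t = issue_token("mu-01", counter=5)
assert verify_token(t, last_seen_ctr=4)      # fresh token accepted
assert not verify_token(t, last_seen_ctr=5)  # replayed counter rejected
```

The same check works whether the FA validates locally against cached state or defers to the HA, as in the paper's roaming flow.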
- Research Article
- 10.1371/journal.pone.0332943
- Oct 27, 2025
- PLOS One
- Zhanfei Ma + 5 more
Authentication is a crucial challenge for Internet of Things (IoT) security, especially in open, distributed, and resource-constrained environments. Current methods have significant shortcomings in efficiency, adaptability, and the ability to cope with sophisticated security threats. This paper therefore proposes a lightweight Cloud-Edge-End authentication framework that integrates an enhanced Fast Authentication and Signature Trust for SM9 (FAST-SM9) algorithm with a zero-trust Dynamic Re-authentication (zero-trust-DRA) mechanism. First, FAST-SM9 reduces protocol overhead while preserving security by tightly integrating the authentication and signature processes. Its architectural optimization reduces the number of communication rounds by 40% and simplifies trust negotiation between heterogeneous layers without weakening the encryption mechanisms. To enhance runtime protection, the zero-trust-DRA mechanism introduces context-aware, time-windowed re-authentication, efficiently defending against risks such as session hijacking and credential leakage. In addition, the Dynamic Identity Token Generation Mechanism (DITGM) enhances the security and flexibility of the system by incorporating multi-factor attributes, such as fingerprints and OTP seeds, into time-sensitive tokens. Experimental results show that the scheme reduces latency by 56.6% and energy consumption by 63% compared with traditional PKI edge authentication methods, and that it effectively resists the relevant attacks. Formal verification with the AVISPA tool further confirms its security, and scalability testing demonstrates its applicability to the IoT. The scheme offers a feasible path toward efficient and secure identity authentication in distributed systems, helping to advance zero-trust security architectures.
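The time-sensitive, multi-factor token the abstract attributes to DITGM can be sketched as a keyed derivation over a time-window index. This is an illustrative guess at the construction, not the paper's algorithm; the factor values and window size are invented.

```python
import hashlib, hmac, struct

def dynamic_token(fingerprint_hash: bytes, otp_seed: bytes, slot: int) -> str:
    """Derive a short-lived identity token from multi-factor attributes,
    bound to a time-window index `slot` (e.g. unix_time // 30), so a
    captured token expires when the window rolls over."""
    msg = fingerprint_hash + struct.pack(">Q", slot)  # bind factors to the window
    return hmac.new(otp_seed, msg, hashlib.sha256).hexdigest()[:16]

fp = hashlib.sha256(b"fingerprint-template").digest()  # illustrative biometric factor
seed = b"per-device-otp-seed"                          # illustrative OTP seed
assert dynamic_token(fp, seed, 1) == dynamic_token(fp, seed, 1)  # stable within a window
assert dynamic_token(fp, seed, 1) != dynamic_token(fp, seed, 2)  # rotates across windows
```

Passing the window index explicitly keeps the derivation deterministic and testable; a deployment would compute it from a synchronized clock.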
- Research Article
- 10.3390/electronics14214164
- Oct 24, 2025
- Electronics
- Seungbin Lee + 3 more
The Internet of Medical Things (IoMT) comprises the application of traditional Internet of Things (IoT) technologies in the healthcare domain. IoMT ensures seamless data-sharing among hospitals, patients, and healthcare service providers, thereby transforming the medical environment. The adoption of IoMT technology has made it possible to provide various medical services such as chronic disease care, emergency response, and preventive treatment. However, the sensitivity of medical data and the resource limitations of IoMT devices present persistent challenges in designing authentication protocols. Our study reviews the overall architecture of the IoMT and recent studies on IoMT protocols in terms of security requirements and computational costs. In addition, this study evaluates security using the formal verification tools Scyther and SVO Logic. The security requirements include authentication, mutual authentication, confidentiality, integrity, untraceability, privacy preservation, anonymity, multi-factor authentication, session key security, forward and backward secrecy, and lightweight operation. The analysis shows that protocols satisfying multiple security requirements tend to have higher computational costs, whereas protocols with lower computational costs often provide weaker security, demonstrating the trade-off between robust security and lightweight operation. These indicators assist in selecting protocols by balancing the allocated resources and the required security for each scenario. Based on the comparative analysis and security evaluation of the IoMT, this paper provides security guidelines for future research. Moreover, it summarizes the minimum security requirements and offers insights that practitioners can utilize in real-world settings.
- Research Article
- 10.3390/s25206428
- Oct 17, 2025
- Sensors (Basel, Switzerland)
- Yubao Liu + 3 more
Computational task offloading is a key technology in the field of vehicle-to-everything (V2X) communication, where security issues represent a core challenge throughout the offloading process. We must ensure the legitimacy of both the offloading entity (requesting vehicle) and the offloader (edge server or assisting vehicle), as well as the confidentiality and integrity of task data during transmission and processing. To this end, we propose a security authentication scheme for the V2X computational task offloading environment. We conducted rigorous formal and informal analyses of the scheme, supplemented by verification using the formal security verification tool AVISPA. This demonstrates that the proposed scheme possesses fundamental security properties in the V2X environment, capable of resisting various threats and attacks. Furthermore, compared to other related authentication schemes, our proposed solution exhibits favorable performance in terms of computational and communication overhead. Finally, we conducted network simulations using NS-3 to evaluate the scheme’s performance at the network layer. Overall, the proposed scheme provides reliable and scalable security guarantees tailored to the requirements of computing task offloading in V2X environments.
- Research Article
- 10.1007/s12083-025-02141-2
- Oct 15, 2025
- Peer-to-Peer Networking and Applications
- Sudip Kumar Palit + 2 more
Design, formal verification, and security analysis of SCMChain: A lightweight blockchain-based authentication protocol for IoT-enabled supply chain management
- Research Article
- 10.1097/mao.0000000000004638
- Oct 9, 2025
- Otology & neurotology : official publication of the American Otological Society, American Neurotology Society [and] European Academy of Otology and Neurotology
- Guy Fierens + 4 more
This study aimed to evaluate the behavior of multiple active hearing implants in the 7T magnetic resonance (MR) environment by assessing the interactions between each implantable device and the MR environment. One cochlear implant and two bone conduction implant models were used in the study. The use of MR techniques in patients with active hearing implants has become daily practice at 1.5 and 3T. Scanners using field strengths of 7T are becoming more widely available and are likely to be associated with even greater patient risk. Six potential interactions were investigated: magnetically induced force and torque, retaining-magnet magnetization, device functionality, device heating, and image artifacts. Device functionality was verified after 10 exposures. When no magnet was present, the force ratio, defined as the magnetically induced force divided by the force induced by gravity, remained below 0.3 for all devices. With the magnet in place, the force ratio increased to 11. Average magnetization changes measured were similar to the population spread at baseline. For all devices, heating did not exceed 0.35°C above background heating after 15 minutes of consecutive scanning at 3.2 W/kg or with a gradient field strength of 41.8 T/s. The findings show no adverse effects or performance degradation of the implants within the predefined test conditions. Preliminary outcomes of this feasibility study are positive, yet they do not imply implant safety in the 7T MR environment. Formal verification will be required to label a device safe at this field strength.
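The force ratio defined in the abstract is a simple quotient of the induced force and the implant's weight. The sketch below uses invented numbers purely to show the arithmetic; they are not measurements from the study.

```python
G = 9.81  # standard gravity, m/s^2

def force_ratio(magnetic_force_n: float, implant_mass_kg: float) -> float:
    """Magnetically induced force divided by the force induced by gravity
    (the dimensionless ratio reported in the study)."""
    return magnetic_force_n / (implant_mass_kg * G)

# Illustrative values only: a 10 g implant experiencing 0.025 N of induced force.
r = force_ratio(0.025, 0.010)
assert r < 0.3  # below the study's reported no-magnet threshold
```
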
- Research Article
- 10.1145/3763084
- Oct 9, 2025
- Proceedings of the ACM on Programming Languages
- Joonwon Choi + 2 more
In formal hardware verification, particularly for Register-Transfer Level (RTL) designs in Verilog, model checking has been the predominant technique. However, it suffers from state explosion, limited expressive power, and a large trusted computing base (TCB). Deductive verification offers greater expressive power and enables foundational verification with a minimal TCB. Nevertheless, Verilog's standard semantics, characterized by its nondeterministic and global scheduling, pose significant challenges to its application. To address these challenges, we propose a new Verilog semantics designed to facilitate deductive verification. Our semantics is based on least fixpoints to enable cycle-level functional evaluation and modular reasoning. For foundational verification, we prove our semantics equivalent to the standard scheduling semantics for synthesizable designs. We demonstrate the benefits of our semantics with a modular verification of a pipelined RISC-V processor's functional correctness and progress guarantees. All our results are mechanized in Rocq.
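The cycle-level, fixpoint-based evaluation described above can be illustrated with a toy loop that re-evaluates combinational wires until they stabilize, so each clock cycle becomes a deterministic function of the register state. This is an invented sketch, not the paper's Rocq formalization, and it omits the lattice and monotonicity machinery a real least-fixpoint semantics requires; it assumes the wire updates stabilize.

```python
def eval_cycle(regs: dict, comb_update) -> dict:
    """Iterate the combinational update until the wire valuation stops
    changing; the stable valuation is the cycle's output (a fixpoint)."""
    wires: dict = {}
    while True:
        new = comb_update(regs, wires)
        if new == wires:
            return new  # fixpoint reached
        wires = new

# Toy netlist: wire b = a XOR cin; wire c = NOT b (c reads b from the
# previous iteration, so convergence takes more than one pass).
def comb(regs, w):
    b = regs["a"] ^ regs["cin"]
    return {"b": b, "c": 1 - w.get("b", 0)}

assert eval_cycle({"a": 1, "cin": 0}, comb) == {"b": 1, "c": 0}
```

A register update step would then feed the stable wire values back into `regs` for the next cycle, which is what makes per-cycle modular reasoning possible.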
- Research Article
- 10.1145/3763051
- Oct 9, 2025
- Proceedings of the ACM on Programming Languages
- Lang Liu + 4 more
Given the high cost of formal verification, a large system may include differently analyzed components: a few are fully verified, and the rest are tested. Currently, there is no reasoning system that can soundly compose these heterogeneous analyses and derive the overall formal guarantees of the entire system. The traditional compositional reasoning technique, rely-guarantee reasoning, is effective for verified components, which undergo over-approximated reasoning, but not for components that undergo under-approximated reasoning, e.g., via testing or other program analysis techniques. The goal of this paper is to develop a formal, logical foundation for composing heterogeneous analyses, deploying both over-approximated (verification) and under-approximated (testing) reasoning. We focus on systems that can be modeled as a collection of communicating processes. Each process owns its internal resources and a set of channels through which it communicates with other processes. The key idea is to quantify the guarantees obtained about the behavior of a process as a test level, which captures the constraints under which the guarantee is analyzed to be true. We design a novel proof system, LabelBI, based on the logic of bunched implications, that enables rely-guarantee reasoning principles for a system of differently analyzed components. We develop trace semantics for this logic, against which we prove the logic sound. We also prove cut elimination for our sequent calculus. We demonstrate the expressiveness of our logic via a case study.
- Research Article
- 10.1145/3763181
- Oct 9, 2025
- Proceedings of the ACM on Programming Languages
- Eric Mugnier + 3 more
Auto-active verifiers like Dafny aim to make formal methods accessible to non-expert users through SMT automation. However, despite the automation and other programmer-friendly features, they remain sparsely used in real-world software development because of the significant effort required to apply them in practice. We interviewed 14 experienced Dafny users about their experiences using it in large-scale projects. We apply grounded theory to analyze the interviews, systematically identifying how auto-active verification impacts software development and identifying opportunities to simplify its use and hence expand the adoption of verification in software development.
- Research Article
- 10.1145/3763157
- Oct 9, 2025
- Proceedings of the ACM on Programming Languages
- Feifei Cheng + 5 more
Despite recent progress, quantum program verification is still in its early stages: many quantum programs are hard to verify due to their inherent probabilistic nature and the parallelism of quantum superposition. We propose Qafny C, a system that compiles quantum program verification into the well-established classical program verifier Dafny, enabling the formal verification of quantum programs. The key insight behind Qafny C is the separation of quantum program verification from its execution, leveraging the strength of classical verifiers to ensure correctness before compiling certified quantum programs into executable circuits. Using Qafny C, we have successfully verified 37 diverse quantum programs by compiling their verification into Dafny. To the best of our knowledge, this is the most extensive formally verified set of quantum programs.
- Research Article
- 10.1017/cbp.2025.10003
- Oct 7, 2025
- Research Directions: Cyber-Physical Systems
- Pengyuan Lu + 4 more
Neural network (NN)-based control policies have proven their advantages in cyber-physical systems (CPS). When an NN-based policy fails to fulfill a formal specification, engineers leverage NN repair algorithms to fix its behavior. However, such repair techniques risk breaking existing correct behaviors, losing not only correctness but also the verifiability of initial-state subsets; that is, the repair may introduce new, previously unaccounted-for risks. In response, we formalize the problem of Repair with Preservation (RwP) and develop Incremental Simulated Annealing Repair (ISAR), an NN repair algorithm that aims to preserve correctness and verifiability while repairing as many failures as possible. Our algorithm leverages simulated annealing on a barriered energy function to safeguard the already-correct initial states while repairing as many additional ones as possible, and formal verification is used to guarantee the repair results. ISAR is compared to a reviewed set of state-of-the-art algorithms, including (1) reinforcement learning-based techniques (STLGym and F-MDP), (2) supervised learning-based techniques (MIQP and minimally deviating repair), and (3) online shielding techniques (tube MPC shielding). Upon evaluation on two standard benchmarks, the OpenAI Gym mountain car and an unmanned underwater vehicle, ISAR not only preserves correct behaviors from previously verified initial-state regions but also repairs 81.4% and 23.5% of the broken state spaces in the two benchmarks. Moreover, the signal temporal logic (STL) robustness of the ISAR-repaired policies is higher than that of the baselines.
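The barriered-energy idea, simulated annealing that can never accept a move breaking an already-verified behavior, can be sketched on a toy scalar problem. The objective, interval, and parameter names below are invented for illustration; the paper operates on NN policies with formal verification in the loop.

```python
import math, random

def barriered_energy(p, failures, verified_ok):
    # Barrier: any parameters that break a previously verified behavior
    # get infinite energy and can never be accepted.
    return failures(p) if verified_ok(p) else math.inf

def anneal(p, neighbor, failures, verified_ok, steps=2000, t0=1.0):
    """Simulated annealing over the barriered energy: reduce failures
    without ever leaving the verified region."""
    rng = random.Random(0)
    e = barriered_energy(p, failures, verified_ok)
    best, e_best = p, e
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9  # cooling schedule
        cand = neighbor(p, rng)
        e_cand = barriered_energy(cand, failures, verified_ok)
        if e_cand < e or rng.random() < math.exp(-(e_cand - e) / t):
            p, e = cand, e_cand
            if e < e_best:
                best, e_best = p, e
    return best

# Toy instance: drive a scalar "policy parameter" toward zero failures
# while staying inside the verified interval [-1, 1].
repaired = anneal(0.9,
                  neighbor=lambda p, rng: p + rng.uniform(-0.1, 0.1),
                  failures=lambda p: abs(p),
                  verified_ok=lambda p: -1.0 <= p <= 1.0)
assert -1.0 <= repaired <= 1.0  # the barrier is never crossed
```

Because an out-of-region candidate has infinite energy, its acceptance probability is exactly zero, which is the preservation guarantee in miniature.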
- Research Article
- 10.62056/a0wa0lmol
- Oct 6, 2025
- IACR Communications in Cryptology
- Sabine Oechsner + 2 more
Computer-aided cryptography, with particular emphasis on formal verification, promises an interesting avenue for establishing strong guarantees about cryptographic primitives. The appeal of formal verification is to replace error-prone pen-and-paper proofs with a proof that was checked by a computer and, therefore, does not need to be checked by a human. In this paper, we ask how reliable these machine-checked proofs are by analyzing a formally verified implementation of the Line-Point Zero-Knowledge (LPZK) protocol (Dittmer, Eldefrawy, Graham-Lengrand, Lu, Ostrovsky and Pereira, CCS 2023). The implementation was developed in EasyCrypt and compiled into OCaml code that was claimed to be high-assurance, i.e., to offer the formal guarantees of completeness, soundness, and zero knowledge. We show that despite these formal claims, the EasyCrypt model was flawed, and the supposedly high-assurance implementation had critical security vulnerabilities. Concretely, we demonstrate that: 1) the EasyCrypt soundness proof was done incorrectly, allowing an attack on the scheme that leads honest verifiers into accepting false statements; and 2) the EasyCrypt formalization inherited a deficient model of zero knowledge for a class of non-interactive zero-knowledge protocols that also allows the verifier to recover the witness. In addition, we demonstrate 3) a gap in the proof of the perfect zero-knowledge property of the LPZK variant of Dittmer, Ishai, Lu and Ostrovsky (CCS 2022), on which the EasyCrypt proof is based, which, depending on the interpretation of the protocol and security claim, could allow a malicious verifier to learn the witness. Our findings highlight the importance of scrutinizing machine-checked proofs, including their models and assumptions. We offer lessons learned for both users and reviewers of tools like EasyCrypt, aimed at improving the transparency, rigor, and accessibility of machine-checked proofs.
By sharing our methodology and challenges, we hope to foster a culture of deeper engagement with formal verification in the cryptographic community.
- Research Article
- 10.3390/sym17101659
- Oct 5, 2025
- Symmetry
- Xinfei Liao + 5 more
Graph simulation and its variants are widely used in graph pattern matching. Among them, related works have added regular expressions to graph patterns, which can discover more meaningful data while still solving problems in polynomial time. In this research, building on Fan's investigations, we first propose an approximation of graph simulation using the concept of a metric together with formal verification techniques, and then define approximate matching between pattern graphs with regular expressions and data graphs, which introduces a symmetric tolerance for errors, bridging exact and approximate matching. Finally, we present a logical characterization of approximate graph simulation by extending Hennessy–Milner logic.
- Research Article
- 10.3390/s25196144
- Oct 4, 2025
- Sensors (Basel, Switzerland)
- Jhury Kevin Lastre + 3 more
Cross-border Fifth Generation Mobile Communication (5G) roaming requires secure N32 connections between network operators via Security Edge Protection Proxy (SEPP) interfaces, but current Transport Layer Security (TLS) 1.3 implementations face a critical trade-off between connection latency and security guarantees. Standard TLS 1.3 optimization modes either compromise Perfect Forward Secrecy (PFS) or suffer from replay vulnerabilities, while full handshakes impose excessive latency penalties for time-sensitive roaming services. This research introduces Zero Round Trip Time Forward Secrecy (0-RTT FS), a novel protocol extension that achieves zero round-trip performance while maintaining comprehensive security properties, including PFS and replay protection. Our solution addresses the fundamental limitation where existing TLS 1.3 optimizations sacrifice security for performance in international roaming scenarios. Through formal verification using ProVerif and comprehensive performance evaluation, we demonstrate that 0-RTT FS delivers 195.0 μs handshake latency (only 17% overhead compared to insecure 0-RTT) while providing full security guarantees that standard modes cannot achieve. Security analysis reveals critical replay vulnerabilities in all existing standard TLS 1.3 optimization modes, which our proposed approach successfully mitigates. The research provides operators with a decision framework for configuring sub-millisecond secure handshakes in next-generation roaming services, enabling both optimal performance and robust security for global 5G connectivity.
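The replay vulnerability the abstract attributes to standard 0-RTT modes comes down to early data being accepted more than once. A minimal server-side mitigation sketch, single-use tickets, is shown below; this is an illustration of the general idea, not the paper's 0-RTT FS protocol, which additionally mixes in fresh key shares for forward secrecy.

```python
import secrets

class ZeroRTTReplayGuard:
    """Server-side guard: each early-data ticket is single-use, so a
    replayed ticket is rejected and must fall back to a full handshake."""

    def __init__(self) -> None:
        self.seen: set[str] = set()

    def new_ticket(self) -> str:
        # Unpredictable per-session ticket identifier (illustrative).
        return secrets.token_hex(16)

    def accept_early_data(self, ticket: str) -> bool:
        if ticket in self.seen:
            return False  # replay detected: refuse 0-RTT data
        self.seen.add(ticket)
        return True

g = ZeroRTTReplayGuard()
t = g.new_ticket()
assert g.accept_early_data(t)       # first use accepted
assert not g.accept_early_data(t)   # replay rejected
```

In a distributed SEPP deployment the `seen` set would have to be shared or sharded across instances, which is exactly the operational cost such anti-replay state imposes.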
- Research Article
- 10.1109/les.2025.3598202
- Oct 1, 2025
- IEEE Embedded Systems Letters
- Julian Göppert + 1 more
Formal Modeling and Verification of Generic Credential Management Processes for Industrial Cyber–Physical Systems
- Research Article
- 10.1016/j.scico.2025.103316
- Oct 1, 2025
- Science of Computer Programming
- Wahiba Bachiri + 2 more
Formal specification and SMT verification of quantized neural network for autonomous vehicles
- Research Article
- 10.1007/s42154-024-00308-w
- Oct 1, 2025
- Automotive Innovation
- Zhenhai Gao + 3 more
Autonomous driving technology faces significant safety challenges, particularly at unsignalized intersections. Centralized cooperative methods have been developed to manage the flow of connected and automated vehicles. However, many existing approaches depend on basic control algorithms, leading to lengthy inference times and suboptimal solutions, which compromises real-time performance, road resource utilization, and traffic efficiency. While some studies have integrated reinforcement learning (RL) techniques to address these issues, they often compromise safety due to reward-driven optimization and the oversimplification of traffic scenarios, such as designing specific flow directions. These limitations raise concerns about their real-world applicability and safety. To address these shortcomings, this paper introduces a novel behavior-constrained proximal policy optimization (BCPPO) method for RL-based cooperative vehicle control at intersections. First, the problem is formulated as a multi-agent RL task within a Markov Game (MG) framework, and a multi-agent proximal policy optimization (MAPPO) algorithm is proposed to handle the complex cooperative dynamics among multiple agents. The policy network employs a Long Short-Term Memory (LSTM) encoder to capture extensive social-interaction information among the agents. Second, the intersection control problem is formalized within the MG framework, and a safety-enhanced cooperative vehicle control strategy, BCPPO, is proposed. This method integrates formal safety verification and behavior constraints into the training and deployment of MAPPO to ensure safety and robustness. Finally, extensive simulation experiments are conducted across various intersection scenarios to evaluate the performance of BCPPO against RL-based proximal policy optimization (PPO), the rule-based first-come-first-served (FCFS) method, and the optimal control (OC)-based vehicles-intersection control system (VICS).
The results demonstrate that BCPPO achieves a zero-collision rate during deployment and enhances driving comfort by 60.75%, compared to the non-safety-aware PPO method, which has a collision rate of about 13.85%. Furthermore, BCPPO improves traffic efficiency by 16.15% in comparison to FCFS and reduces inference time by a factor of 71.73 relative to the VICS method.
- Research Article
- 10.1016/j.engappai.2025.111266
- Oct 1, 2025
- Engineering Applications of Artificial Intelligence
- Xia Wang + 4 more
Formal verification for multi-agent path execution in stochastic environments
- Research Article
- 10.1109/tsmc.2025.3585039
- Oct 1, 2025
- IEEE Transactions on Systems, Man, and Cybernetics: Systems
- Jian Song + 5 more
An Innovative Formal Verification Method Based on Timed Petri Nets With Integrated Database Tables
- Research Article
- 10.1145/3770068
- Sep 30, 2025
- ACM Transactions on Intelligent Systems and Technology
- Luca Marzari + 4 more
Ensuring safety in reinforcement learning (RL) is critical for deploying agents in real-world applications. During training, current safe RL approaches often rely on indicator cost functions that provide sparse feedback, resulting in two key limitations: (i) poor sample efficiency due to the lack of safety information in neighboring states, and (ii) dependence on cost-value functions, leading to brittle convergence and suboptimal performance. After training, safety is guaranteed via formal verification methods for deep neural networks (FV), whose computational complexity hinders their application during training. We address the limitations of using cost functions via verification by proposing a safe RL method based on a violation value—the risk associated with policy decisions in a portion of the state space. Our approach verifies safety properties (i.e., state-action pairs) that may lead to unsafe behavior, and quantifies the size of the state space where properties are violated. This violation value is then used to penalize the agent during training to encourage safer policy behavior. Given the NP-hard nature of FV, we propose an efficient, sample-based approximation with probabilistic guarantees to compute the violation value. Extensive experiments on standard benchmarks and real-world robotic navigation tasks show that violation-augmented approaches significantly improve safety by reducing the number of unsafe states encountered while achieving superior performance compared to existing methods.
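The violation value described above, the size of the state-space portion where a safety property fails, admits a simple sample-based approximation. The sketch below is a generic Monte Carlo estimate with an invented toy property; the paper supplies the probabilistic guarantees and the FV-based property definitions.

```python
import random

def violation_value(unsafe, region_sampler, n=10_000, seed=0):
    """Monte Carlo estimate of the fraction of a state region whose
    decisions violate a safety property (the 'violation value')."""
    rng = random.Random(seed)
    hits = sum(unsafe(region_sampler(rng)) for _ in range(n))
    return hits / n

# Toy property: states with x in [0, 1] are unsafe, inside a region x in [0, 4],
# so the true violation value is 0.25.
v = violation_value(lambda s: 0.0 <= s <= 1.0,
                    lambda rng: rng.uniform(0.0, 4.0))
assert 0.2 < v < 0.3

# The estimate can then penalize the agent's reward during training:
reward, lam = 1.0, 2.0
shaped_reward = reward - lam * v  # denser signal than a sparse indicator cost
```

Unlike an indicator cost that fires only on an actual violation, `v` is nonzero whenever nearby states are risky, which is the denser feedback the paper exploits.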