Articles published on Satisfiability modulo theories
809 Search results
- Research Article
- 10.1007/s10817-025-09746-5
- Dec 23, 2025
- Journal of Automated Reasoning
- Guilherme V Toledo + 2 more
Abstract This is the first part of an analysis of the interplay between multiple properties that are related to combination methodologies for theories in the field of satisfiability modulo theories. Here we focus on Nelson-Oppen and polite theory combinations, leading to a total of five model-theoretic properties to be considered: stable infiniteness, smoothness, finite witnessability, strong finite witnessability, and convexity. Our first result is an improvement on polite theory combination, showing that it is possible when only assuming stable infiniteness and strong finite witnessability, and thus implying smoothness is not a prerequisite for this method. Second, we provide examples of Boolean combinations of the aforementioned five properties whenever they are possible (e.g., a theory that admits all the properties, a theory that admits none, etc.), sharp in the sense that no theories within simpler signatures may exhibit the exact same properties, and prove which combinations cannot occur. Among these examples, the most surprising one is that of a polite yet not strongly polite theory in one sort, a combination whose previous example in the literature was two-sorted.
- Research Article
- 10.1186/s42400-025-00393-2
- Dec 17, 2025
- Cybersecurity
- Fen Liu + 5 more
Abstract The cipher is a lightweight tweakable block cipher introduced at FSE 2019. Its design incorporates countermeasures against Differential Fault Attacks at the algorithmic level. The cipher employs a lightweight, involutory S-box along with a simple linear layer, enabling efficient encryption and decryption. In particular, it utilizes a straightforward tweakey schedule that generates four 64-bit round tweakeys, which are reused throughout the encryption process. Despite its lightweight design, the cipher's resistance to impossible differential analysis has not been thoroughly evaluated, having received limited attention from cryptanalysts. Hence, this paper presents a comprehensive analysis of the cipher, specifically targeting its resistance to impossible differential cryptanalysis. By employing a Satisfiability Modulo Theories (SMT)-based automatic search tool, we successfully identify both 12-round related-tweak impossible differential distinguishers and 15-round related-tweakey impossible differential distinguishers, marking the first discovery of such distinguishers for this cipher. Our results indicate that the tweak enhances the cipher's flexibility, provided it receives appropriate attention. In addition, we conduct key-recovery attacks on reduced-round variants, successfully recovering the 128-bit keys for the 20-round, 21-round, and 23-round variants. Based on our comprehensive analysis and experimental results, we conclude that the cipher demonstrates effective resistance against impossible differential cryptanalysis.
- Research Article
- 10.1103/v8f8-s11v
- Dec 15, 2025
- Physical Review Physics Education Research
- Lachlan Mcginness + 1 more
This paper explores the potential of large language models to accurately extract and translate equations from typed student responses into a standard format. This is a useful task because standardized equations can be graded reliably using a computer algebra system or a satisfiability modulo theories solver, so physics instructors interested in automated grading would not need to rely on the mathematical reasoning capabilities of language models. We used two novel frameworks to improve the translations. The first is consensus, where a pair of models verify the correctness of the translations. The second is a neurosymbolic LLM-modulo approach, where models receive feedback from an automated reasoning tool. We performed experiments using responses to the Australian Physics Olympiad exam. We find that no open-source model was able to translate the student responses at the desired level of accuracy. Future work could involve breaking the task into smaller components before parsing to improve performance, or generalizing the experiments to translate handwritten responses.
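A grader along these lines must decide whether a translated student equation matches the reference equation. The sketch below is a hypothetical stand-in, not the paper's pipeline: instead of calling a CAS or SMT solver, it spot-checks the two equations' residuals at random sample points, which makes it scale-sensitive and only probabilistic.

```python
import random

def equations_equivalent(f, g, n_vars, trials=200, tol=1e-9):
    """Heuristic equivalence check: sample random points and compare the
    residuals (LHS - RHS) of two equations. A real grader would hand both
    equations to a CAS or SMT solver; this numeric spot-check is only an
    illustrative, scale-sensitive stand-in."""
    for _ in range(trials):
        xs = [random.uniform(-10, 10) for _ in range(n_vars)]
        if abs(f(*xs) - g(*xs)) > tol * max(1.0, abs(f(*xs))):
            return False
    return True

# Student wrote v**2 = u**2 + 2*a*s; reference is (v-u)*(v+u) = 2*a*s.
student = lambda u, v, a, s: v**2 - (u**2 + 2*a*s)
reference = lambda u, v, a, s: (v - u) * (v + u) - 2*a*s
print(equations_equivalent(student, reference, 4))  # True
```

Because a solver-based check decides equivalence symbolically, it avoids both the false negatives (rescaled equations) and the vanishing probability of false positives this sampling approach carries.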
- Research Article
- 10.3390/fi17120578
- Dec 15, 2025
- Future Internet
- Zuocheng Feng + 4 more
Concurrency bugs originate from complex and improper synchronization of shared resources, presenting a significant challenge for detection. Traditional static analysis relies heavily on expert knowledge and frequently fails when code is non-compilable. Conversely, large language models struggle with semantic sparsity, inadequate comprehension of concurrent semantics, and the tendency to hallucinate. To address the limitations of static analysis in capturing complex concurrency semantics and the hallucination risks associated with large language models, this study proposes ConSynergy. This novel framework integrates the structural rigor of static analysis with the semantic reasoning capabilities of large language models. The core design employs a robust task decomposition strategy that decomposes concurrency bug detection into a four-stage pipeline: shared resource identification, concurrency-aware slicing, data-flow reasoning, and formal verification. This approach fundamentally mitigates hallucinations from large language models caused by insufficient program context. First, the framework identifies shared resources and applies a concurrency-aware program slicing technique to precisely extract concurrency-related structural features, thereby alleviating semantic sparsity. Second, to enhance the large language model’s comprehension of concurrent semantics, we design a concurrency data-flow analysis based on Chain-of-Thought prompting. Third, the framework incorporates a Satisfiability Modulo Theories solver to ensure the reliability of detection results, alongside an iterative repair mechanism based on large language models that dramatically reduces dependency on code compilability. Extensive experiments on three mainstream concurrency bug datasets, including DataRaceBench, the concurrency subset of Juliet, and DeepRace, demonstrate that ConSynergy achieves an average precision and recall of 80.0% and 87.1%, respectively. 
ConSynergy outperforms state-of-the-art baselines by 10.9% to 68.2% in average F1 score, demonstrating significant potential for practical application.
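As a rough illustration of the kind of concurrency reasoning such a pipeline automates, the sketch below applies a minimal Eraser-style lockset check. This is not ConSynergy's actual method: it ignores happens-before ordering, and it assumes all listed accesses come from different threads.

```python
def lockset_races(accesses):
    """Minimal lockset discipline check (Eraser-style). Each access is
    (variable, 'r' or 'w', set of locks held); accesses are assumed to come
    from different threads. A variable is race-prone if two accesses, at
    least one of them a write, hold no lock in common."""
    races = set()
    for i, (v1, kind1, locks1) in enumerate(accesses):
        for v2, kind2, locks2 in accesses[i + 1:]:
            if v1 == v2 and 'w' in (kind1, kind2) and not (locks1 & locks2):
                races.add(v1)
    return races

accesses = [("counter", "w", {"m"}),  # write under lock m
            ("counter", "r", set()),  # unsynchronized read -> race
            ("flag", "w", {"m"})]
print(lockset_races(accesses))  # {'counter'}
```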
- Research Article
- 10.65563/jeaai.v1i7.65
- Oct 31, 2025
- INNO-PRESS: Journal of Emerging Applied AI
- Wuyang Zhang + 6 more
Large language models (LLMs) have transformed software development by enabling automated code generation, yet they frequently suffer from systematic errors that limit practical deployment. We identify two critical failure modes: logical hallucination (incorrect control/data-flow reasoning) and schematic hallucination (type mismatches, signature violations, and architectural inconsistencies). These errors stem from the absence of explicit, queryable representations of repository-wide semantics. This paper presents \framework, a novel framework for code generation that addresses these limitations through knowledge graph-guided constraint satisfaction. Our approach proceeds in four integrated stages: (1) constructing heterogeneous repository knowledge graphs that capture both static analysis and dynamic execution traces; (2) learning neural query planners that extract task-relevant context from these graphs; (3) employing satisfiability modulo theories (SMT)-guided beam search to ensure generated code satisfies semantic constraints; and (4) maintaining graph fidelity through continual incremental updates. Our comprehensive evaluation on \dataset, a curated benchmark of 4,250 repository-level tasks across 50 Python projects, demonstrates significant improvements over state-of-the-art baselines: 49.8% Pass@1 (18.1% absolute improvement), 52% reduction in schematic hallucination, and 31% reduction in logical hallucination. Cross-repository generalization analysis shows strong transfer capabilities, with only 4.3% average performance degradation across architectural patterns. These results establish new benchmarks for repository-level code generation while providing theoretical foundations and practical tools for semantically aware automated software development. The explicit semantic representation and constraint satisfaction framework introduced in \framework enables more reliable automated development tools and provides a foundation for future advances in AI-assisted software engineering.
- Research Article
- 10.3390/sym17101771
- Oct 21, 2025
- Symmetry
- Jintian Lu + 5 more
The End–Edge–Cloud (EEC) paradigm hierarchically orchestrates Internet of Things (IoT) devices, edge nodes, and the cloud, optimizing system performance for both delay-sensitive data and compute-intensive processing tasks. Securing IoT data sharing in the EEC-driven paradigm while maintaining data traceability poses critical challenges. In this paper, we propose STDSM, a symmetry-enhanced secure and traceable data sharing model for the EEC-driven data sharing paradigm. STDSM enables IoT data owners to share data securely by attaching symmetric security labels (for secrecy and integrity) to their data; this mechanism symmetrically controls both data outflow and inflow. Furthermore, STDSM can also track data user identity. The security properties of STDSM, including data confidentiality, integrity, and identity traceability, are formally verified in 280 ms using a novel approach that combines High-Level Petri Net modeling with the satisfiability modulo theories library and the Z3 solver. In addition, our experimental results show that STDSM reduces time overhead by up to 15% while providing enhanced traceability.
- Research Article
- 10.1145/3737293
- Oct 13, 2025
- ACM Transactions on Cyber-Physical Systems
- Ziyan An + 4 more
Smart cities operate on computational predictive frameworks that collect, aggregate, and utilize data from large-scale sensor networks. However, these frameworks are prone to multiple sources of data and algorithmic bias, which often lead to unfair prediction results. In this work, we first demonstrate that bias persists at a micro-level, both temporally and spatially, by studying real city data from Chattanooga, TN. To alleviate such bias, we introduce FairGuard, a micro-level temporal logic-based approach for fair smart city policy adjustment and generation in complex temporal-spatial domains. The FairGuard framework consists of two phases. First, we develop a static generator that is able to reduce data bias based on temporal logic conditions by minimizing correlations between selected attributes. Second, to ensure fairness in predictive algorithms, we design a dynamic component to regulate prediction results and generate future fair predictions by harnessing logic rules. To navigate potential conflicts among these single fairness rules, including logical contradictions and data interference, we formulate detection strategies grounded in Satisfiability Modulo Theories (SMT) across both the logic and data levels. Furthermore, acknowledging the limitations of fairness rules focused on a single attribute, we enhance the Static FairGuard to accommodate heterogeneous fairness rules that simultaneously consider multiple protected attributes. In addition, we develop an interactive online visualizer that displays the adjustments made to correct unfair city states, thereby improving fairness alongside the prediction outcomes from the dynamic component. Evaluations show that the logic-enabled Static FairGuard can effectively reduce biased correlations, while Dynamic FairGuard can guarantee fairness for protected groups at runtime with minimal impact on overall performance.
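SMT-grounded conflict detection among fairness rules can be illustrated on a toy fragment. Assuming, purely for illustration, that each rule bounds one protected attribute to a numeric interval, a logical contradiction is an attribute whose intersected bounds are empty; a real system would hand arbitrary rule combinations to an SMT solver rather than intersect intervals.

```python
def rules_conflict(rules):
    """Detect contradictions among single-attribute threshold rules.
    Each rule is (attribute, lower, upper). We intersect the intervals per
    attribute and report attributes whose intersection is empty. This is an
    illustrative stand-in for a general SMT unsatisfiability check."""
    bounds = {}
    for attr, lo, hi in rules:
        cur_lo, cur_hi = bounds.get(attr, (float('-inf'), float('inf')))
        bounds[attr] = (max(cur_lo, lo), min(cur_hi, hi))
    return [a for a, (lo, hi) in bounds.items() if lo > hi]

# Hypothetical rules: two contradictory bounds on 'disparity', one on 'coverage'.
rules = [("disparity", -1.0, 0.1), ("disparity", 0.3, 1.0), ("coverage", 0.5, 1.0)]
print(rules_conflict(rules))  # ['disparity']
```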
- Research Article
- 10.1145/3763093
- Oct 9, 2025
- Proceedings of the ACM on Programming Languages
- Maolin Sun + 3 more
Satisfiability Modulo Theories (SMT) solvers are widely used for program analysis and other applications that require automated reasoning. Rewrite systems, as integral components of SMT solvers, are responsible for simplifying and transforming formulas to optimize the solving process. The effectiveness of an SMT solver heavily depends on the robustness of its rewrite system, making its validation crucial. Despite ongoing advancements in SMT solver testing, rewrite system validation remains largely unexplored. Our empirical analysis reveals that developers invest significant effort in ensuring the correctness and reliability of rewrite systems. However, existing testing techniques do not adequately address this aspect. In this paper, we introduce Aries, a novel technique designed to validate SMT solver rewrite systems. First, Aries employs mimetic mutation, a targeted strategy that actively reshapes input formulas to provoke and diversify rewrite opportunities. By aligning mutated terms with known rewrite patterns, Aries can conduct a thorough exploration of the rewrite space in the following phase. Second, Aries utilizes deductive rewriting, leveraging generative equality saturation to effectively explore the rewrite space and produce semantically equivalent mutants for the purpose of validation. We implemented Aries as a practical validation tool and evaluated it on leading SMT solvers, including Z3 and cvc5. Our experiments demonstrate that Aries effectively identifies bugs, with 27 new issues detected, of which 22 have been confirmed or fixed by developers. Most of these issues involve the rewrite systems, highlighting Aries's strength in exploring the rewrite space.
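The oracle behind this kind of validation is semantic equivalence: a formula and its rewritten mutant must agree on every input, so any disagreement in solver verdicts signals a rewrite bug. The sketch below brute-forces that check for a toy Boolean rewrite (the absorption law); actual tools compare solver results on full SMT formulas instead.

```python
from itertools import product

def equivalent(f, g, n):
    """Differential oracle for rewrite validation: the original formula and
    its rewritten mutant must agree on every assignment. Real tools compare
    SMT solver verdicts; this sketch brute-forces a Boolean toy over n vars."""
    return all(f(*bits) == g(*bits) for bits in product([False, True], repeat=n))

orig = lambda a, b: a and (a or b)   # term before rewriting
rewritten = lambda a, b: a           # absorption law: a & (a | b) == a
print(equivalent(orig, rewritten, 2))  # True
```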
- Research Article
- 10.1145/3763120
- Oct 9, 2025
- Proceedings of the ACM on Programming Languages
- Ivana Bocevska + 4 more
We propose a compositional approach to combine and scale automated reasoning in the static analysis of decentralized system security, such as blockchains. Our focus lies in the game-theoretic security analysis of such systems, allowing us to examine economic incentives behind user actions. In this context, it is particularly important to certify that deviating from the intended, honest behavior of the decentralized protocol is not beneficial: as long as users follow the protocol, they cannot be financially harmed, regardless of how others behave. Such an economic analysis of blockchain protocols can be encoded as an automated reasoning problem in the first-order theory of real arithmetic, reducing game-theoretic reasoning to satisfiability modulo theories (SMT). However, analyzing an entire game-theoretic model (called a game) as a single SMT instance does not scale to protocols with millions of interactions. We address this challenge and propose a divide-and-conquer security analysis based on compositional reasoning over games. Our compositional analysis is incremental: we divide games into subgames such that changes to one subgame do not necessitate re-analyzing the entire game, but only the ancestor nodes. Our approach is sound, complete, and effective: combining the security properties of subgames yields security of the entire game. Experimental results show that compositional reasoning discovers intra-game properties and errors while scaling to games with millions of nodes, enabling security analysis of large protocols.
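The compositional principle can be sketched on a toy game tree: a node is secure when its local property holds and all of its subgames are secure, which is why an edit to one subgame only requires re-checking its ancestors rather than the whole game. The tuple encoding below is an illustrative assumption, not the paper's actual game representation.

```python
def secure(node):
    """Compositional security check over a game tree. A node is a pair
    (local_ok, children): it is secure iff its own property holds and every
    subgame is secure. Toy sketch of the divide-and-conquer idea; real
    subgame properties would be discharged by an SMT solver."""
    local_ok, children = node
    return local_ok and all(secure(c) for c in children)

# A small game: root with two subgames, one of which has two subgames.
game = (True, [(True, []), (True, [(True, []), (True, [])])])
print(secure(game))  # True
```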
- Research Article
- 10.1145/3763163
- Oct 9, 2025
- Proceedings of the ACM on Programming Languages
- Yihe Li + 2 more
Large Language Models (LLMs) have emerged as a promising alternative to traditional static program analysis methods, such as symbolic execution, offering the ability to reason over code directly without relying on theorem provers or SMT solvers. However, LLMs are also inherently approximate by nature and therefore face significant challenges regarding the accuracy and scale of analysis in real-world applications. Such issues often necessitate the use of larger LLMs with higher token limits, but this requires enterprise-grade hardware (GPUs) and thus limits accessibility for many users. In this paper, we propose LLM-based symbolic execution, a novel approach that enhances LLM inference via a path-based decomposition of program analysis tasks into smaller (more tractable) subtasks. The core idea is to generalize path constraints using a generic code-based representation that the LLM can reason over directly, without translation into another (less expressive) formal language. We implement our approach in AutoBug, an LLM-based symbolic execution engine that is lightweight and language-agnostic, making it a practical tool for analyzing code that is challenging for traditional approaches. We show that AutoBug can improve both the accuracy and scale of LLM-based program analysis, especially for smaller LLMs that can run on consumer-grade hardware.
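Path-based decomposition can be illustrated by enumerating the path conditions of a program whose branches are treated as independent: each path is a conjunction of taken or negated branch conditions, and each becomes a separate, more tractable subtask. The string encoding of conditions below is purely illustrative, not AutoBug's representation.

```python
def paths(branches):
    """Enumerate path conditions of a straight-line program whose branch
    conditions are treated as independent. Each path is the list of taken
    or negated conditions; in a path-based analysis each path would be
    handed to the reasoner (here, an LLM) as its own subtask."""
    if not branches:
        return [[]]  # the single empty path
    head, rest = branches[0], branches[1:]
    tails = paths(rest)
    return ([[head] + t for t in tails] +
            [[f"not({head})"] + t for t in tails])

print(len(paths(["x > 0", "y < 5"])))  # 4
```

Two independent branches yield 2^2 = 4 paths, which is exactly the exponential growth that makes decomposing, rather than analyzing the whole program at once, attractive for smaller models.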
- Research Article
- 10.1145/3763167
- Oct 9, 2025
- Proceedings of the ACM on Programming Languages
- Henrik Böving + 10 more
Bit-blasting SMT solvers enable efficient automatic reasoning about bitvectors, which are fundamental for the verification of compiler backends, cryptographic algorithms, hardware designs, and other software and hardware tasks. Despite the clear demand for efficient bitvector reasoning infrastructure and the impressive advancements in state-of-the-art bit-blasting SMT solvers such as Bitwuzla, effective bitvector reasoning within interactive theorem provers (ITPs) remains a challenge, hindering their use for mechanized proofs. Incomplete bitvector libraries, unavailable or only partially integrated decision procedures, complex and hard-to-bit-blast operations, and limited integration with the host language prevent the wide adoption of bitvector reasoning in proving contexts. We introduce bv_decide: the first end-to-end verified bit-blaster designed for interactive bitvector reasoning in a dependently typed ITP. Our verified bit-blaster is scalable, comes with a complete end-to-end proof (trusting only the Lean compiler and kernel), and is available as a proof tactic that allows interactive reasoning right from within a programming language, in our case Lean. We use Lean's Functional But In-Place (FBIP) paradigm to efficiently encode our core data structures (e.g., AIGs), demonstrating that fast execution of an SMT solver need not come at the expense of rigorous formalization. We enable dependable interactive verification of user-written code by basing Lean's C-style standard datatypes UInt/SInt on our bitvector type, adding a lowering from enums and structs to bitvectors to enable transparent bit-blasting support for composed types, and offering an interactive tactic that either solves a goal or provides a counterexample.
Moreover, we present the design of Lean’s canonical bitvector library, which supports all operations (with reasoning principles) for the SMT-LIB 2.7 standard (including overflow modeling), is fast-to-execute, and offers a comprehensive API and automation for bit-width-independent reasoning. We thoroughly evaluate our bit-blaster on a comprehensive set of benchmarks, including the full SMT-LIB dataset, where bv_decide solves more theorems than the state-of-the-art in verified bit-blasting, CoqQFBV. We also verify over 7000 SMT statements extracted from LLVM, providing the largest mechanized verification of LLVM rewrites to date, to our knowledge. By making bit-blasting bitvector reasoning a polished, well-supported, and interactive feature of modern ITPs, we enable effective, dependable white-box reasoning for bitvector-level verification.
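The core lowering a bit-blaster performs can be sketched for addition: a bitvector add becomes a ripple-carry chain of per-bit XOR/AND/OR operations, which is essentially what ends up encoded in an AIG. The little-endian Boolean-list model below is an illustrative simplification, not bv_decide's implementation.

```python
def bitblast_add(xs, ys):
    """Ripple-carry bit-blasting of bitvector addition into per-bit Boolean
    operations, mirroring how a bit-blaster lowers a BitVec add into gates.
    Bits are little-endian booleans; the final carry is dropped, giving
    addition modulo 2^width."""
    out, carry = [], False
    for x, y in zip(xs, ys):
        out.append(x ^ y ^ carry)                      # sum bit
        carry = (x and y) or (carry and (x ^ y))       # carry bit
    return out

def to_bits(v, w):
    return [bool((v >> i) & 1) for i in range(w)]

def from_bits(bs):
    return sum(1 << i for i, b in enumerate(bs) if b)

print(from_bits(bitblast_add(to_bits(5, 8), to_bits(7, 8))))  # 12
```

Replacing the concrete booleans with symbolic AIG nodes turns the same recurrence into the circuit a SAT solver then checks, which is the sense in which bit-blasting reduces bitvector reasoning to propositional reasoning.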
- Research Article
- 10.1145/3759917
- Sep 26, 2025
- ACM Transactions on Embedded Computing Systems
- Beatrice Melani + 2 more
Signal Temporal Logic (STL) is a widely recognized formal specification language to express rigorous temporal requirements on mixed analog signals produced by cyber-physical systems (CPS). A relevant problem in CPS design is how to efficiently and automatically check whether a set of STL requirements is logically consistent. This problem reduces to solving the STL satisfiability problem, which is decidable when we assume that our system operates in discrete time steps dictated by an embedded system’s clock. This article introduces a novel tree-shaped, one-pass tableau method for satisfiability checking of discrete-time STL with bounded temporal operators. Originally designed to prove the consistency of a given set of STL requirements, this method has a wide range of applications beyond consistency checking. These include synthesizing example signals that satisfy the given requirements, as well as verifying or refuting the equivalence and implications of STL formulas. Our tableau exploits redundancy arising from large time intervals in STL formulas to speed up satisfiability checking, and can also be employed to check Mission-Time Linear Temporal Logic (MLTL) satisfiability. We compare our tableau with Satisfiability Modulo Theories (SMT) and First-Order Logic encodings from the literature on a benchmark suite, partly collected from the literature, and partly provided by an industrial partner. Our experiments show that, in many cases, our tableau outperforms state-of-the-art encodings.
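The discrete-time semantics of the bounded temporal operators can be sketched directly. Below, G[a,b] ("always") and F[a,b] ("eventually") are evaluated pointwise over a Boolean signal; out-of-range steps are simply skipped, which is one of several possible boundary conventions and is assumed here only for illustration.

```python
def always(sig, a, b):
    """Discrete-time bounded 'always' G[a,b]: holds at step t iff sig holds
    at every step in [t+a, t+b] that lies inside the signal (out-of-range
    steps are skipped; an assumed boundary convention, sketch only)."""
    n = len(sig)
    return [all(sig[t + k] for k in range(a, b + 1) if t + k < n)
            for t in range(n)]

def eventually(sig, a, b):
    """Discrete-time bounded 'eventually' F[a,b], same conventions."""
    n = len(sig)
    return [any(sig[t + k] for k in range(a, b + 1) if t + k < n)
            for t in range(n)]

sig = [True, False, True, True, False]
print(always(sig, 0, 1))      # [False, False, True, False, False]
print(eventually(sig, 0, 2))  # [True, True, True, True, False]
```

A satisfiability check, tableau-based or SMT-encoded, asks the inverse question: whether some signal exists on which a given formula evaluates to true at step 0.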
- Research Article
- 10.1145/3760528
- Sep 26, 2025
- ACM Transactions on Embedded Computing Systems
- Debarpita Banerjee + 3 more
Real-time scheduling of multiple control tasks in a weakly hard setting is an emerging research direction, as it offers a more flexible and feasible environment for task scheduling. This is especially pertinent for resource-constrained embedded applications where tasks are allowed to miss a few deadlines for prudent sharing of computational resources. However, a control task missing its deadline could render the system unsafe or unstable. A significant amount of research effort has been reported in the literature addressing the schedulability of control tasks while preserving stability or safety. However, all of it focuses on either a stable schedule or a safe schedule, but not both the safety and stability aspects together. In this work, we ensure both control stability and control safety to generate a safe and stable schedule for a weakly hard task system. In particular, we gradually endorse stability, safety, and schedulability, where we first synthesize a weakly hard constraint that preserves the desired stability of each control task. Next, we correlate stability with control safety and establish mathematical results that guarantee control safety over an unbounded time horizon, unlike the existing methods. Finally, by leveraging Satisfiability Modulo Theories (SMT), we synthesize the schedule that ensures control stability and safety while minimizing the worst-case response time of all the tasks, in a time-efficient way. To our knowledge, this is the first work to address stability, safety, and schedulability together for weakly hard control task systems. We validate our method through extensive experiments using standard automotive benchmarks. In addition, we demonstrate the efficiency of the proposed method in comparison with some of the state-of-the-art techniques, as well as highlight its scalability, thereby establishing its applicability in real-world scenarios.
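A weakly hard constraint is commonly written (m, K): in every window of K consecutive jobs, at most m deadlines may be missed. The checker below is a minimal sketch of that condition applied to a recorded miss pattern (1 = miss); it illustrates the constraint itself, not the paper's synthesis procedure.

```python
def satisfies_weakly_hard(misses, m, K):
    """Check an (m, K) weakly hard constraint: in every window of K
    consecutive jobs, at most m deadlines are missed. 'misses' is a job
    history with 1 marking a deadline miss. Illustrative checker only;
    synthesizing the constraint per task is the harder problem."""
    return all(sum(misses[i:i + K]) <= m
               for i in range(len(misses) - K + 1))

pattern = [0, 1, 0, 0, 0, 1, 0, 0]   # at most one miss in any 4 jobs
print(satisfies_weakly_hard(pattern, m=1, K=4))  # True
```

An SMT encoding of schedule synthesis would assert this window condition symbolically over the (unknown) miss pattern induced by the schedule, alongside the response-time objective.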
- Research Article
- 10.1080/00207543.2025.2561190
- Sep 17, 2025
- International Journal of Production Research
- Douha Macherki + 4 more
Industrial companies increasingly face frequent and complex changes requiring rapid adaptation. These changes may be internal, such as equipment breakdowns, or external, such as the arrival of urgent orders or delayed deliveries. To address such variations, companies can implement actions ranging from simple adjustments to process logic (e.g., equipment re-parameterisation or rescheduling) to more substantial modifications, such as altering the composition or layout of production system equipment. This research focuses on the latter case, specifically on system reconfiguration. With the growing complexity of production systems and the critical nature of related challenges (e.g., safety, financial, and technological concerns), the reconfiguration process demands effective decision-support tools. This underscores the need to integrate self-reconfiguration capabilities into production systems. In this paper, we propose a self-reconfiguration algorithm composed of three phases: (1) detection of the need for reconfiguration, (2) diagnosis of the need and search for alternative solutions, and (3) definition of a new layout. To automate the resolution of the reconfiguration problem, we formulate it as a constraint satisfaction problem, modelled using Satisfiability Modulo Theories (SMT) logic equations and solved with an SMT solver. A case study is provided to illustrate the feasibility and potential of the proposed self-reconfiguration approach and algorithm.
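The constraint-satisfaction flavour of the reconfiguration problem can be shown on a toy instance: assign each machine a distinct slot while avoiding forbidden placements. An SMT solver would handle the real encoding; the brute-force search and the machine/slot names below are assumptions made purely for illustration.

```python
from itertools import product

def solve_layout(slots, machines, forbidden):
    """Brute-force stand-in for an SMT-encoded layout CSP: assign each
    machine one of `slots` positions, all distinct, avoiding forbidden
    (machine, slot) pairs. Returns a feasible assignment or None."""
    for assign in product(range(slots), repeat=len(machines)):
        if (len(set(assign)) == len(machines) and
                all((m, s) not in forbidden
                    for m, s in zip(machines, assign))):
            return dict(zip(machines, assign))
    return None

# Hypothetical instance: the press cannot occupy slot 0.
print(solve_layout(3, ["press", "drill"], {("press", 0)}))  # {'press': 1, 'drill': 0}
```

An SMT formulation replaces the exhaustive loop with integer variables, a distinctness constraint, and negated placement literals, which scales far beyond what enumeration can handle.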
- Research Article
- 10.1007/s10626-025-00420-x
- Aug 23, 2025
- Discrete Event Dynamic Systems
- Lulu He + 2 more
Abstract In this article, we focus on improving the efficiency of diagnosability checking for real-time systems modeled as timed automata. Inspired by a recently introduced extension of the classic CEGAR (CounterExample-Guided Abstraction Refinement) algorithm, namely the RECAR (Recursive Explore and Check Abstraction Refinement) algorithm, we propose new RECAR-like algorithms that combine over-approximation and under-approximation techniques. Within CEGAR, we use over-approximation and under-approximation to terminate the refinement loop quickly when the original formula is satisfiable or unsatisfiable, respectively, and then show the soundness of our RECAR-like approach applied to an arbitrary formula. We then define several types of parameterized over- and under-approximations, along with refinement strategies, for the diagnosability problem. Finally, we evaluate the effectiveness of our method and its implementation with the Z3 SMT solver on different benchmarks by comparing it to the direct method without approximation shortcuts.
- Research Article
- 10.1007/s00236-025-00495-x
- Aug 4, 2025
- Acta Informatica
- Zhengyang John Lu + 6 more
Abstract Modern SMT solvers, such as Z3, allow solver users to customize strategies to improve performance on their specific use cases. However, handcrafting an optimized strategy for a specific class of SMT instances remains a complex and demanding task for both solver developers and users alike. In this paper, we address the problem of automated SMT strategy synthesis via a novel method based on Monte-Carlo Tree Search (MCTS). We formulate strategy synthesis as a sequential decision-making process, where the search tree corresponds to the strategy space. Subsequently, we employ MCTS to navigate this vast search space. Compared to conventional MCTS, we introduce two heuristics, layered and staged search, that enable our method to identify effective strategies at lower cost. We implement our method, dubbed Z3alpha, upon the Z3 SMT solver. Our experiments demonstrate that Z3alpha outperforms the default Z3 solver and the state-of-the-art synthesis tool FastSMT on the majority of the evaluated benchmark sets, while producing more interpretable strategies than FastSMT. At SMT-COMP'24, among the 16 participating logics, Z3alpha improved upon the default Z3 in 12 cases and helped solve hundreds more instances in QF_NIA and QF_NRA, winning their respective divisions.
- Research Article
- 10.1609/socs.v18i1.36014
- Jul 20, 2025
- Proceedings of the International Symposium on Combinatorial Search
- Pavel Surynek + 3 more
We address the problem of object arrangement and scheduling for sequential 3D printing. Unlike the standard 3D printing, where all objects are printed slice by slice, in sequential 3D printing, objects are completed one after another. In the sequential case, it is necessary to ensure that the moving parts of the printer do not collide with previously printed objects. We propose to express the problem of sequential printing as a linear arithmetic formula, which is then solved using a solver for satisfiability modulo theories (SMT) combined with counterexample guided abstraction refinement (CEGAR).
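The ordering constraints at the heart of such an encoding can be illustrated without SMT: each potential collision induces a precedence "object i must be completed before object j", and a valid print order satisfies all of them. The brute-force search below is an illustrative stand-in for the paper's SMT+CEGAR loop.

```python
from itertools import permutations

def find_print_order(n, must_precede):
    """Brute-force stand-in for an SMT+CEGAR search over print orders:
    return an ordering of n objects in which every precedence constraint
    (i, j), meaning 'print i before j', holds; None if unsatisfiable."""
    for order in permutations(range(n)):
        pos = {obj: k for k, obj in enumerate(order)}
        if all(pos[i] < pos[j] for i, j in must_precede):
            return list(order)
    return None

# Hypothetical clearance constraints: tall object 2 must come after 0 and 1.
print(find_print_order(3, [(0, 2), (1, 2)]))  # [0, 1, 2]
```

In the SMT formulation, object positions become integer variables and each precedence a linear inequality, while CEGAR adds collision constraints lazily only when a candidate order is found to be physically infeasible.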
- Research Article
- 10.1613/jair.1.16870
- Jul 12, 2025
- Journal of Artificial Intelligence Research
- Gabriele Masina + 2 more
Modern SAT and SMT solvers are designed to handle problems expressed in Conjunctive Normal Form (CNF), so that non-CNF problems must be CNF-ized upfront, typically by using variants of either the Tseitin or the Plaisted and Greenbaum transformation. When passing from plain solving to enumeration, however, the capability of producing partial satisfying assignments that are as small as possible becomes crucial, which raises the question of whether such CNF encodings are also effective for enumeration. In this paper, we investigate both theoretically and empirically the effectiveness of CNF conversions for SAT and SMT enumeration. On the negative side, we show that: (i) the Tseitin transformation prevents the solver from producing short partial assignments, thus seriously affecting the effectiveness of enumeration; (ii) the Plaisted and Greenbaum transformation overcomes this problem only in part. On the positive side, we prove theoretically and show empirically that combining the Plaisted and Greenbaum transformation with NNF preprocessing upfront, which is typically not used in solving, can fully overcome the problem and can drastically reduce both the number of partial assignments and the execution time.
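A minimal fragment of the Tseitin transformation shows where the enumeration overhead comes from: encoding t <-> (a AND b) introduces a fresh variable t and three clauses, and every model must then also fix t, which blocks short partial assignments. Variables are DIMACS-style signed integers; this fragment handles only conjunction, as a sketch.

```python
def tseitin_and(a, b, fresh):
    """Tseitin-encode t <-> (a AND b) for literals a, b (DIMACS-style
    signed ints) using fresh variable `fresh`. Returns (t, clauses):
      (-t v a), (-t v b)   : t implies a and b
      (-a v -b v t)        : a and b imply t
    Sketch of one gate of the full transformation."""
    t = fresh
    clauses = [[-t, a], [-t, b], [-a, -b, t]]
    return t, clauses

t, cls = tseitin_and(1, 2, 3)
print(t, cls)  # 3 [[-3, 1], [-3, 2], [-1, -2, 3]]
```

The Plaisted and Greenbaum variant drops the last clause when t occurs only positively, keeping one implication direction; this polarity-awareness is what partially restores short partial assignments during enumeration.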
- Research Article
- 10.15276/hait.08.2025.11
- Jun 27, 2025
- Herald of Advanced Information Technology
- Mykola A Hodovychenko + 1 more
Automated refactoring plays a crucial role in the maintenance and evolution of object-oriented software systems, where improving internal code structure directly impacts maintainability, scalability, and technical debt reduction. This paper presents an extended review of current approaches to automated refactoring, emphasizing methodological foundations, automation levels, the application of artificial intelligence, and practical integration into CI/CD workflows. We examine rule-based, graph-based, machine learning-based (CNNs, GNNs, LLMs), and history-aware (MSR) techniques, along with hybrid systems incorporating human-in-the-loop feedback. The taxonomy of refactoring types is aligned with established terminology, particularly Fowler's classification, distinguishing structural, semantic (architectural), and behavioral transformations, all grounded in the principle of behavior preservation. Formal models are introduced to describe refactorings as graph transformations governed by preconditions and postconditions that ensure semantic equivalence between program versions. The paper provides a concrete example of a transformation generated by the DeepSmells tool, demonstrating the "before/after" change and explaining the rationale behind the AI-driven recommendation. The study also addresses the challenges of explainability and semantic drift, proposing mitigation strategies such as SHAP-based analysis, attention visualization in transformer architectures, integration with formal verification tools (e.g., SMT solvers, symbolic execution), and explainable AI recommendations. Special attention is given to the limitations of automated refactoring in dynamically typed languages (e.g., Python, JavaScript), where the lack of static type information reduces the effectiveness of traditional techniques. Generalization to multilingual systems is supported through models like CodeBERT, CodeT5, and PLBART, which operate over token-level, syntactic, and graph-based representations to enable language-agnostic refactoring. The paper also discusses real-world integration of automated refactoring into CI/CD environments, including the use of bots, refactoring-aware quality gates, and scheduled transformations applied at commit or merge time. Practical examples illustrate the verification of behavior preservation through regression testing or formal methods. This work targets software engineers, researchers, and tool developers engaged in intelligent software maintenance and automated quality assurance. By offering a consolidated classification, tool selection criteria, and practical scenarios, the paper delivers applied value for designing custom refactoring solutions or adopting existing technologies across diverse project constraints, ranging from safety-critical systems to large-scale continuous delivery pipelines.
- Research Article
- 10.3390/electronics14132575
- Jun 26, 2025
- Electronics
- Changsheng Chen + 4 more
Time-triggered Ethernet combines time-triggered and event-triggered communication and is suitable for fields with high real-time requirements. To address the problem that traditional scheduling algorithms are not effective at scheduling event-triggered messages, we propose a message scheduling algorithm based on multi-agent reinforcement learning (Multi-Agent Deep Deterministic Policy Gradient, MADDPG) and a hybrid algorithm combining an SMT (Satisfiability Modulo Theories) solver with MADDPG. This method aims to optimize the scheduling of event-triggered messages while maintaining the uniformity of time-triggered message scheduling, providing more time slots for event-triggered messages and reducing their waiting time and end-to-end delay. In experiments using the designed scheduling software, compared with the SMT-based algorithm and the traditional DQN (Deep Q-Network) algorithm, the new method shows better load balance and lower message jitter, and OPNET simulation verifies that it can effectively reduce the delay of event-triggered messages.