Articles published on Control flow
4104 Search results
Sort by Recency
- Research Article
- 10.3390/math14030452
- Jan 28, 2026
- Mathematics
- Yajun Gao + 2 more
RTL-level fuzz testing is critical for identifying vulnerabilities in hardware designs. However, existing hardware fuzz testing methods suffer from slow coverage improvement and blind exploration due to the lack of fine-grained control flow guidance. To address this gap, this article proposes the CFGuide-Fuzz framework, which includes control node extraction and compression techniques based on FIRRTL instrumentation and a hardware fuzz engine driven by feature feedback. This research introduces a fine-grained control flow feedback mechanism for hardware fuzz testing, enabling a pivotal shift from blind exploration to targeted testing. Experimental results demonstrate that compared to the DiFuzzRTL baseline, the proposed CFGuide-Fuzz framework enhances register state coverage by 9.4% under identical iteration counts and testing environments. Additionally, it doubles the number of effective inputs that trigger mismatched differential test results. These findings validate the framework’s dual advantages: deeper hardware control flow exploration and higher semantic vulnerability triggering efficiency.
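The feedback loop the abstract describes — mutate inputs, observe which control nodes they reach, keep only inputs that extend coverage — can be sketched in miniature. Everything below (the `toy_dut` branch map, the single-bit-flip mutator) is an illustrative stand-in, not CFGuide-Fuzz, FIRRTL, or a real RTL simulator:

```python
import random

def toy_dut(data: bytes):
    """Hypothetical device-under-test: reports which control nodes fire."""
    nodes = {"entry"}
    if data[0] & 0x01:
        nodes.add("branch_a")
    if data[0] & 0x02:
        nodes.add("branch_b")
    if data[1] == 0xFF:
        nodes.add("rare_state")
    return nodes

def coverage_guided_fuzz(execute, seed, iterations=300, rng=None):
    """Keep a mutated input only if it reaches a control node that no
    earlier input reached -- the shift from blind to targeted search."""
    rng = rng or random.Random(0)
    corpus, seen = [seed], set(execute(seed))
    for _ in range(iterations):
        child = bytearray(rng.choice(corpus))
        child[rng.randrange(len(child))] ^= 1 << rng.randrange(8)  # bit flip
        child = bytes(child)
        cov = set(execute(child))
        if cov - seen:            # new control node reached -> interesting input
            seen |= cov
            corpus.append(child)
    return corpus, seen
```

A hardware fuzzer replaces `toy_dut` with an instrumented RTL simulation and uses much richer mutation and feedback, but the loop shape is the same.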
- Research Article
- 10.62970/ijirct.v12.i1.2601020
- Jan 28, 2026
- International Journal of Innovative Research and Creative Technology
- Vignesh Alagappan
Residential heating, ventilation, air conditioning (HVAC), and water heating systems account for approximately 51% of total household energy consumption in the United States, representing over 5.5 quadrillion BTUs annually [1]. Despite widespread adoption of connected thermostats and smart water heaters, contemporary residential energy management platforms remain fundamentally constrained by device-centric architectures that lack semantic interoperability, suffer from sparse telemetry collection, and operate without predictive optimization capabilities. These systems function as isolated control points rather than as integrated climate ecosystems capable of responding to building thermal dynamics, occupant behavior patterns, distributed energy resource availability, and grid conditions. This paper introduces a comprehensive reference architecture for Climate Intelligence Systems (CIS) that transcends current limitations through four foundational pillars: cryptographically anchored device identity frameworks, metadata-driven equipment modeling hierarchies, cloud-hosted digital twin simulation environments, and predictive machine learning optimization pipelines [2], [3]. The proposed architecture enables anticipatory comfort management that pre-conditions spaces based on forecast weather patterns and predicted occupancy, orchestrates distributed energy resources including rooftop photovoltaic arrays and battery storage systems, and provides proactive grid-responsive demand flexibility without compromising occupant comfort or safety. We present a complete four-layer architectural model encompassing device/field infrastructure, connectivity/identity frameworks, cloud intelligence platforms, and human-facing experience layers. The architecture is augmented with detailed system interaction diagrams, digital twin synchronization pipelines, and demand response control flows that demonstrate practical implementation patterns. 
Preliminary deployment insights indicate 18-24% reductions in compressor short-cycling events, 12-15% improvements in thermal prediction accuracy under varying weather conditions, and 35-42% increases in reliable demand response participation compared to rule-based approaches. The resulting framework provides a coherent, cryptographically secure, and operationally scalable climate management ecosystem that addresses fundamental architectural limitations in today's smart home platforms while establishing a foundation for next-generation residential cyber-physical systems capable of supporting both individual household optimization and grid-scale energy orchestration.
- Research Article
- 10.1145/3785670
- Jan 19, 2026
- ACM Transactions on Reconfigurable Technology and Systems
- Erwei Wang + 21 more
General-purpose compilers abstract away parallelism, locality, and synchronization, limiting their effectiveness on modern spatial architectures. As modern computing architectures increasingly rely on fine-grained control over data movement, execution order, and compute placement for performance, compiler infrastructure must provide explicit mechanisms for orchestrating compute and data to fully exploit such architectures. We introduce MLIR-AIR, a novel, open-source compiler stack built on MLIR that bridges the semantic gap between high-level workloads and fine-grained spatial architectures such as AMD’s NPUs. MLIR-AIR defines the AIR dialect, which provides structured representations for asynchronous and hierarchical operations across compute and memory resources. AIR primitives allow the compiler to orchestrate spatial scheduling, distribute computation across hardware regions, and overlap communication with computation without relying on ad hoc runtime coordination or manual scheduling. We demonstrate MLIR-AIR’s capabilities through two case studies: matrix multiplication and the multi-head attention block from the LLaMA 2 model. For matrix multiplication, MLIR-AIR achieves up to 78.7% compute efficiency and generates implementations with performance almost identical to state-of-the-art, hand-optimized matrix multiplication written using the lower-level, close-to-metal MLIR-AIE framework. For multi-head attention, we demonstrate that the AIR interface supports fused implementations using approximately 150 lines of code, enabling tractable expression of complex workloads with efficient mapping to spatial hardware. MLIR-AIR transforms high-level structured control flow into spatial programs that efficiently utilize the compute fabric and memory hierarchy of an NPU, leveraging asynchronous execution, tiling, and communication overlap through compiler-managed scheduling.
- Research Article
- 10.1145/3785005
- Jan 19, 2026
- ACM Transactions on Autonomous and Adaptive Systems
- Runan Wang + 4 more
Serverless function compositions subject to unpredictable faults are challenging to evaluate for root-cause analysis. Even though distributed tracing provides observations at multiple levels of granularity for troubleshooting, excessive code instrumentation increases the tracing overheads in terms of both computation and storage. Therefore, developers face the challenge of where and how to instrument serverless functions to maximize the likelihood of locating faults based on tracing data while minimizing tracing overhead and costs. In this paper, we propose a methodology to instrument an application with code-level tracing to infer the location of faults, taking into account constraints in terms of the maximum cost of the instrumentation and testing. We encode the tracing probe placement based on the control flow graph of the application and devise heuristics-based tracing data collection strategies to relate possible probe placements with their ability to locate a fault. Then we train novelty detection models to identify the internal anomalies and present an enhanced global search algorithm that automatically computes a probe placement with optimal fault localization ability versus cost. Experimental results show high performance in locating single and multiple faults with over 90% recall score for up to 15% latency anomalies, with minimal instrumentation overhead.
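The budgeted probe-placement problem has the flavor of weighted set cover. A greedy sketch conveys the trade-off (this is an illustrative heuristic with made-up probe data, not the paper's enhanced global search algorithm):

```python
def place_probes(candidates, budget):
    """Greedy weighted set cover over tracing probes: each candidate is
    (name, cost, set_of_faults_it_can_localize), cost > 0.  Pick probes
    that add the most newly localizable faults per unit cost until the
    instrumentation budget runs out or no probe adds anything."""
    chosen, covered, spent = [], set(), 0
    remaining = list(candidates)
    while remaining:
        best = max(remaining, key=lambda p: len(p[2] - covered) / p[1])
        name, cost, faults = best
        if spent + cost > budget or not (faults - covered):
            break
        chosen.append(name)
        covered |= faults
        spent += cost
        remaining.remove(best)
    return chosen, covered, spent
```

Relating candidate placements to the faults they can localize is the hard part in practice; the paper does that with trained novelty-detection models rather than the known sets assumed here.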
- Research Article
- 10.29019/58qfee29
- Jan 6, 2026
- Economía y Negocios
- Manuel Daniel Halanocca Masco + 3 more
This study examines the impact of digital payment platforms on the cash flow of small and medium-sized enterprises (SMEs) in Juliaca, Peru. Using a quantitative, cross-sectional correlational design, the relationship between the adoption of tools like Yape, Plin, and Mercado Pago and three key dimensions was evaluated: income variation, payment control, and cash flow management efficiency. A random sample of 50 SMEs was analyzed. Results revealed significant positive correlations between digital platform usage and improvements in cash flow (r = 0.796; p < 0.01), income variation (r = 0.776; p < 0.01), payment control (r = 0.728; p < 0.01), and management efficiency (r = 0.688; p < 0.01). The findings suggest that these tools enhance financial planning, reduce transaction errors, and increase liquidity, supporting financial stability in complex economic environments.
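The reported coefficients are Pearson product-moment correlations; for reference, the statistic is computed as below (the sample data in the test is synthetic, not the study's 50-SME sample):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length
    samples: covariance of x and y divided by the product of their
    standard deviations, ranging from -1 to 1."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return sxy / (sx * sy)
```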
- Research Article
- 10.1016/j.jmat.2026.101177
- Jan 1, 2026
- Journal of Materiomics
- Bing He + 7 more
Multirole collaborative and co-constructive materials design ecosystem enabled by using control and data flows decoupled workflows
- Research Article
- 10.51244/ijrsi.2025.12120100
- Jan 1, 2026
- International Journal of Research and Scientific Innovation
- Ajay Singh Naruka + 1 more
This paper presents the design, modeling, and optimization of an Artificial Intelligence (AI)-based solar-powered electric vehicle (SPEV). While solar-electric propulsion promises clean and sustainable mobility, practical range and reliability are constrained by intermittent irradiance, battery degradation, and dynamic driving patterns. We integrate machine learning and control intelligence across three pillars: (1) energy harvesting and power conversion, (2) battery health-aware energy management, and (3) driver/route assistance. A block-level architecture is proposed along with an AI control flow for multi-objective optimization—maximizing range, preserving State of Health (SOH), and minimizing lifecycle cost. We develop a simulation framework and demonstrate improvements in energy efficiency, range, and charge/discharge smoothness compared with a rule-based baseline. Results indicate up to 12–22% efficiency gains across typical urban duty cycles. We conclude with deployment considerations, limitations, and future research directions.
- Research Article
- 10.1007/s10883-025-09750-3
- Dec 22, 2025
- Journal of Dynamical and Control Systems
- Fritz Colonius + 1 more
Abstract For nonautonomous control systems with compact control range, associated control flows are introduced. This leads to several skew product flows with various base spaces. The controllability and chain controllability properties are studied and related to properties of the associated skew product flows.
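For context, in the autonomous case the control flow is the classical skew product construction of Colonius and Kliemann, which the paper extends to nonautonomous systems with further base spaces. A sketch of the classical construction, in the standard notation (the paper's notation may differ):

```latex
% \mathcal{U}: admissible controls u : \mathbb{R} \to U, with U compact.
% Shift on controls:        (\theta_t u)(s) = u(t + s)
% Solution map of \dot{x} = f(x, u):  \varphi(t, x, u)
% Control flow on \mathcal{U} \times M:
\Phi_t(u, x) = \bigl(\theta_t u,\; \varphi(t, x, u)\bigr),
\qquad \Phi_{t+s} = \Phi_t \circ \Phi_s .
```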
- Research Article
- 10.3390/make8010002
- Dec 21, 2025
- Machine Learning and Knowledge Extraction
- Hossein Shokouhinejad + 3 more
The increasing sophistication of malware has challenged the effectiveness of conventional detection techniques, motivating the adoption of Graph Neural Networks (GNNs) for their ability to model the structural and semantic information embedded in control flow graphs. While GNNs offer high detection performance, their lack of transparency limits their applicability in security-critical domains. To address this, we present an explainable malware detection framework, which contains a dual explainer. This dual explainer integrates a GNN explainer with a neural subgraph matching approach and the VF2 algorithm. The proposed method identifies and verifies discriminative subgraphs during training, which are later used to explain new predictions through efficient matching. To enhance the generalization of the neural subgraph matcher, we train it using curriculum learning, gradually increasing subgraph complexity to improve matching quality. Experimental evaluations on benchmark datasets demonstrate that the proposed framework retains high classification accuracy while significantly improving interpretability. By unifying explainable graph learning techniques with subgraph matching, the proposed framework enables analysts to gain actionable insights, fostering greater trust in GNN-based malware detectors.
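The subgraph-matching step — checking whether a learned discriminative subgraph occurs inside a new sample's control flow graph — can be illustrated with a bare-bones backtracking matcher. VF2 adds candidate ordering and aggressive pruning on top of this idea; the code below is a pedagogical stand-in, not the paper's implementation:

```python
def subgraph_match(pattern, target):
    """Backtracking search for an induced-subgraph embedding of `pattern`
    into `target`, both given as {node: set(neighbors)} adjacency dicts.
    Returns a pattern->target node mapping, or None if no embedding exists."""
    pnodes = list(pattern)

    def extend(mapping):
        if len(mapping) == len(pnodes):
            return dict(mapping)
        p = pnodes[len(mapping)]          # next pattern node to place
        for t in target:
            if t in mapping.values():
                continue
            # edges and non-edges to already-mapped nodes must agree
            if all((q in pattern[p]) == (mapping[q] in target[t])
                   for q in mapping):
                mapping[p] = t
                result = extend(mapping)
                if result:
                    return result
                del mapping[p]            # backtrack
        return None

    return extend({})
```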
- Research Article
- 10.3390/app16010012
- Dec 19, 2025
- Applied Sciences
- Longhao Ao + 1 more
Code search has received significant attention in the field of computer science research. Its core objective is to retrieve the most semantically relevant code snippets by aligning the semantics of natural language queries with those of programming languages, thereby contributing to improvements in software development quality and efficiency. As the scale of public code repositories continues to expand rapidly, the ability to accurately understand and efficiently match relevant code has become a critical challenge. Furthermore, while numerous studies have demonstrated the efficacy of deep learning in code-related tasks, the mapping and semantic correlations are often inadequately addressed, leading to the disruption of structural integrity and insufficient representational capacity during semantic matching. To overcome these limitations, we propose the Functional Program Graph for Code Search (called FPGraphCS), a novel code search method that leverages the construction of functional program graphs and an early fusion strategy. By incorporating abstract syntax tree (AST), data dependency graph (DDG), and control flow graph (CFG), the method constructs a comprehensive multigraph representation, enriched with contextual information. Additionally, we propose an improved metapath aggregation graph neural network (IMAGNN) model for the extraction of code features with complex semantic correlations from heterogeneous graphs. Through the use of metapath-associated subgraphs and dynamic metapath selection via a graph attention mechanism, FPGraphCS significantly enhances its search capability. The experimental results demonstrate that FPGraphCS outperforms existing baseline methods, achieving an MRR of 0.65 and ACC@10 of 0.842, showing a significant improvement over previous approaches.
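Of the three structures fused into the multigraph, the AST layer is the easiest to show concretely. With Python's built-in `ast` module, a snippet flattens into parent-child node-type edges (the paper's target languages, node features, and graph encodings may differ; the CFG and DDG layers and the GNN on top are out of scope here):

```python
import ast

def ast_edges(source):
    """Parse a Python snippet and return its abstract syntax tree as a
    list of (parent_node_type, child_node_type) edges -- the raw
    material for an AST-layer graph representation of code."""
    tree = ast.parse(source)
    edges = []
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            edges.append((type(parent).__name__, type(child).__name__))
    return edges
```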
- Research Article
- 10.3390/info16121109
- Dec 16, 2025
- Information
- Yaogang Lu + 2 more
High-quality test cases are vital for ensuring software reliability and security. However, existing symbolic execution tools generally rely on single-path search strategies, have limited feature extraction capability, and exhibit unstable model predictions. These limitations make them prone to local optima in complex or cross-scenario tasks and hinder their ability to balance testing quality with execution efficiency. To address these challenges, this paper proposes a Deep Active Ensemble Learning Framework for symbolic execution path exploration. During training, the framework integrates active learning with ensemble learning to reduce annotation costs and improve model robustness, while constructing a heterogeneous model pool to leverage complementary model strengths. In the testing stage, a dynamic ensemble mechanism based on sample similarity adaptively selects the optimal predictive model to guide symbolic path exploration. In addition, a gated graph neural network is employed to extract structural and semantic features from the control flow graph, improving program behavior understanding. To balance efficiency and coverage, a dynamic sliding window mechanism based on branch density enables real-time window adjustment under path complexity awareness. Experimental results on multiple real-world benchmark programs show that the proposed framework detects up to 16 vulnerabilities and achieves a cumulative 27.5% increase in discovered execution paths in hybrid fuzzing. Furthermore, the dynamic sliding window mechanism raises the F1 score to 93%.
- Research Article
- 10.1038/s41598-025-31209-5
- Dec 11, 2025
- Scientific Reports
- Ping Dai + 3 more
Current software defect prediction and code quality assessment methods treat these inherently related tasks independently, failing to leverage their complementary information. Existing graph-based approaches lack the ability to jointly model structural dependencies and quality characteristics, limiting their effectiveness in capturing the complex relationships between defect patterns and code quality indicators. This paper proposes a novel integrated model that simultaneously tackles both objectives using graph neural networks to leverage the inherent graph structure of software systems. Our novelty lies in the first-of-its-kind integration of multi-level graph representations (AST, CFG, DFG) with a dual-branch attention-based GNN architecture for simultaneous defect prediction and quality assessment. Our approach constructs multi-level graph representations by integrating abstract syntax trees, control flow graphs, and data flow graphs, capturing both syntactic and semantic relationships in source code. The proposed dual-branch GNN architecture employs shared representation learning with attention mechanisms and multi-task optimization to exploit complementary information between defect prediction and quality assessment tasks. Comprehensive experiments on six real-world software projects demonstrate significant improvements over traditional methods, achieving F1-scores of 0.811 and AUC values of 0.896 for defect prediction, while showing 9.3% average improvement in code quality assessment accuracy across multiple quality dimensions. The integration strategy proves effective in capturing complex structural dependencies and provides actionable insights for software development teams, establishing a foundation for intelligent software engineering tools that deliver comprehensive code analysis capabilities.
- Research Article
- 10.3390/electronics14244852
- Dec 10, 2025
- Electronics
- Afef Kchaou + 2 more
Multi-bit upsets (MBUs) are a growing reliability threat in high-density SDRAM, particularly in radiation-prone embedded systems. This paper presents a large-scale FPGA-based fault injection (FI) study targeting external SDRAM in a cache-enabled LEON3 SPARC V8 processor, with over 300,000 dual-bit MBUs injected across three diverse workloads: Fast Fourier transform (FFT), matrix multiplication (MulMatrix), and advanced encryption standard (AES). Our results reveal a profound dependence of MBU manifestation on application semantics: memory-intensive benchmarks (FFT, MulMatrix) exhibit high fault detectability through data store and access exceptions, while the AES workload demonstrates exceptional intrinsic masking, with the vast majority of MBUs producing no observable effect. These results demonstrate that processor vulnerability to MBUs is not uniform but fundamentally shaped by workload characteristics, including memory access patterns, control flow regularity, and algorithmic redundancy. The study provides a hardware-validated foundation for designing workload-aware fault tolerance strategies in space-grade and safety-critical embedded platforms.
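The injected fault model is simple to state: pick two distinct bit positions in a stored word and flip both. A pure-Python sketch of that corruption step (the actual campaign drives an FPGA-based injector, and physical MBUs are often adjacent bits, which this uniform model ignores):

```python
import random

def inject_dual_bit_mbu(word, rng, width=32):
    """Model a dual-bit multi-bit upset: flip two distinct, randomly
    chosen bit positions of a `width`-bit memory word."""
    b1, b2 = rng.sample(range(width), 2)   # two distinct bit positions
    return word ^ (1 << b1) ^ (1 << b2)
```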
- Research Article
- 10.47191/etj/v10i12.01
- Dec 8, 2025
- Engineering and Technology Journal
- Nurlaelah
Inflation and global economic uncertainty have greatly impacted the financial outcomes of construction projects in Indonesia. Increasing material costs, exchange rate volatility, and delayed payments have disrupted cash flow and heightened financing risks. This paper highlights the role of financial optimization as a responsive strategy to manage unpredictable economic challenges. Drawing from a conceptual framework and previous research, the article outlines four fundamental pillars of financial optimization for construction projects: risk-based financial planning, adaptive cost control, diversified financing, and dynamic cash flow management. Applying these strategies is expected to improve efficiency, financial resilience, and project sustainability. Consequently, building financial management capabilities is crucial for construction companies to sustain stability during economic uncertainty.
- Research Article
- 10.30871/jaic.v9i6.11090
- Dec 5, 2025
- Journal of Applied Informatics and Computing
- Usman Nurhasan + 2 more
This study develops and evaluates an automated assessment model using Abstract Syntax Trees (AST) with a view to overcoming the limitations of string-matching techniques in the assessment of Fill-in-the-Blank (FIB) programming answers. Traditional string-matching techniques have a relatively high False Negative Rate (FNR) of 21.5% within the context of detecting semantic equivalence. The current model uses semantic structural triangulation to ascertain the semantic similarity of student answers. Technical assessment shows that the AST approach markedly reduces the FNR to 4.5%. The model demonstrates high reliability (κ = 0.83) with high classification accuracy (F1 Score = 0.966), which attests to its inferential validity. From a pedagogical perspective, system implementation leads to substantial learning gains, evidenced by a large effect size (Cohen’s d = 1.82) and a high normalized gain (0.90). Multiple regression analysis confirms that semantic accuracy is the primary causal factor driving improved student comprehension. Ontologically, while AST is valid as a partial representation, its limitations—particularly tree isomorphism in recursive structures—highlight the need for further exploration of graph isomorphism approaches. Control Flow Graphs (CFG) and Data Flow Graphs (DFG) offer more expressive relational models for capturing control and data dependencies. The model demonstrates functional feasibility with a System Usability Scale (SUS) score of 76.47. Overall, the AST Triangulation Model is validated as pedagogically effective, inferentially robust, and supportive of evaluative transparency. Future research recommends validating the model on more complex tasks and releasing it as open-source to support reproducibility.
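The core move — grading by syntax-tree shape rather than string equality — fits in a few lines. Python's `ast` module serves for illustration here; the deployed system targets its own course languages and adds the triangulation layers the abstract describes:

```python
import ast

def structurally_equivalent(answer, reference):
    """Judge a fill-in-the-blank answer against the reference by AST
    equality rather than string equality, so formatting differences
    (spacing, redundant parentheses) no longer count as wrong answers.
    Unparseable answers are simply marked incorrect."""
    try:
        return ast.dump(ast.parse(answer)) == ast.dump(ast.parse(reference))
    except SyntaxError:
        return False
```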
- Research Article
- 10.1177/00368504251396112
- Dec 1, 2025
- Science Progress
- Juan Li + 2 more
Deviation detection has emerged as a critical research focus for business processes, enabling enterprises to prevent fraud, monitor anomalies, and safeguard the security of processes and data, particularly in the medical field. Despite its importance, existing methods face significant limitations. Some approaches focus solely on control flow deviations while neglecting data-induced deviations, whereas others rely on specific data, risking the exposure of personal privacy information. Consequently, a major challenge lies in balancing data availability for deviation detection with the imperative of preserving data privacy and security. To address this challenge, this paper proposes a multi-view deviation detection method based on privacy protection. First, data attributes critical to business processes are extracted using a random field model. Next, an identity and purpose-based data matching algorithm ensures the security of user identities and validates the intended use of data for privacy protection. Furthermore, the business process activity view regulates legally permissible data operations, while decision logic analysis links processes and data through decision tables to detect deviations. Beyond detecting deviations within each perspective, this method uncovers hidden deviations arising from the interplay of business process, data flow, and privacy perspectives. The evaluation using real-world medical event data demonstrates the method's effectiveness. Notably, it outperforms existing approaches by accurately identifying deviations that other methods fail to detect.
- Research Article
- 10.1038/s41556-025-01809-4
- Nov 28, 2025
- Nature Cell Biology
- Ruofei Li + 19 more
Nuclear condensates (NCs) are membraneless organelles that enable spatial and functional compartmentalization in the nucleus. Yet, the components and functional co-organization of NCs have been poorly studied. Here, we used PhastID to explore the proximal interactome of 18 NCs in HeLa cells. Our data revealed the organizational flow of gene control among these NCs. Crucially, we developed an algorithm to dissect the intricate internal relations of NCs. This algorithm led to key discoveries: the identification of an uncharacterized BUD13 condensate, and the recognition of specific co-organizations between nuclear gems and Cajal bodies for telomerase maturation, and between nuclear gems and histone locus bodies for histone gene pre-mRNA processing. We also created a global reference map to understand NC dynamics under stresses and how disease-related mutations differentially affect NC interactomes. Overall, our work provides a proximal proteome-based atlas for human NCs, substantially advancing our spatiotemporal understanding of nuclear biological events.
- Research Article
- 10.1038/s41467-025-65738-4
- Nov 28, 2025
- Nature Communications
- Nicholas Zolman + 4 more
Deep reinforcement learning (DRL) has shown significant promise for uncovering sophisticated control policies that interact in complex environments, such as stabilizing a tokamak fusion reactor or minimizing the drag force on an object in a fluid flow. However, DRL requires an abundance of training examples and may become prohibitively expensive for many applications. In addition, the reliance on deep neural networks often results in an uninterpretable, black-box policy that may be too computationally expensive to use with certain embedded systems. Recent advances in sparse dictionary learning, such as the sparse identification of nonlinear dynamics (SINDy), have shown promise for creating efficient and interpretable data-driven models in the low-data regime. In this work, we introduce SINDy-RL, a unifying framework for combining SINDy and DRL to create efficient, interpretable, and trustworthy representations of the dynamics model, reward function, and control policy. We demonstrate the effectiveness of our approaches on benchmark control environments and flow control problems, including gust mitigation on a 3D NACA 0012 airfoil at Re = 1000. SINDy-RL achieves comparable performance to modern DRL algorithms using significantly fewer interactions in the environment and results in an interpretable control policy orders of magnitude smaller than a DRL policy.
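The SINDy step at the heart of the framework is sequentially thresholded least squares over a library of candidate terms. A minimal NumPy version of that sparse regression (the paper layers DRL-driven data collection and surrogate dynamics, reward, and policy models on top of this building block):

```python
import numpy as np

def stlsq(theta, x_dot, threshold=0.1, iterations=10):
    """Sequentially thresholded least squares: fit x_dot ~ theta @ xi,
    then repeatedly zero out coefficients below `threshold` and refit
    each output column on the surviving library terms, yielding a
    sparse, interpretable model."""
    xi = np.linalg.lstsq(theta, x_dot, rcond=None)[0]
    for _ in range(iterations):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        for col in range(x_dot.shape[1]):
            big = ~small[:, col]
            if big.any():
                xi[big, col] = np.linalg.lstsq(
                    theta[:, big], x_dot[:, col], rcond=None)[0]
    return xi
```

On data generated by ẋ = 2x with a library [1, x, x²], the procedure recovers the single active term and zeroes the rest.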
- Research Article
- 10.4108/eetsis.10390
- Nov 26, 2025
- ICST Transactions on Scalable Information Systems
- Jiehua Lu + 1 more
As an integrated discipline encompassing data mining, machine learning, process modeling and analytics, process mining is increasingly being applied in the field of education and has emerged as a prominent research topic. Traditional business process modeling approaches, which are primarily based on control flow rather than data flow, exhibit a limited capacity to capture a holistic view of critical business data within complex business procedures. This study focuses on the impact of data-driven process modeling techniques on the performance of analytical models and proposes an artifact-centric process mining approach for learning style analysis. Based on the artifact life-cycle model, we extracted sequences of data attribute operations that encapsulate learning style features. The similarity among different data attribute operation sequences was quantified. The proposed method was evaluated using the OULAD, a benchmark dataset in the learning analytics domain. Experimental results demonstrate that the method effectively enhances the performance of learning style prediction models, with SVM and GBoost algorithms outperforming other modeling approaches.
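The abstract quantifies similarity between data-attribute operation sequences without naming the metric; edit distance is one natural choice for operation sequences and is shown here purely as an illustration, not as the paper's measure:

```python
def levenshtein(a, b):
    """Edit distance between two sequences: the minimum number of
    insertions, deletions, and substitutions turning `a` into `b`,
    computed with a rolling one-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]
```

The function works equally on strings and on lists of operation labels such as `["create", "update", "archive"]`.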
- Research Article
- 10.3390/electronics14224518
- Nov 19, 2025
- Electronics
- Rawan A Taha + 3 more
Cyber-physical power systems integrate sensing, communication, and control, ensuring power system resiliency and security, particularly in clustered networked microgrids. Software-Defined Networking (SDN) provides a suitable foundation by centralizing policy, enforcing traffic isolation, and adopting a deny-by-default policy in which only explicitly authorized flows are admitted. This paper proposes and experimentally validates a cyber-physical architecture that couples three DC microgrids through an SDN backbone to deliver rapid, reliable, and secure power sharing under highly dynamic conditions, including pulsed-load disturbances. The cyber layer comprises four SDN switches that establish dedicated paths for protection messages, supervisory control commands, and high-rate sensor data streams. An OpenFlow controller administers flow-rule priorities, link monitoring, and automatic failover to preserve control command paths during disturbances and communication faults. Resiliency is further assessed by subjecting the network to a deliberate denial-of-service (DoS) attack, where deny-by-default policies prevent unauthorized traffic while maintaining essential control flows. Performance is quantified through packet captures, which include end-to-end delay, jitter, and packet loss percentage, alongside synchronized electrical measurements from high-resolution instrumentation. Results show that SDN-enforced paths, combined with coordinated multi-microgrid control, maintain accurate power sharing. A validated, hardware testbed demonstration substantiates a scalable, co-designed communication-and-control framework for next-generation cyber-physical DC multi-microgrid deployments.
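The deny-by-default admission the paper enforces reduces to: match the packet against installed rules, let the highest-priority match decide, and drop anything unmatched. A toy flow table sketches this (dictionary matching on a few hypothetical header fields, not OpenFlow semantics or the testbed's controller):

```python
def admit(packet, flow_rules):
    """Deny-by-default flow admission: a packet dict is forwarded only
    if some rule's match fields all agree with it; among matches the
    highest priority wins, and unmatched traffic is dropped."""
    matches = [r for r in flow_rules
               if all(r["match"].get(k, packet[k]) == packet[k]
                      for k in packet)]
    if not matches:
        return "drop"                      # deny by default
    return max(matches, key=lambda r: r["priority"])["action"]
```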