Related Topics
Articles published on Design flow
4080 Search results
- Research Article
- 10.47176/jafm.19.4.3918
- Apr 1, 2026
- Journal of Applied Fluid Mechanics
- N Li + 4 more
This paper presents a novel differential-type high-flow safety valve, with a rated flow of 3000 L/min and a rated pressure of 50 MPa, aimed at enhancing the impact resistance and stability of hydraulic supports. Based on the Ansys Fluent platform, dynamic mesh technology and User-Defined Functions (UDF) were employed to identify the optimal damping hole radius for the differential high-flow safety valve. The transient fluid characteristics throughout the opening process until stable unloading were simulated for valves featuring damping hole radii of 1 mm, 2 mm, and 3 mm. Based on the optimal damping hole radius, high-flow safety valve test samples were developed and a rapid dynamic loading shock test rig was constructed to evaluate their shock resistance characteristics. Results indicate that a damping hole radius of 2 mm achieves the best overall performance in both transient response characteristics and operational stability. The differential high-flow safety valve demonstrates a rated flow and pressure of approximately 2996 L/min and 49.4 MPa respectively, with a valve core opening time under 2 ms, an unloading time under 5 ms, and a pressure overshoot below 20%. These findings validate the structural rationality of the differential high-flow safety valve and confirm its advantages in rapid unloading and excellent impact resistance.
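The reported overshoot bound can be sanity-checked with simple arithmetic, assuming overshoot is defined as the peak pressure's excess over rated pressure (a common convention; the abstract does not state the paper's exact definition):

```python
def overshoot_pct(p_peak_mpa: float, p_rated_mpa: float) -> float:
    """Pressure overshoot as a percentage of the rated pressure."""
    return 100.0 * (p_peak_mpa - p_rated_mpa) / p_rated_mpa

# With a rated pressure of 49.4 MPa, a 20% overshoot bound
# corresponds to a peak pressure below about 59.3 MPa.
peak_limit = 49.4 * 1.20
print(round(peak_limit, 2))                      # 59.28
print(round(overshoot_pct(59.28, 49.4), 1))      # 20.0
```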
- Research Article
- 10.1145/3801552
- Mar 9, 2026
- ACM Transactions on Design Automation of Electronic Systems
- Nanjiang Qu + 3 more
Logic rewriting, a critical and time-consuming task in synthesis, is widely employed in the integrated circuit (IC) design flow because it offers the unique advantages of strong optimization and independence from technology. However, existing solutions either employ locks to guarantee safe inter-node parallelism (at the cost of limiting parallelism), or parallelize the sub-procedures of rewriting for individual nodes without adequately considering logical sharing (at the cost of inevitably decreasing quality). In this paper, we present DACPara 2.0, a fast, enhanced, and easily extensible parallel framework for high-quality logic rewriting. Our key insight is that, due to their characteristics, different large-scale circuits should adopt different parallel mechanisms for this task, enabling significant improvements in parallelism and scalability. In this spirit, for circuits with complex logic, we propose a divide-and-conquer parallel approach to exploit intra-graph parallelism (i.e., parallelism among nodes and their sub-procedures within the same And-Inverter Graph (AIG)), which separates three substages and redesigns them using dynamic global information. In this process, the nodes in an AIG are executed bottom-up in a level-wise parallel fashion. On the other hand, for heavily pipelined industrial designs where each pipeline stage is represented as a different copy of the same design, we propose a conflict-free sub-AIG parallel approach featuring an ingenious fanout-based partitioning strategy to exploit inter-graph parallelism (i.e., parallelism between independent sub-AIGs). Experiments show that DACPara 2.0 using 40 physical CPU cores achieves 52.86×/42.25× speedup in rewriting/total runtime compared to logic rewriting in ABC, and 3.27×/2.61× speedup over the state-of-the-art CPU parallel method, with comparable quality of results. Also, for all large-scale circuits with complex logic, DACPara 2.0 achieves a 0.4% improvement in quality compared to the state-of-the-art GPU-accelerated method.
- Research Article
- 10.3390/app16052605
- Mar 9, 2026
- Applied Sciences
- Dongjing Chen + 6 more
Gas–liquid cyclone separators are an efficient and emerging method for air removal in hydraulic systems, yet they often suffer from excessive pressure loss. A novel contracting inlet guiding structure is proposed to minimize hydraulic losses. This study adopts a comprehensive methodology combining theoretical modeling, computational fluid dynamics (CFD) using the Reynolds Stress Model (RSM), and experimental validation. A theoretical pressure-loss model incorporating the diminishing-returns effect of the contraction angle was established. Simulations revealed that increasing the contraction angle reduces energy dissipation by improving the uniformity of the tangential-velocity field. Based on the balance between pressure-loss reduction and degassing potential, a contraction angle of 11° was identified as the optimal design, and experimental tests on a prototype confirmed the validity of the numerical model. The results demonstrate that, compared to the conventional straight tangential inlet, the optimized inlet reduces pressure loss by approximately 30% under rated conditions. The experimental–numerical discrepancy decreases significantly with flow rate, reaching a relative error of approximately 10% at the design flow rate. These findings provide a theoretical basis and practical guidance for the low-energy design of hydraulic cyclone separators.
- Research Article
- 10.3390/electronics15051048
- Mar 2, 2026
- Electronics
- Emilio Isaac Baungarten-Leon
This article aims to synthesize the current ecosystem of open-source tools for Integrated Circuit (IC) design, covering the entire digital design flow from Register-Transfer Level (RTL) description to fabricable layouts. The survey categorizes and analyzes tools across major stages of design, including code-generation tools, logic synthesis, simulation, and physical design flow. Special emphasis is given to the fabricable open-source Process Design Kit (PDK), which enables the physical realization of open-hardware projects. By examining interoperability, limitations, and maturity across this toolchain, the article provides a comprehensive overview of the Electronic Design Automation (EDA) landscape and identifies the research and educational opportunities that arise from democratizing silicon design through open and reproducible workflows.
- Research Article
- 10.1016/j.fuel.2025.137447
- Mar 1, 2026
- Fuel
- Wenshuai Xing + 5 more
Design and transient flow analysis of a new rotor profile for hydrogen circulating pumps
- Research Article
- 10.1109/tcad.2025.3597237
- Mar 1, 2026
- IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
- Max Uhlmann + 10 more
Neural hardware accelerators have demonstrated notable energy efficiency in tackling tasks that can be adapted to artificial neural network (ANN) structures. Research is currently directed toward leveraging resistive random-access memories (RRAMs) among various memristive devices. In conjunction with complementary metal-oxide-semiconductor (CMOS) technologies within integrated circuits (ICs), RRAM devices are used to build such neural accelerators. In this study, we present a neural accelerator hardware design and verification flow that uses a lookup table (LUT)-based Verilog-A model of IHP’s one-transistor-one-RRAM (1T1R) cell. In particular, we address the challenges of interfacing between abstract ANN simulations and circuit analysis by including a tailored Python wrapper in the design process for resistive neural hardware accelerators. To demonstrate the efficacy of the proposed design flow, we evaluate an ANN for the MNIST handwritten digit recognition task, as well as for the CIFAR-10 image recognition task, with the last layer verified through circuit simulation. Additionally, we implement different versions of a 1T1R model, based on quasi-static measurement data, providing insights into the effect of conductance level spacing and device-to-device variability. The circuit simulations cover both schematic and physical layout assessment. The resulting recognition accuracies exhibit significant differences between the purely application-level PyTorch simulation and our proposed design flow, highlighting the relevance of circuit-level validation for the design of neural hardware accelerators.
- Research Article
- 10.3390/aerospace13020174
- Feb 12, 2026
- Aerospace
- Mark A Miller + 2 more
Systems with large physical size such as wind turbines, aircraft, and ships are dominated by the inertia of the flow. In conventional experimental facilities, a reduction in scale is required, which can introduce viscous effects that are not present at full size. However, if the wind tunnel is operated with a heavy gas, the reduction in scale can be counteracted by an increase in density, and the flow that exists at full size can be recreated accurately. This work describes the design, construction, and basic flow characterization of a heavy gas wind tunnel facility, known as the Compressed Air Wind Tunnel (CAWT), that utilizes pressurized air as the working fluid at pressures up to 35 bar. The tunnel was designed to accommodate relatively large models inside the 1.04 meter-diameter test section while having improved optical access compared to existing facilities of this type. A series of flow characterization tasks were carried out on the completed facility, including quantifying the turbulence intensity and flow uniformity in the tunnel test section. Measurements showed a maximum turbulence intensity of 0.46% and an average of 0.22% across all conditions and locations tested. The maximum velocity non-uniformity between four locations in the test section was 0.36%, which occurred at the lowest tested wind speed of 2.4 m/s. The average non-uniformity across all tested conditions was less than 0.093%. Mapping the facility operating space has now enabled ongoing work examining rotorcraft, marine propeller, and wind turbine performance and wake development with the aim of answering long-standing questions regarding how the fluid dynamics depend on scale or Reynolds number effects.
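The scale-compensation argument above can be illustrated with the Reynolds number Re = ρUL/μ, assuming ideal-gas density (ρ proportional to pressure at fixed temperature) and a viscosity that is roughly pressure-independent; these are standard gas-dynamics approximations, not figures taken from the paper:

```python
def reynolds_ratio(pressure_bar: float, scale: float) -> float:
    """Model-to-full-scale Reynolds number ratio at equal velocity.

    Re = rho * U * L / mu. With rho proportional to pressure (ideal gas)
    and mu approximately pressure-independent, Re scales as
    pressure * model length, normalized here to 1 bar at full scale.
    """
    return pressure_bar * scale

# A 1/35-scale model tested at 35 bar recovers the full-scale
# Reynolds number at the same velocity (ratio close to 1).
print(reynolds_ratio(35.0, 1.0 / 35.0))
```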
- Research Article
- 10.1145/3796529
- Feb 11, 2026
- ACM Transactions on Design Automation of Electronic Systems
- Fuping Li + 12 more
With the slowdown of Moore’s Law, conventional monolithic chip architectures face challenges such as excessive die sizes and prohibitive manufacturing costs. Consequently, chiplets have emerged as a pivotal technology in the post-Moore era, attracting significant attention from both academia and industry. Multi-chiplet systems offer compelling advantages over monolithic ones, including enhanced integration density, reduced cost, and shortened time-to-market. However, realizing these benefits necessitates design flows capable of optimizing parameters across logical, physical, and circuit layers, which introduces substantial design complexity. Numerous design automation technologies have been proposed to address these challenges. This paper provides a comprehensive overview of related advancements, categorizing chiplet design methodologies into two primary types: i) top-down flows disintegrating existing hardware designs into chiplets and subsequently reintegrating them into multi-chiplet systems, and ii) bottom-up flows combining existing chiplets into multi-chiplet systems based on user applications. This paper begins by introducing foundational concepts, technical characteristics, and evaluation models relevant to multi-chiplet systems. We then systematically summarize the problem formulations, design spaces, and optimization techniques associated with top-down and bottom-up design flows. Finally, we discuss key challenges and potential future research directions in chiplet design automation, aimed at further harnessing the potential of chiplet-based integration.
- Research Article
- 10.1145/3785334
- Feb 4, 2026
- ACM Computing Surveys
- Benjamin Carrion Schafer + 1 more
Approximate Computing in hardware design has emerged as an alternative way to further reduce the power consumption of integrated circuits (ICs) by trading off errors at the output for simpler, more efficient logic. So far, the main approach in approximate computing has been to simplify the hardware circuit by applying approximation primitives of varying aggressiveness to the original hardware description until the maximum error threshold is met. Several of these primitives can also be combined to obtain better results. These primitives are often applied at different VLSI design stages to maximize their effect. Because of the importance of this topic, there exists a very large body of work, and multiple surveys have tried to cover all of it. In this work, we take a different approach and concentrate only on approximate computing techniques applied at the High-Level Synthesis (HLS) stage of the VLSI design flow. The reason for this is that approximations applied at the highest possible level of VLSI design abstraction also have the highest impact on the resulting circuit. Moreover, HLS is finally being widely embraced by hardware designers, and this work presents practical examples of how the different approximation primitives can be easily applied using commercial HLS tools. We finally present some typical pitfalls that designers should avoid when using approximate computing and point to some future directions in this area.
- Research Article
- 10.1016/j.indic.2025.101092
- Feb 1, 2026
- Environmental and Sustainability Indicators
- Joseph Holway + 2 more
Lotic environments support the food security of hundreds of millions globally, yet tradeoffs among freshwater-dependent food systems remain poorly understood. In the Lower Mekong Basin, where rice and fish production systems are highly heterogeneous, we modeled harvest outcomes using multivariate autoregressive state-space (MARSS) models with flood magnitude as a key driver, measured by the High Seasonal Amplitude Metric (HSAM). High HSAM values (0.5) positively affected floodplain (FP) fish catch and Cà Mau rice harvests, while moderate values (0.1) had negative effects. In contrast, Dai fish catch and Cambodian rice harvests responded positively to moderate HSAM values but negatively to high values. Based on these patterns, we engineered four flow regimes optimized for each system. Forecasts over 10 years showed that each engineered hydrograph increased harvest for its target system. Some tradeoffs emerged: the FP hydrograph boosted FP fish catch and Cà Mau rice but reduced Dai fish and Cambodian rice; the Cambodian rice hydrograph showed the reverse. Alternating between high and moderate HSAM values mitigated risk to individual systems, improving FP fish catch while having mixed or neutral effects elsewhere. Setting HSAM at 0.19 stabilized production across all four systems, balancing tradeoffs and maintaining current yields. These results highlight the potential to deliberately manage hydrologic regimes to co-optimize food production systems. Expanding hydrologic objectives beyond power generation is essential for sustaining ecosystem services, maintaining regional food security, and staying within planetary boundaries.
- Lotic environments provide food security to millions worldwide
- Productivity of freshwater ecosystems is globally threatened
- MARSS modeling results show promise for resource co-optimization
- Science-advised freshwater resource management creates sustainable futures
- Research Article
- 10.1063/5.0306649
- Feb 1, 2026
- Physics of Fluids
- Sumei Li + 4 more
Axial flow pumps, widely used for drainage and irrigation, face erosion risks in sediment-laden rivers due to particle–fluid interactions with flow components. This study applies the discrete phase model and the Tabakoff erosion model to evaluate erosion and its impact on hydraulic performance. Numerical simulations show that impeller blades suffer the most severe erosion, with an intensity about 5–6 times greater than that of the guide vanes. Under sediment-laden conditions, both head and efficiency decrease compared with clear water, with reductions of 5.26% and 8.73% at the design flow rate. Erosion patterns vary with flow rate: at low flow rates, erosion concentrates on the pressure side, but it shifts to the suction side at 1.3 Qd. Larger particles cause a rapid decline in head and efficiency, with performance stabilizing at particle sizes over 0.5 mm. Increasing sediment concentration from 0.01 to 0.25 further decreases head and efficiency by 4.67% and 15.28%, respectively. Both particle size and concentration linearly intensify erosion, expanding the affected area and causing flow nonuniformity near the shroud. The findings provide a theoretical basis for the erosion-resistant design of axial flow pumps, offering key insights into enhancing their lifespan and performance in sediment-laden environments.
- Research Article
- 10.1002/inst.70032
- Feb 1, 2026
- INSIGHT
- Domenik Helms + 2 more
Automotive software is undergoing rapid change toward artificial intelligence and toward ever greater connectedness with other systems. For both, an incremental design paradigm is desired, where the car's software is frequently updated after production but can still guarantee the highest automotive safety standards. We present a design flow and tool framework enabling a DevOps paradigm for automotive software development. DevOps means that software is developed in a continuous loop of development, deployment, usage in the field, collection of runtime data, and feedback to the developers for the next design iteration. The software developers are supported in defining, developing, and verifying new software functions based on the data gathered in the field by the previous software generation. The software developers can define contracts describing the timing and resource assumptions on the integration environment and guarantees for other dependent software components in the system. These contracts allow software components to be composed and proof obligations to be discharged at design time through virtual integration testing and at runtime through continuous monitoring of assumptions and guarantees on the software components' interfaces. An update package, consisting of the software component and its contracts, is then automatically created, transferred over the air, and deployed in the car. Monitors derived from the contracts allow for supervising the system's behavior, detecting failures at runtime, and annotating the situation to be included in a data collection, fueling the next design iteration.
- Research Article
- 10.1145/3795509
- Jan 29, 2026
- ACM Transactions on Design Automation of Electronic Systems
- Rijoy Mukherjee + 1 more
High-Level Synthesis (HLS)-based VLSI design flows allow designers to start from high-level specifications in a general-purpose programming language like C/C++ and automatically generate an optimized hardware design in a hardware description language (HDL) like Verilog or VHDL. The supply of compromised computer-aided design (CAD) tools by an electronic design automation (EDA) vendor to chip designers is a serious threat that adversely affects the horizontal semiconductor business model. Recent works have examined the potential security issues induced by a compromised HLS CAD tool and demonstrated how HLS is a prime candidate for hardware Trojan (HT) insertion into any underlying design, since it is hard to correlate the high-level description with the generated register-transfer level (RTL) code. Further, compiler-generated intermediate representation (IR) has been shown to be a likely attack vector for inserting HTs into the RTL during an HLS-based IC design flow, taking advantage of the lack of automated methods to analyse logic inside a complex LLVM IR. In this work, we propose, implement, and evaluate a novel HLS security verification framework leveraging modern large language models (LLMs). Specifically, we focus on detecting the HTs introduced by the Black-Hat HLS and HLS-IRT toolchains by performing functional verification with an LLM. The experimental results show that LLMs have an impressive ability to analyse and automatically identify these hardware security anomalies.
- Research Article
- 10.1145/3795530
- Jan 29, 2026
- ACM Transactions on Design Automation of Electronic Systems
- Xiaotian Zhao + 5 more
Physical dataflow, which defines the detailed connections among cells and macros, is a critical yet underexplored factor in automatic macro placement. It becomes increasingly important for enabling intelligent design automation to minimize manual intervention and reduce design iterations. Existing macro or mixed-size placers with dataflow awareness primarily focus on intrinsic relationships among macros, overlooking the crucial influence of standard cell clusters on macro placement. To address this, we propose DARE, which extracts hidden connections between macros and standard cells and incorporates a series of algorithms to enrich dataflow awareness, integrating them into placement constraints for improved macro placement. To further optimize placement results, we introduce two fine-tuning steps: (1) congestion optimization by taking macro area into consideration, and (2) flipping decisions to determine the optimal macro orientation based on the extracted dataflow information. By integrating enhanced dataflow awareness into placement constraints and applying these fine-tuning steps, the proposed approach achieves an average 7.9% improvement in half-perimeter wirelength (HPWL) across multiple widely used benchmark designs compared to a state-of-the-art dataflow-aware macro placer. Additionally, it significantly improves congestion, reducing overflow by an average of 82.5%, and achieves improvements of 36.97% in Worst Negative Slack (WNS) and 59.44% in Total Negative Slack (TNS). The approach also maintains efficient runtime throughout the entire placement, incurring less than a 1.5% runtime overhead. These results show that the proposed dataflow-driven methodology, combined with the fine-tuning steps, provides an effective foundation for macro placement within the OpenROAD flow and can be further extended to other design flows in the future to enhance placement quality.
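HPWL, the wirelength metric reported above, is the standard placement proxy: for each net, take the width plus height of the bounding box of its pin locations, then sum over nets. A minimal computation (with made-up pin coordinates for illustration) looks like:

```python
def hpwl(nets):
    """Half-perimeter wirelength: for each net (a list of (x, y) pin
    coordinates), add the width + height of the pins' bounding box."""
    total = 0.0
    for pins in nets:
        xs = [x for x, _ in pins]
        ys = [y for _, y in pins]
        total += (max(xs) - min(xs)) + (max(ys) - min(ys))
    return total

# Two hypothetical nets: a 3-pin net and a 2-pin net.
nets = [
    [(0, 0), (4, 1), (2, 3)],  # bbox 4 wide, 3 tall -> 7
    [(1, 1), (3, 5)],          # bbox 2 wide, 4 tall -> 6
]
print(hpwl(nets))  # 13.0
```

A placer that reduces HPWL by 7.9%, as reported for DARE, is shortening these bounding boxes on average across all nets.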
- Research Article
- 10.1038/s41598-026-36822-6
- Jan 27, 2026
- Scientific reports
- Ahmed Ramadhan Al-Obaidi + 1 more
Pumps operated in turbine mode have attracted considerable attention for hydropower generation and water conveyance applications due to their economic advantage over conventional hydroturbines. Despite this benefit, their deployment remains constrained by limited flow controllability and pronounced instability when operating away from the design point. To address these challenges, the present work combines experimental measurements with numerical simulations to examine the unsteady flow behavior of an axial-flow pump under five distinct operating regimes, spanning deep part-load conditions at 5 L/min through the design point and into overload operation at 12.5 L/min. Pump stability was evaluated through detailed analyses of velocity distributions and pressure fluctuations in both the time and frequency domains. The results reveal a strong dependence of unsteady behavior on operating condition. At part-load operation, pressure pulsations intensify markedly, with peak-to-peak amplitudes increasing by as much as 15% relative to the design flow rate. Spectral analysis shows that rotor-stator interaction phenomena dominate the unsteady response, with the blade passing frequency and its harmonics contributing over 12% of the total spectral energy across most monitoring locations. As the flow rate approaches overload, the magnitude of pressure oscillations is reduced by approximately 14%, indicating a progressive improvement in hydraulic stability. The effect of impeller blade stagger was further investigated for three configurations, namely −3°, 0°, and +3°. Deviations from the baseline geometry (0°) significantly amplify flow unsteadiness, particularly in the rotor-stator interaction region. In these cases, pressure pulsation amplitudes increase by up to 16%, highlighting the sensitivity of unsteady flow structures to blade-angle modification.
Overall, the findings demonstrate that both operating regime and impeller blade angle exert a decisive influence on the stability and dynamic performance of axial-flow pumps, offering valuable insights for their optimal design and operation under variable flow conditions.
- Research Article
- 10.3390/app16031208
- Jan 24, 2026
- Applied Sciences
- Jianyi Wu + 4 more
In fields such as rock and soil grouting and petroleum extraction, the flow of water driven by an immiscible fluid (or vice versa) within a porous medium is frequently encountered. Due to the presence of an interface between the two fluids, whose position changes over time and must be solved concurrently with the fluid pressure field, this issue represents a special two-phase moving boundary problem. In this paper, fundamental governing equations for this moving boundary problem in one-dimensional Cartesian, cylindrical, and spherical coordinate systems are developed. Analytical solutions for the pore pressure distribution and interface movement are obtained through the method of similarity transformation. By disregarding the pressure variation in the original underground water, this two-phase moving boundary problem can be reduced to a one-phase moving boundary problem. Consequently, analytical solutions for this one-phase problem are also obtained. The analytical solutions mainly address specific boundary conditions. For cases with general boundary conditions, numerical solutions are provided through a combination of the finite volume method and a moving node approach. By assuming the instantaneous establishment of a steady-state pore pressure distribution within the medium, the transient two-phase flow model is transformed into a quasi-steady model. Subsequently, an approximate solution for the quasi-steady model is also established. After verifying the model solutions, computational examples are presented to evaluate the effectiveness of the one-phase approximation and the quasi-steady approximation. The one-phase model tends to underestimate fluid pressure within the porous medium under pressure boundary conditions, thereby overestimating the movement speed of the two-phase interface. Additionally, under flow rate boundary conditions, the one-phase model tends to underestimate the pressure required to achieve the design flow rate.
As the stiffness of the porous medium increases, the influence of the pressure variation rate term in the transient model equations gradually diminishes. Consequently, the interface movement and pore pressure distribution obtained from the quasi-steady solutions are essentially consistent with those obtained from the transient model, and the quasi-steady solutions are convenient to apply under these circumstances.
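As a rough illustration of the similarity-transformation idea (a sketch assuming a linear diffusion-type pressure equation; the paper's actual two-phase governing equations are more involved), a 1D Cartesian pore-pressure equation admits a self-similar solution:

```latex
\frac{\partial p}{\partial t} = c\,\frac{\partial^2 p}{\partial x^2},
\qquad
\eta = \frac{x}{2\sqrt{c\,t}}
\;\Longrightarrow\;
p(x,t) = A + B\,\operatorname{erf}(\eta).
```

An interface advancing as $x_s(t) \propto \sqrt{t}$ then corresponds to a constant value of $\eta$ on the moving boundary, which is why the interface position and the pressure field can be solved simultaneously in closed form for suitable boundary conditions.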
- Research Article
- 10.1038/s41467-026-68672-1
- Jan 24, 2026
- Nature communications
- Songkai Liu + 6 more
Conceptual engineering system design poses inherently complex, dynamic, and creativity-driven demands that traditional methods and emerging AI tools struggle to fully address. iDesignGPT is a framework that integrates large language models with established design methodologies to enable dynamic multi-agent collaboration for problem refinement, information gathering, design space exploration, and evaluation. By incorporating design metrics such as coverage, diversity, and novelty, iDesignGPT provides quantitative insights for early-stage conceptual design. Performance evaluations across six public design challenges show that iDesignGPT achieves competitive novelty and consistently higher originality and modularity than GPT-4o zero-shot, GPT-4o chain-of-thought, and Deepseek-r1, based on metrics and expert assessments. Two controlled user studies show positive reception across profiles and, for novice designers, lower mental demand than human-only design and a clearer design flow with iDesignGPT. These results establish iDesignGPT as a practical framework for integrating language-model agents with established engineering design methods, enabling metrics-driven support for conceptual design by both expert and novice designers.
- Research Article
- 10.1038/s41598-026-35329-4
- Jan 23, 2026
- Scientific Reports
- Mohammed Jameel + 5 more
This paper presents a robust multi-objective optimization approach, the multi-objective starfish optimization algorithm (MOSFOA), designed to address complex challenges in engineering design and optimal power flow analysis. As an advanced extension of the starfish optimization algorithm (SFOA), MOSFOA leverages biological inspiration from starfish behaviors such as exploration, predation, and regeneration to balance global exploration and local exploitation. The proposed MOSFOA employs elitist non-dominated sorting (NDS) and crowding distance (CD) mechanisms to preserve solution diversity and guide convergence toward the Pareto-optimal front. The effectiveness of MOSFOA is validated on standard ZDT and DTLZ benchmark suites and further demonstrated on real-world applications, including engineering design tasks and the IEEE 30-bus power system. Performance comparisons with ten state-of-the-art multi-objective algorithms, using metrics such as inverted generational distance (IGD) and hypervolume (HV), confirm the strength of MOSFOA in achieving a well-balanced trade-off between convergence and diversity. Additionally, the KKT proximity metric (KKTPM) is employed to assess convergence. The results demonstrate that MOSFOA significantly outperforms its counterparts in terms of both IGD and HV, achieving superior convergence and diversity performance. These findings underscore MOSFOA’s robustness, scalability, and stability across runs. Moreover, its strong performance in handling constrained engineering problems highlights its practical potential for real-world decision-making and optimization tasks in power systems and complex design optimization, making MOSFOA a promising tool for both theoretical research and industrial applications. The source code of MOSFOA is publicly available at https://www.mathworks.com/matlabcentral/fileexchange/183090-mosfoa-multi-objective-starfish-optimization-algorithm.
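The non-dominated sorting that MOSFOA (like other NSGA-II-style methods) relies on reduces to a Pareto-dominance test; a minimal sketch for minimization objectives, with made-up objective vectors:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization):
    a is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_front(points):
    """Return the points not dominated by any other point (the first front)."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q is not p)]

# (3, 3) is dominated by (2, 2); the other three form the Pareto front.
pts = [(1, 5), (2, 2), (4, 1), (3, 3)]
print(non_dominated_front(pts))  # [(1, 5), (2, 2), (4, 1)]
```

Repeatedly peeling off successive fronts like this, then breaking ties within a front by crowding distance, is the elitist NDS + CD scheme the abstract refers to.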
- Research Article
- 10.1145/3779423
- Jan 19, 2026
- ACM Transactions on Design Automation of Electronic Systems
- Hao-Hsiang Hsiao + 3 more
Traditional Design Space Exploration (DSE) methods in Physical Design (PD), such as Bayesian Optimization (BO) and Ant Colony Optimization (ACO), as well as state-of-the-art commercial tools like Synopsys DSO.ai, typically treat the design flow as a black box, lacking insight into the underlying designs. This hinders their ability to generalize across unseen designs. In this paper, we introduce FastTuner, an innovative Reinforcement Learning (RL) agent that leverages Graph Neural Networks (GNNs) and Transformers to understand the underlying designs and enable rapid DSE on unseen designs across various PD stages. Our approach incorporates an attention-based framework for autoregressive and conditional parameter tuning and introduces a power, performance and area (PPA) estimator to predict end-of-flow PPA metrics, significantly accelerating RL reward computation. Extensive evaluations on seven industrial designs using the TSMC 28nm technology node demonstrate that FastTuner significantly outperforms existing state-of-the-art DSE techniques in both optimization quality and runtime, achieving improvements of up to 79.38% in Total Negative Slack (TNS), 12.22% in total power, and more than 50x reduction in runtime.
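The timing metrics FastTuner optimizes have simple definitions worth recalling: WNS is the worst (most negative) endpoint slack, and TNS is the sum of all negative slacks. A toy computation, with hypothetical slack values:

```python
def wns(slacks):
    """Worst Negative Slack: the minimum slack, clamped to 0 when all paths meet timing."""
    return min(min(slacks), 0.0)

def tns(slacks):
    """Total Negative Slack: sum of all failing (negative) endpoint slacks."""
    return sum(s for s in slacks if s < 0)

slacks_ns = [0.3, -0.12, 0.05, -0.40, -0.07]  # hypothetical endpoint slacks in ns
print(wns(slacks_ns))             # -0.4
print(round(tns(slacks_ns), 2))   # -0.59
```

A "79.38% improvement in TNS" means this negative sum moved that much closer to zero across the design's failing endpoints.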
- Research Article
- 10.3390/chips5010002
- Jan 13, 2026
- Chips
- Clayton R Farias + 2 more
Semiconductor technologies are susceptible to radiation effects. Particle incidence in susceptible areas of an integrated circuit (IC) can generate physical interactions capable of producing errors. This paper predicts IC cross sections for Single Event Effects. The cross section is a metric that quantifies an IC’s susceptibility to radiation, accounting for particle–source interaction and physical design volumes. This work evaluates the IC cross section by exploring the physical design characteristics of susceptible regions in logic gates. It examines particles with low LET, identifying the charge collection areas, and uses heavy ions to evaluate the critical cross section range. Distinct benchmark circuits were simulated to characterize sensitivity trends. The influence of circuit input conditions, together with the cells’ susceptibility, reveals significant findings. The results indicate a difference of up to ten times between low- and high-energy particles. Consequently, predicting the IC cross section at an early stage of the design flow is essential, especially for electronic devices used in radiation environments.