Articles published on Modular Framework
1649 Search results
- New
- Research Article
- 10.1038/s41467-026-69110-y
- Feb 6, 2026
- Nature communications
- Xinyi Wei + 5 more
The increasing demand for renewable energy integration and scalable power generation highlights the need for efficient and cost-effective solid oxide fuel cell systems. In this study, we present a modular hybrid design framework that enables flexible solid oxide fuel cell scale-up by interconnecting standardized component modules. We introduce a series-parallel configuration that strategically leverages anode and cathode off-gas recirculation to enhance both electrical and thermal efficiency. Through a detailed case study, we demonstrate that the hybrid design achieves 66.3% electrical efficiency while reducing external water use by 59.9% and fresh air demand by 22%, outperforming conventional system designs. We further conducted a techno-economic analysis across four scale-up strategies and found that the hybrid design delivers the lowest levelized cost of electricity at 0.155 $/kWh. Through this work, we have highlighted the critical trade-offs between centralization and decentralization, high- and low-technology readiness level technologies, and economies of scale versus manufacturing capacity. We believe our findings underscore the potential of modular and standardized systems to provide scalable, efficient, and economically viable solutions for future low-carbon energy infrastructures.
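The levelized cost of electricity quoted above follows the standard definition: discounted lifetime cost divided by discounted lifetime energy. The abstract does not give the study's cost inputs, so the function and figures below are a generic sketch with hypothetical values, not the paper's techno-economic model.

```python
def lcoe(capex, annual_opex, annual_kwh, lifetime_years, discount_rate):
    """Levelized cost of electricity ($/kWh):
    discounted lifetime cost over discounted lifetime energy."""
    cost = capex + sum(annual_opex / (1 + discount_rate) ** t
                       for t in range(1, lifetime_years + 1))
    energy = sum(annual_kwh / (1 + discount_rate) ** t
                 for t in range(1, lifetime_years + 1))
    return cost / energy

# Hypothetical inputs: $1M capex, $80k/yr opex, 1 GWh/yr, 20 yr, 7% discount
example = lcoe(1_000_000.0, 80_000.0, 1_000_000.0, 20, 0.07)
```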
- New
- Research Article
- 10.1111/rec.70336
- Feb 5, 2026
- Restoration Ecology
- Irina Cristal + 4 more
Introduction: Altered wildfire regimes, exacerbated by unsustainable management, threaten natural ecosystem recovery post‐fire. Effective restoration requires timely fire impact assessments and tailored, evidence‐based management. While fire databases and Environmental Impact Assessment (EIA) frameworks partially support decision‐making, a holistic platform linking assessment, planning, and operational actions is still lacking. Objectives: Our goal was to develop and test a web‐based Post‐Fire Spatial Decision Support System (PF‐SDSS) that facilitates decision‐making across three post‐fire management levels: problem definition, strategic planning, and operational management. Methods: PF‐SDSS integrates satellite imagery with high‐resolution cartography in a participatory multi‐criteria analysis (MCA), using server‐ and cloud‐based computing for real‐time analyses. The generated soil erosion risk (SER) and vegetation recovery potential (VRP) maps underpin rule‐based restoration prioritization and recommendations that provide site‐specific practices derived from a comprehensive literature review. Field validation (Spearman's correlation), sensitivity analysis (MCA weight variations), and usability evaluation (System Usability Scale [SUS] method) assessed the system's performance. Results: PF‐SDSS is freely available online, with a demonstration for Ávila Province, Spain. Validation showed significant correlations for SER (ρ = 0.56) and VRP (ρ = 0.42). Sensitivity analysis confirmed MCA robustness under 20% weight variations, and the 75% SUS score indicated satisfactory usability and acceptance among end‐users. Conclusions: This study automated the post‐wildfire management planning cycle within a modular framework. The EIA module supports problem definition by mapping fire impacts. The strategic planning module identifies priority areas and sets site‐specific management objectives. The operational planning module offers spatially oriented, evidence‐based management alternatives.
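A participatory MCA of the kind PF-SDSS uses typically reduces to a weighted linear combination of normalized criterion layers. The sketch below illustrates that scoring step; the criterion names and weights are hypothetical placeholders, not the system's actual configuration.

```python
def mca_score(criteria, weights):
    """Weighted linear combination of normalized (0-1) criterion values,
    the scoring rule commonly used in multi-criteria analysis (MCA)."""
    if abs(sum(weights) - 1.0) > 1e-9:
        raise ValueError("weights must sum to 1")
    return sum(c * w for c, w in zip(criteria, weights))

# Hypothetical soil-erosion-risk criteria for one map cell:
# slope, burn severity, soil erodibility (all normalized to 0-1)
cell = [0.8, 0.6, 0.4]
weights = [0.5, 0.3, 0.2]   # participatory weights (placeholders)
score = mca_score(cell, weights)   # 0.8*0.5 + 0.6*0.3 + 0.4*0.2 = 0.66
```

Sensitivity analysis as described in the abstract then amounts to perturbing `weights` (e.g. by ±20%) and checking how much the resulting priority ranking changes.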
- New
- Research Article
- 10.46586/uasc.2026.009
- Feb 3, 2026
- Proceedings of the Microarchitecture Security Conference
- Jeremy Boy + 4 more
As quantum computing advances, Post-Quantum Cryptography (PQC) schemes are adopted to replace classical algorithms. Among them is the Stateless Hash-Based Digital Signature Algorithm (SLH-DSA) that was recently standardized by NIST and is favored for its conservative security basis. In this work, we present the first software-only universal forgery attack on SLH-DSA, leveraging Rowhammer-induced bit flips to corrupt the internal state and forge signatures. While prior work targeted embedded systems and required physical access, our attack is software-only, targeting commodity desktop and server hardware, significantly broadening the threat model. We demonstrate full end-to-end attacks against SLH-DSA in OpenSSL 3.5.1, achieving universal forgery for the SHAKE-128f (deterministic), SHA2-128s, and SHAKE-192f (randomized) parameter sets after one hour (deterministic) or eight hours (randomized) of hammering and post-processing ranging from minutes to an hour, and showing theoretical attack complexities for most parameter sets. Our post-processing is informed by a novel complexity analysis that, given a concrete set of faulty signatures, identifies the most promising computational path to pursue. To enable the attack, we introduce Swage, a modular and extensible framework for implementing end-to-end Rowhammer-based fault attacks. Swage abstracts and automates key components of practical Rowhammer attacks. Unlike prior tooling, Swage is untangled from the attacked code, making it reusable and suitable for frictionless analysis of different targets. Our findings highlight that even theoretically sound PQC schemes can fail under real-world conditions, underscoring the need for additional implementation hardening or hardware defenses against Rowhammer.
- New
- Research Article
- 10.1016/j.enzmictec.2025.110776
- Feb 1, 2026
- Enzyme and microbial technology
- Saloni Samant + 3 more
Non-native genetic configuration of Gluconobacter oxydans dehydrogenases drives 2-keto-L-gulonic acid production in recombinant Escherichia coli.
- New
- Research Article
- 10.1016/j.infsof.2025.107973
- Feb 1, 2026
- Information and Software Technology
- Tao Zheng + 5 more
SELink: A semantic-enhanced modular framework for issue–commit link recovery
- New
- Research Article
- 10.62970/ijirct.v12.i1.2601019
- Jan 28, 2026
- International Journal of Innovative Research and Creative Technology
- Sujay Kanungo
Traffic simulation plays a vital role in the development and testing of internetworking systems, particularly as the demands on network performance and reliability increase. This paper explores the inherent challenges associated with traffic simulation in the context of internetworking, emphasizing the necessity of enabling integration tests to ensure robust system performance. We identify key obstacles such as varying traffic patterns, complexity in modeling real-world scenarios, and the integration of diverse technologies. Furthermore, we discuss the importance of creating a modular simulation framework that allows for adaptive testing environments and effective validation of network protocols. By examining existing methodologies and proposing innovative solutions, this study aims to enhance the efficacy of traffic simulations and facilitate more reliable integration testing processes in networked systems.
- New
- Research Article
- 10.3390/app16031335
- Jan 28, 2026
- Applied Sciences
- Mahmoud Nasr + 3 more
Medical image denoising is crucial for enhancing the diagnostic accuracy of CT and MRI images. This paper presents a modular hybrid framework that combines multiscale decomposition techniques (Empirical Mode Decomposition, Variational Mode Decomposition, Bidimensional EMD, and Multivariate EMD) with curvelet transform thresholding and traditional spatial filters. The methodology was assessed using a phantom dataset containing regulated Rician noise, clinical CT images rebuilt with sharp (B50f) and medium (B46f) kernels, and MRI scans obtained at various GRAPPA acceleration factors. In phantom trials, MEMD–Curvelet attained the highest SSIM (0.964) and PSNR (28.35 dB), while preserving commendable perceptual scores (NIQE approximately 7.55, BRISQUE around 38.8). In CT images, VMD–Curvelet and MEMD–Curvelet consistently outperformed classical filters, achieving SSIM values over 0.95 and PSNR values above 28 dB, even with sharp-kernel reconstructions. In MRI datasets, MEMD–Curvelet and BEMD–Curvelet reduced perceptual distortion, decreasing NIQE by up to 15% and BRISQUE by 20% compared to Gaussian and median filtering. Deep learning baselines validated the framework’s competitiveness: BM3D attained high fidelity but necessitated 6.65 s per slice, while DnCNN delivered equivalent SSIM (0.958) with a diminished runtime of 2.33 s. The results indicate that the proposed framework excels at noise reduction and structure preservation across various imaging settings, surpassing independent filtering and transform-only methods. Its versatility and efficiency underscore its potential for therapeutic integration in situations necessitating high-quality denoising under limited acquisition conditions.
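For reference, the PSNR values reported above follow the standard definition; a minimal computation from mean squared error, assuming 8-bit images with a peak value of 255, looks like this (illustrative only, not the paper's evaluation code):

```python
import math

def psnr(mse, max_val=255.0):
    """Peak signal-to-noise ratio in dB from mean squared error."""
    return 10.0 * math.log10(max_val ** 2 / mse)

# For 8-bit images, an MSE of about 95 corresponds to roughly 28.35 dB,
# the level reported for MEMD-Curvelet in the phantom trials
example = psnr(95.0)
```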
- New
- Research Article
- 10.3390/electronics15030519
- Jan 26, 2026
- Electronics
- Soohwan Lee + 1 more
Deep reinforcement learning (DRL) has been widely adopted to solve decision-making problems in complex environments, demonstrating high performance across various domains. However, DRL-based FPS agents are typically trained with a traditional, monolithic policy that integrates heterogeneous functionalities into a single network. This design hinders policy interpretability and severely limits structural flexibility, since even minor design changes in the action space often necessitate complete retraining of the entire network. These constraints are particularly problematic in game development, where behavioral characteristics are distinct and design updates are frequent. To address these issues, this study proposes a Modular Reinforcement Learning (MRL) framework. Unlike monolithic approaches, this framework decomposes complex agent behaviors into semantically distinct action modules, such as movement and attack, which are optimized in parallel with specialized reward structures. Each module learns a policy specialized for its own behavioral characteristics, and the final agent behavior is obtained by combining the outputs of these modules. This modular design enhances structural flexibility by allowing selective modification and retraining of specific functions, thereby reducing the inefficiency associated with retraining a monolithic policy. Experimental results on the 1-vs-1 training map show that the proposed modular agent achieves a maximum win rate of 83.4% against a traditional monolithic policy agent, demonstrating superior in-game performance. In addition, the retraining time required for modifying specific behaviors is reduced by up to 30%, confirming improved efficiency for development environments that require iterative behavioral updates.
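The composition step described above, in which separate movement and attack policies each produce a sub-action that is combined into one joint agent action, can be sketched as follows. The module names and the trivial lambda policies are placeholders standing in for trained networks, not the paper's architecture.

```python
class ActionModule:
    """One behavior-specific policy (e.g. movement or attack),
    trained in parallel with its own specialized reward structure."""
    def __init__(self, policy):
        self.policy = policy

    def act(self, observation):
        return self.policy(observation)

def composite_action(modules, observation):
    """Combine the sub-actions of all modules into the agent's joint action."""
    return {name: module.act(observation) for name, module in modules.items()}

# Placeholder policies standing in for trained networks
agent = {
    "movement": ActionModule(lambda obs: "strafe_left"),
    "attack": ActionModule(lambda obs: "fire"),
}
action = composite_action(agent, observation=None)
# {'movement': 'strafe_left', 'attack': 'fire'}
```

The structural-flexibility claim follows directly: swapping or retraining `agent["attack"]` leaves the movement module untouched, whereas a monolithic policy would require retraining the whole network.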
- New
- Research Article
- 10.3390/en19030588
- Jan 23, 2026
- Energies
- Haihang Chen + 2 more
This study investigates the incorporation of a standardised flexibility protocol within a physics-based model to enable controllable demand-side flexibility in residential energy systems. A heating subsystem is developed using MATLAB/Simulink and Simscape, serving as a testbed for protocol-driven control within a Multi-Energy System (MES). A conventional thermostat controller is first established, followed by the implementation of an OpenADR event engine in Stateflow. Simulations conducted under consistent boundary conditions reveal that protocol-enabled control enhances system performance in several respects. It maintains a more stable and pronounced indoor–outdoor temperature differential, thereby improving thermal comfort. It also reduces fuel consumption by curtailing or shifting heat output during demand-response events, while remaining within acceptable comfort limits. Additionally, it improves operational stability by dampening high-frequency fluctuations in the fuel mass flow rate (mdot_fuel). The resulting co-simulation pipeline offers a modular and reproducible framework for analysing the propagation of grid-level signals to device-level actions. The research contributes a simulation-ready architecture that couples standardised demand-response signalling with a physics-based MES model, alongside quantitative evidence that protocol-compliant actuation can deliver comfort-preserving flexibility in residential heating. The framework is readily extensible to other energy assets, such as cooling systems, electric vehicle charging, and combined heat and power (CHP), and is adaptable to additional protocols, thereby supporting future cross-vector investigations into digitally enabled energy flexibility.
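The control idea, a thermostat whose effective setpoint is lowered during a demand-response event so heat output is curtailed or shifted while comfort stays acceptable, can be sketched like this. The offsets and deadband are illustrative values, not the paper's Stateflow implementation or the OpenADR message format.

```python
def heater_on(indoor_temp, setpoint, dr_event_active,
              curtail_offset=2.0, deadband=0.5):
    """Return True if the heater should run. During a demand-response (DR)
    event the effective setpoint drops by curtail_offset degrees, shedding
    or shifting heat output while staying near the comfort band."""
    effective_setpoint = setpoint - (curtail_offset if dr_event_active else 0.0)
    return indoor_temp < effective_setpoint - deadband

# Normal operation: heat at 18 C against a 21 C setpoint
assert heater_on(18.0, 21.0, dr_event_active=False)
# During a DR event the same 19 C reading no longer triggers heating
assert not heater_on(19.0, 21.0, dr_event_active=True)
```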
- New
- Research Article
- 10.1038/s41598-026-36310-x
- Jan 21, 2026
- Scientific reports
- Deniz Karanfil + 1 more
The development of accurate digital models (DMs) for physical systems requires virtual representations that faithfully capture the underlying physics of the system or equipment being represented. Physics-based DMs provide reliable predictions only when accurate mathematical models of physical systems exist. When such models are incomplete or uncertain, experimental calibration can significantly improve model fidelity. However, in industries where systems or equipment exist in multiple sizes or configurations, performing experimental calibration for each variant can be prohibitively expensive and time-consuming. To address this challenge, this paper introduces a novel methodology and modular computational framework that leverages machine learning (ML) and dimensional analysis (DA) to enable scaling of DMs. The proposed approach allows calibration to be performed on a single representative system, with results scaled to other system sizes, whether from full-scale to reduced-scale prototypes or vice versa. Traditional applications of DA in this context often encounter difficulties due to distorted scaling factors. This work resolves these challenges by developing a consistent scaling framework tailored for DMs. The methodology is demonstrated by a case study in which a calibrated DM of a wheel loader is scaled between an industrial-size system and a miniaturized laboratory system.
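Dimensional-analysis scaling of the kind described rests on multiplying each quantity by the similarity ratios raised to its dimensional exponents. The generic sketch below uses only length/mass/time exponents and ignores the distorted-scaling corrections that are the paper's actual contribution.

```python
def scale_quantity(value, dims, ratios):
    """Scale a quantity between model and prototype under similarity.
    dims: exponents of (length, mass, time), e.g. velocity = (1, 0, -1).
    ratios: the corresponding scale factors (lambda_L, lambda_M, lambda_T)."""
    factor = 1.0
    for exponent, ratio in zip(dims, ratios):
        factor *= ratio ** exponent
    return value * factor

# Example: a velocity is unchanged when length and time scale by the same
# factor, while a force (dims 1, 1, -2) scales by 2 * 8 / 2**2 = 4
v_full = scale_quantity(5.0, (1, 0, -1), (2.0, 8.0, 2.0))   # -> 5.0
```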
- New
- Research Article
- 10.21203/rs.3.rs-8566379/v1
- Jan 16, 2026
- Research Square
- Shi Lin + 12 more
Tissue clearing has transformed volumetric imaging by improving optical access to thick tissues, yet most existing protocols remain rigid and fail to accommodate the biochemical diversity of different samples. Here, we introduce a question-oriented modular framework that enables flexible assembly of clearing and imaging pipelines tailored to specific biological objectives. By systematically optimizing each module to preserve endogenous fluorescence, antigenicity, and tissue integrity while achieving high transparency and protein retention, we demonstrate its use across diverse cardiac applications—development, infarction and regeneration, as well as immune and vascular mapping—showing that different questions require distinct module assemblies and parameters. Coupled with light-field microscopy (LFM), the workflow efficiently captures submillimeter sections with 30–100-fold smaller datasets and rapid computational reconstruction, enabling high-throughput quantitative volumetric analysis. Together, this modular framework and imaging integration provide a rational and practical foundation for adaptable and interoperable 3D analyses of regenerative, pathological, and comparative systems across vertebrate models.
- Research Article
- 10.1038/s41746-025-02327-1
- Jan 14, 2026
- NPJ digital medicine
- Elias Stenhede + 2 more
Billions of clinical ECGs exist only as paper scans, making them unusable for modern automated diagnostics. We introduce a fully automated, modular framework that converts scanned or photographed ECGs into digital signals, suitable for both clinical and research applications. The framework is validated on 37,191 ECG images with 1596 collected at Akershus University Hospital, where the algorithm obtains a mean signal-to-noise ratio of 19.65 dB on scanned papers with common artifacts. It is further evaluated on the Emory Paper Digitization ECG Dataset, comprising 35,595 images, including images with perspective distortion, wrinkles, and stains. The model improves on the state-of-the-art in all subcategories. The full software is released as open-source, promoting reproducibility and further development. We hope the software will contribute to unlocking retrospective ECG archives and democratize access to AI-driven diagnostics.
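The signal-to-noise figure quoted above uses the usual dB definition. Comparing a digitized trace against a reference signal, it can be computed as follows (a sketch, not the paper's evaluation code):

```python
import math

def snr_db(reference, estimate):
    """SNR in dB: reference signal power over reconstruction-error power."""
    signal_power = sum(x * x for x in reference)
    noise_power = sum((x - y) ** 2 for x, y in zip(reference, estimate))
    return 10.0 * math.log10(signal_power / noise_power)
```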
- Research Article
- 10.1177/17248035251392741
- Jan 11, 2026
- Intelligenza Artificiale
- Abeer Dyoub + 5 more
How to avoid unethical practices such as bias, manipulation, and causing harm, and instead build ethical machines, has been a topic of investigation and discussion within the Artificial Intelligence (AI) field for more than two decades. If there were clear rules to follow, AI would long ago have demonstrated how to avoid unethical practices. We suggest that such moral norms could perhaps emerge from experience, by ethical reasoning about particular situations in different domains, and evolve over time. In this work, we review the field of machine ethics over the last two decades or so, discussing the challenges and outlining possible future directions and potential developments. Building on these insights, we propose a modular framework for deriving practical ethical rules for AI systems from experience, enabling a more transparent and adaptive approach to moral decision making in AI. The potential of the framework is illustrated by means of a couple of case studies in the medical domain.
- Research Article
- 10.1093/nargab/lqaf206
- Jan 10, 2026
- NAR Genomics and Bioinformatics
- Grace Potter + 6 more
Omics Notebook Interactive (OmNI) is an R-based, open-source, and modular framework engineered for streamlined multi-omics data integration and analysis across diverse data types, incorporating interactive visualizations at each processing step. OmNI performs differential expression analysis utilizing customizable linear models, accommodating various covariates and complex experimental designs. For cross-omic layer integration, OmNI employs a modified S-score statistic, ensuring sensitive detection of differential features. The framework also integrates network and metabolomics data, offering detailed insights into regulatory mechanisms through comprehensive enrichment analysis using multiple pathway databases. Outputs include interactive HTML reports, CSV/TSV files, and Cytoscape-compatible objects. OmNI is readily deployable in both local and high-performance computing environments, enabling scalable data processing. Acknowledging the public health concerns of opioids, we performed TMT18-based deep proteome and phosphoproteome analysis of brains from genetically diverse Collaborative Cross/Diversity Outbred (CC/DO) founder mouse strains exposed to fentanyl to demonstrate OmNI’s capabilities. The integrative S-score uniquely identified differential signaling and interaction hubs conserved across all strains and revealed strain-specific molecular neuro-responses to fentanyl. OmNI is freely available for download at https://github.com/gracerhpotter/OmNI and is also accessible via a web interface at https://emili-laboratory.shinyapps.io/omni/.
- Research Article
- 10.1093/nar/gkaf1472
- Jan 8, 2026
- Nucleic Acids Research
- Tianze Wang + 3 more
Precise modeling of transcriptional regulation is essential for the rational design of genetic circuits in synthetic biology. Current computational approaches for predicting transcriptional activity (ITX) typically lack mechanistic clarity, composability, and scalability, and require extensive training data. Here, we present a modular thermodynamic modeling framework that explicitly parameterizes molecular interactions among promoters, RNA polymerase (RNAP) and transcription factors (TFs). Implemented as the computational platform T-Pro, this approach provides robust interpretability, scalability, and predictive power. Experimental validation across three distinct bacteria—Escherichia coli, Bacillus subtilis, and Corynebacterium glutamicum—demonstrates substantial improvements (up to 20-fold) in a composite transcriptional performance metric (Fmax*FC), achieved within only three Design–Build–Test–Learn cycles and fewer than five genetic constructs in total. Furthermore, we validate the framework by engineering a multispecies bacterial communication circuit, highlighting its broad utility and generalizability. The principles and tools developed here thus enable efficient, rational optimization of transcriptional regulation across diverse prokaryotic hosts.
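Thermodynamic models of this family (in the spirit of standard statistical-mechanical treatments of bacterial transcription) predict activity from the equilibrium probability that RNAP occupies the promoter. The sketch below, with a single activator and a cooperativity term omega, illustrates the parameterization style only; it is not T-Pro's actual model, and all values are hypothetical.

```python
def rnap_occupancy(P, Kp, A=0.0, Ka=1.0, omega=1.0):
    """Equilibrium probability that RNAP occupies the promoter.
    P, A: effective RNAP and activator concentrations; Kp, Ka: their
    dissociation constants; omega > 1 models cooperative recruitment
    of RNAP by the bound activator."""
    p, a = P / Kp, A / Ka
    return (p + omega * a * p) / (1.0 + p + a + omega * a * p)

baseline = rnap_occupancy(1.0, 1.0)                        # 0.5, no activator
activated = rnap_occupancy(1.0, 1.0, A=1.0, omega=10.0)    # ~0.846, recruited
```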
- Research Article
- 10.3390/jimaging12010037
- Jan 8, 2026
- Journal of Imaging
- Lian Xie + 2 more
Underwater images frequently suffer from color casts, low illumination, and blur due to wavelength-dependent absorption and scattering. We present a practical two-stage, modular, and degradation-aware framework designed for real-time enhancement, prioritizing deployability on edge devices. Stage I employs a lightweight CNN to classify inputs into three dominant degradation classes (color cast, low light, blur) with 91.85% accuracy on an EUVP subset. Stage II applies three scene-specific lightweight enhancement pipelines and fuses their outputs using two alternative learnable modules: a global Linear Fusion and a LiteUNetFusion (spatially adaptive weighting with optional residual correction). Compared to the three single-scene optimizers (average PSNR = 19.0 dB; mean UCIQE ≈ 0.597; mean UIQM ≈ 2.07), the Linear Fusion improves PSNR by +2.6 dB on average and yields roughly +20.7% in UCIQE and +21.0% in UIQM, while maintaining low latency (~90 ms per 640 × 480 frame on an Intel i5-13400F (Intel Corporation, Santa Clara, CA, USA)). The LiteUNetFusion further refines results: it raises PSNR by +1.5 dB over the Linear model (23.1 vs. 21.6 dB), brings modest perceptual gains (UCIQE from 0.72 to 0.74, UIQM 2.5 to 2.8) at a runtime of ≈125 ms per 640 × 480 frame, and better preserves local texture and color consistency in mixed-degradation scenes. We release implementation details for reproducibility and discuss limitations (e.g., occasional blur/noise amplification and domain generalization) together with future directions.
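At inference time, a global linear fusion of the three scene-specific pipeline outputs reduces to a per-pixel weighted sum, as in the sketch below. The flat pixel lists and the weights are placeholders; the actual module learns its weights, and the LiteUNetFusion variant makes them spatially adaptive.

```python
def linear_fusion(candidates, weights):
    """Fuse pixel-aligned candidate enhancements by a global weighted sum.
    candidates: list of images as flat pixel lists; weights: one scalar each."""
    total = sum(weights)
    norm = [w / total for w in weights]  # normalize so weights sum to 1
    return [sum(w * image[i] for w, image in zip(norm, candidates))
            for i in range(len(candidates[0]))]

# Three hypothetical pipeline outputs for a 4-pixel strip
color_cast = [0.2, 0.4, 0.6, 0.8]
low_light = [0.3, 0.5, 0.7, 0.9]
deblur = [0.1, 0.3, 0.5, 0.7]
fused = linear_fusion([color_cast, low_light, deblur], [0.5, 0.3, 0.2])
```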
- Research Article
- 10.1080/0305215x.2025.2605560
- Jan 8, 2026
- Engineering Optimization
- L F B Carvalho + 1 more
As topology optimization problems generate solutions with complex shapes, additive manufacturing processes could be used to fabricate those structures. Topology optimization problems by density methods were here explored as applied to structures fabricated by the fused deposition modelling method. Those structures show orthotropic behaviour where their stiffness depends on the orientation in which the component is printed. To evaluate the effects of those properties in optimization solutions, problems consisting of structural compliance minimization under material volume and buckling constraints, using the linear buckling approach, were investigated. This optimization model was implemented using the OpenMDAO modular framework, simplifying the integration of the required analysis and possible future extensions. Case studies were investigated considering different printing orientations, showing how the optimization solutions responded to different orthotropic conditions and also critical buckling requirements. Aside from the distinct geometric features present in those solutions, differences in minimum compliance values and eigenvalues separation were highlighted.
- Research Article
- 10.1038/s41598-025-29257-y
- Jan 7, 2026
- Scientific Reports
- Laura Chastagnier + 10 more
The translation of tissue engineering toward clinically relevant large-scale biofabrication requires continuous and non-invasive monitoring of tissue maturation. However, few studies provide an integrated and operational demonstration of how such tracking can be effectively achieved in real bioreactor environments. Here, we propose and experimentally validate a modular analytical framework that integrates physicochemical, metabolic, morphological, and perfusion monitoring strategies designed for centimeter-scale engineered tissues cultivated under perfusion. A custom perfusion bioreactor system was developed for the cultivation of 10 cm3 bioprinted fibroblast tissues, featuring real-time online monitoring of the physicochemical environment—i.e., temperature, pH, and O2 content—thanks to dedicated probes, and metabolic assessment using Raman spectroscopy. Dual-gas PID (Proportional, Integral, Derivative) regulation improved oxygen control accuracy, with deviations reduced from 128% to 22%. Our online Raman probe was implemented to quantify lactic acid secretion as a first proof of concept for monitoring secreted metabolites, with a prediction error of 0.103 g L−1. Additionally, tissue morphological evolution was non-destructively tracked by 7 Tesla MRI. This allowed us to measure, for the first time, the percentage of geometrical fidelity to the biofabrication-designed CAD model during tissue cultivation, which in our case was 87.6%, and to reveal internal tissue remodelling. Nutritive fluid perfusion, mapped either by CFD simulation or real measurements through MRI velocimetry, confirmed heterogeneous flow patterns and internal distribution. Altogether, these results demonstrate that combining established analytical modalities within a unified workflow enables quantitative, real-time characterisation of tissue maturation. 
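The dual-gas PID regulation mentioned above rests on the textbook control law; a minimal discrete-time PID sketch follows, with hypothetical gains and time step rather than the bioreactor's tuned controller.

```python
class PID:
    """Discrete-time PID controller: u = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# e.g. driving dissolved O2 toward a 21% setpoint (hypothetical gains)
controller = PID(kp=2.0, ki=0.1, kd=0.0)
gas_command = controller.step(setpoint=21.0, measurement=18.0, dt=1.0)
```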
This approach bridges classical bioprocess monitoring with emerging tissue biofabrication workflows, paving the way for adaptive, feedback-driven control of tissue cultivation.
- Research Article
- 10.3390/s26010323
- Jan 4, 2026
- Sensors (Basel, Switzerland)
- Alejandro Martinez Guillermo + 3 more
This work presents the development of an Artificial Intelligence (AI)-based pipeline for patient-specific three-dimensional (3D) reconstruction from oncological magnetic resonance imaging (MRI), leveraging image-derived information to enhance the analysis process. These developments were carried out within the framework of Cella Medical Solutions, forming part of a broader initiative to improve and optimize the company’s medical-image processing pipeline. The system integrates automatic MRI sequence classification using a ResNet-based architecture and segmentation of anatomical structures with a modular nnU-Net v2 framework. The classification stage achieved over 90% accuracy and showed improved segmentation performance over prior state-of-the-art pipelines, particularly for contrast-sensitive anatomies such as the hepatic vasculature and pancreas, where dedicated vascular networks showed Dice score differences of approximately 20–22%, and for musculoskeletal structures, where the model outperformed specialized networks in several elements. In terms of computational efficiency, the complete processing of a full MRI case, including sequence classification and segmentation, required approximately four minutes on the target hardware. The integration of sequence-aware information allows for a more comprehensive understanding of MRI signals, leading to more accurate delineations than approaches without such differentiation. From a clinical perspective, the proposed method has the potential to be integrated into surgical planning workflows. The segmentation outputs were converted into a patient-specific 3D model, which was subsequently integrated into Cella’s surgical planner as a proof of concept. 
This process illustrates the transition from voxel-wise anatomical labels to a fully navigable 3D reconstruction, representing a step toward more robust and personalized AI-driven medical-image analysis workflows that leverage sequence-aware information for enhanced clinical utility.
- Research Article
- 10.1002/jssc.70351
- Jan 1, 2026
- Journal of Separation Science
- Jan Leppert + 2 more
Comprehensive two‐dimensional gas chromatography (GC×GC) offers exceptional separation performance, but method development remains time‐consuming and sensitive to numerous system parameters. In this study, we present a modular simulation framework for GC×GC systems with thermal modulation, implemented in the open‐source Julia package GasChromatographySystems.jl. The simulation is based on a graph‐based abstraction of the GC system and models solute migration through column and modulator modules using previously established retention models. A simplified but effective model for thermal modulation enables the generation of realistic two‐dimensional retention times and peak widths. Simulation results were validated against experimental measurements from a GC×GC‐ToF‐MS system using different modulation periods and temperature programs. Systematic deviations between simulated and measured retention times could be explained and corrected by adjusting parameters such as the actual modulation period and modulator shift. The final model achieved a root mean squared error (RMSE) below 15 s (less than 1%) for first‐dimension retention times and 55 ms (8%) for the second dimension. Peak width predictions were less accurate, with deviations of up to 3 s (40%) in the first dimension and up to 40 ms (60%) in the second. This modular and adaptable simulation framework provides a robust foundation for future applications in automated method development and system diagnostics in multidimensional gas chromatography.
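The RMSE figures above follow the usual definition; for completeness, a generic sketch (not the package's validation code, and with hypothetical retention times):

```python
import math

def rmse(predicted, measured):
    """Root mean squared error between simulated and measured values."""
    n = len(predicted)
    return math.sqrt(sum((p - m) ** 2 for p, m in zip(predicted, measured)) / n)

# e.g. simulated vs. measured first-dimension retention times (s), hypothetical
error = rmse([120.0, 240.0, 360.0], [118.0, 243.0, 358.0])  # ~2.38 s
```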