- New
- Research Article
- 10.3390/computation14040085
- Apr 3, 2026
- Computation
- Gowrava Shenoy Beloor + 9 more
The use of computational fluid dynamics (CFD) to study hemodynamics in arteries offers significant potential for addressing complex flow problems. Owing to advances in hardware and software performance, CFD has become an important approach for studying hemodynamics in human arteries. It is used to investigate hemodynamics, forecast risk factors for the development and progression of atherosclerotic lesions, and analyze the local flow fields and flow profiles that result from geometric changes. This foundational study will aid in analyzing blood flow behavior through the abdominal aorta and the origins and courses of the renal arteries, as well as in investigating the causes of disorders such as atherosclerosis and hypertension. The current study investigates three idealized abdominal aorta–renal artery junction models under varying blood pressure settings. Materialise software V19 was used to extract the geometry data and create idealized 3D abdominal aorta–renal branching models. Unsteady flow simulations were performed in ANSYS Fluent with rigid walls, using both Newtonian and Carreau–Yasuda viscosity models. The oscillatory shear index (OSI) and time-averaged wall shear stress (TAWSS) were computed to improve understanding of atherosclerotic plaque formation and progression. The effect of geometric change at the bifurcation was also explored, and this location was found to produce considerable vortex-forming zones. A marked velocity reduction and backflow development were observed, reducing the shear stress. The findings indicate that areas within the bifurcation region with low TAWSS (<0.4 Pa) and high OSI (>0.15) are more susceptible to atherosclerosis development.
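The two indicators named above have standard integral definitions over one cardiac cycle of period T: TAWSS = (1/T)∫|τ| dt and OSI = 0.5·(1 − |∫τ dt| / ∫|τ| dt). A minimal sketch of how they can be computed from a sampled wall-shear-stress vector signal — function and variable names are illustrative, not part of the paper's ANSYS Fluent workflow:

```python
import math

def tawss_osi(wss_vectors, dt):
    """Compute TAWSS and OSI from a time series of 3-component wall
    shear stress vectors (Pa) sampled at interval dt over one cycle.
    TAWSS = (1/T) * integral |tau| dt
    OSI   = 0.5 * (1 - |integral tau dt| / integral |tau| dt)"""
    T = dt * len(wss_vectors)
    # componentwise time integral of the vector WSS
    int_vec = [sum(v[i] for v in wss_vectors) * dt for i in range(3)]
    # time integral of the WSS magnitude
    int_mag = sum(math.sqrt(v[0]**2 + v[1]**2 + v[2]**2)
                  for v in wss_vectors) * dt
    tawss = int_mag / T
    mag_of_int = math.sqrt(sum(c * c for c in int_vec))
    osi = 0.5 * (1.0 - mag_of_int / int_mag) if int_mag > 0 else 0.0
    return tawss, osi

# A purely oscillatory WSS reverses direction every half cycle,
# so the OSI approaches its maximum of 0.5.
wss = [(math.sin(2 * math.pi * k / 100), 0.0, 0.0) for k in range(100)]
tawss, osi = tawss_osi(wss, dt=0.01)
```

Under this convention, the study's risk criterion corresponds to flagging surface regions where `tawss < 0.4` and `osi > 0.15`.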
- Research Article
- 10.3390/computation14040079
- Mar 31, 2026
- Computation
- Myriam E Bruno + 2 more
Rayleigh–Bénard convection (RBC) provides a benchmark for studying buoyancy-driven instabilities and heat transport in confined fluids. Heat transfer scaling in cylindrical geometries is well established, whereas the role of the anisotropy induced by the domain geometry, such as elliptical shapes, has not been fully explored. This study presents direct numerical simulations of RBC in two domains of equal height, H=0.0124 m, and different cross-sections: a circular cylinder with radius R=3.11×10−3 m and an elliptical cylinder with semi-axes Rmax=3.11×10−3 m and Rmin=1.55×10−3 m. The simulations, performed at Rayleigh number Ra=2×106 and Prandtl number Pr=1.68 (for water) under the Boussinesq approximation, reveal that (i) the average Nusselt number is comparable in both cases (⟨Nu⟩≈38.23 for the circular case and ⟨Nu⟩≈39.22 for the elliptical one) and (ii) the different domain geometries influence the thermal transport mechanism and flow organization. Specifically, in the cylindrical cell, heat transfer is regulated by a large-scale circulation roll, whereas in the elliptical cell the domain is populated by thermal plumes that drive the convective dynamics. The latter phenomenon is evidenced by larger Nusselt number fluctuations at the lower and upper plates, with a standard deviation increasing from σ≈2.21 in the circular cylinder to σ≈4.57 in the elliptical domain. These results highlight that geometric anisotropy modifies the coupling between the boundary layers and the core flow dynamics, leading to enhanced intermittency without affecting the magnitude of the heat flux. The elliptical domain is therefore suitable for applications requiring enhanced mixing.
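For reference, the two control parameters quoted above are the standard non-dimensional groups Ra = gβΔT·H³/(νκ) and Pr = ν/κ. A small sketch of their evaluation; the fluid properties below are placeholder assumptions (only H = 0.0124 m is taken from the abstract), so the resulting Ra does not reproduce the paper's Ra = 2×10⁶:

```python
def rayleigh_prandtl(g, beta, dT, H, nu, kappa):
    """Non-dimensional control parameters of Rayleigh-Benard convection:
    Ra = g * beta * dT * H**3 / (nu * kappa)  (buoyancy vs. diffusion)
    Pr = nu / kappa                           (momentum vs. thermal diffusivity)"""
    Ra = g * beta * dT * H ** 3 / (nu * kappa)
    Pr = nu / kappa
    return Ra, Pr

# Placeholder water-like properties; illustrative values only.
Ra, Pr = rayleigh_prandtl(g=9.81, beta=3e-4, dT=1.0, H=0.0124,
                          nu=4.0e-7, kappa=2.4e-7)
```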
- Research Article
- 10.3390/computation14040076
- Mar 25, 2026
- Computation
- Marta Torres-Polo + 1 more
Construction and demolition waste (CDW) represents a critical environmental challenge in the building sector, with global generation exceeding 3.57 billion tonnes annually. The circular economy (CE) framework offers a transformative pathway through selective deconstruction and material recovery, yet implementation faces significant barriers including information asymmetry, supply chain fragmentation, and regulatory uncertainty. This study conducts a systematic literature review using the Context–Mechanism–Outcome (CMO) framework to analyze how computational methods, specifically Digital Twins (DT), Building Information Modeling (BIM), Internet of Things (IoT), blockchain, artificial intelligence, and robotics, act as enablers for resilience in CDW management. Following PRISMA 2020 guidelines and realist synthesis principles, we analyzed 42 high-quality empirical studies from Web of Science and Scopus (2015–2025). Our analysis identifies seven primary mechanisms: traceability (M1), simulation (M2), classification (M3), tracking (M4), collaboration (M5), analytics (M6) and robotics (M7). These mechanisms interact with four critical contexts (information asymmetry, supply chain fragmentation, economic uncertainty, operational risks) to generate outcomes at two levels: resilience capabilities (visibility, monitoring, collaboration, flexibility, anticipation) and performance indicators (recovery rates, cost reduction, CO2 emissions mitigation, occupational safety). Key findings from the CMO analysis reveal that blockchain-enabled traceability increases material recovery rates by 15–25%, DT simulation reduces deconstruction costs by 20–30%, and computer vision automation improves sorting accuracy to 85–95%. 
The study contributes middle-range theories explaining how digital technologies enable circular transitions under specific contextual conditions, offering actionable strategic implications for researchers, project managers, technology developers, and policymakers committed to advancing computational economics in sustainable construction.
- Research Article
- 10.3390/computation14030074
- Mar 20, 2026
- Computation
- Andrei I Malinouski + 2 more
Calculating heat transfer in granular materials is an important task for many applications, from thermal management in electronics to exploring celestial soils. Usually, an effective thermal-conductivity model is employed to predict heat flux in unstructured granular media such as a packed bed. However, a more advanced approach, the discrete element method (DEM), can capture the complex effects of mechanical loading and material mixtures on thermal transport coefficients, which traditional models struggle with. Pivotal for this approach is knowing the heat transfer coefficient between two adjacent particles. Currently, most DEM-capable software considers only particles in direct surface contact to have non-zero heat conduction. We propose also considering particles that are close to each other but do not share a contact area of non-zero size. We perform numerical modeling of the conductive heat transfer coefficient between equal spherical particles separated by a fluid medium, assuming the fluid’s thermal conductivity is at least an order of magnitude lower than that of the particles. We use numerical solutions of the governing differential equations to account for both the thermal resistance within the particles and that of the gap between them. We found a simple generalized correlation for the heat transfer coefficient between particles and a general formula for the angular distribution of heat flux density across the particle surface. By employing a non-dimensional approach, the obtained formulas are constructed from non-dimensional parameters: the ratio of the particle’s thermal conductivity to that of the medium, and the ratio of the gap width between particles to their radius. The resulting formula is simple and convenient for DEM heat transfer calculations in packed and fluidized beds.
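In a DEM thermal solver, the pairwise coefficient discussed above enters a per-timestep heat exchange Q = H·(T_i − T_j) between neighboring particles. A minimal sketch of such an explicit update with a user-supplied conductance per pair — the paper's actual correlation for H between near-contact particles is given in the article itself, and all names below are illustrative:

```python
def dem_conduction_step(temps, pairs, H, heat_capacity, dt):
    """One explicit DEM conduction step: for each particle pair (i, j)
    with pairwise conductance H (W/K), exchange heat
    Q = H * (T_i - T_j) * dt and update the temperatures.
    heat_capacity is the lumped m*c_p per particle (J/K), assumed
    equal for all particles in this simplified sketch."""
    dT = [0.0] * len(temps)
    for (i, j), h in zip(pairs, H):
        q = h * (temps[i] - temps[j]) * dt   # heat flowing from i to j (J)
        dT[i] -= q / heat_capacity
        dT[j] += q / heat_capacity
    return [t + d for t, d in zip(temps, dT)]

# Two particles, one pair: energy is conserved and the
# temperatures relax toward the common mean.
temps = dem_conduction_step([300.0, 400.0], [(0, 1)], [0.5],
                            heat_capacity=10.0, dt=1.0)
```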
- Research Article
- 10.3390/computation14030072
- Mar 19, 2026
- Computation
- Pedro Aguilar-Encarnacion + 3 more
The management of municipal solid waste in intermediate cities exhibits high daily variability and source heterogeneity, which hinders operational sizing and material recovery. Reliable predictions are required from heterogeneous and often scarce data, yet studies that compare multiple machine learning algorithms with temporal validation on short time series in intermediate cities remain limited. This study compares fourteen machine learning algorithms for predicting the daily generation of organic and inorganic waste in La Joya de los Sachas, Ecuador, formulating the task as a multi-output regression problem. An adapted CRISP-DM design was employed, using primary data from a waste characterization campaign, temporal feature engineering, variable encoding, and an expanding-window backtesting protocol against lag-7 persistence and ARIMA baselines. Tree-based ensembles achieved the best performance: AdaBoost provided the best organic forecasts (R2=0.985, RMSE=0.081, MAE=0.061 in rate space), while Random Forest was best for inorganic waste (R2=0.965, RMSE=0.049, MAE=0.040). Linear models were stable but slightly inferior, and other approaches (SVR, KNN, MLP, Lasso, ElasticNet) showed lower generalization capacity. The study provides a multi-output regression protocol with temporal validation for municipal contexts with short time series, comparative evidence across fourteen algorithms, and a conversion from rates to kilograms for operational use.
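The lag-7 persistence baseline used in the backtesting protocol can be sketched as follows. The expanding window is implicit here because this "model" only ever looks one week back; the setup and names are illustrative, not the paper's code:

```python
import math

def lag7_persistence_backtest(series, start):
    """Backtest the lag-7 persistence baseline on a daily series:
    at each step t >= start, predict series[t] as series[t - 7]
    (the value observed one week earlier) and accumulate the errors.
    Returns (RMSE, MAE) over all test days."""
    errs = []
    for t in range(start, len(series)):
        pred = series[t - 7]          # same weekday, previous week
        errs.append(pred - series[t])
    rmse = math.sqrt(sum(e * e for e in errs) / len(errs))
    mae = sum(abs(e) for e in errs) / len(errs)
    return rmse, mae

# A perfectly weekly-periodic series is predicted exactly by
# lag-7 persistence, so both errors vanish.
weekly = [float(d % 7) for d in range(28)]
rmse, mae = lag7_persistence_backtest(weekly, start=7)
```

Real generation data departs from pure weekly periodicity, which is where the learned models (AdaBoost, Random Forest, etc.) earn their advantage over this baseline.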
- Research Article
- 10.3390/computation14030070
- Mar 15, 2026
- Computation
- Roaa Soloh + 2 more
Precise segmentation of brain tumors from multimodal MRI scans is essential for accurate neuro-oncological diagnosis and treatment planning. To address this challenge, we propose a label-free optimization-driven segmentation framework based on the α-expansion graph cut algorithm, offering improved computational efficiency and interpretability compared to deep learning alternatives. The method relies on structured optimization and handcrafted features, including local intensity patches, entropy-based texture descriptors, and statistical moments, to compute voxel-wise unary potentials via gradient-boosted decision trees (XGBoost). These are integrated with spatially adaptive pairwise terms within a graph model optimized through α-expansion. Evaluation on 146 BraTS validation volumes demonstrates reliable whole-tumor overlap, with a mean Dice score of 0.855 ± 0.184 and a 95% Hausdorff distance of 18.66 mm. Bootstrap analysis confirms the statistical stability of these results. The low computational overhead and modular design make the method particularly suitable for transparent and resource-constrained clinical deployment scenarios.
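The whole-tumor overlap reported above is the standard Dice coefficient, Dice = 2|A∩B| / (|A| + |B|). A minimal sketch on flattened binary masks — illustrative only, not the evaluation code used in the study:

```python
def dice_score(pred, truth):
    """Dice overlap between two binary masks given as flattened
    0/1 sequences of equal length:
    Dice = 2 * |A intersect B| / (|A| + |B|).
    Returns 1.0 by convention when both masks are empty."""
    inter = sum(p and t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2.0 * inter / size if size else 1.0

# Two voxels agree out of three foreground voxels in each mask:
# Dice = 2*2 / (3+3) = 2/3.
pred  = [1, 1, 1, 0, 0, 0]
truth = [1, 1, 0, 0, 0, 1]
d = dice_score(pred, truth)
```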
- Research Article
- 10.3390/computation14030064
- Mar 3, 2026
- Computation
- Raigul Tuleuova + 7 more
Linear integrators are traditionally used in motion control systems to compensate for static effects and suppress low-frequency disturbances. However, their use is inevitably accompanied by phase delays that limit the performance and robustness of control systems, especially under parametric uncertainty. Nonlinear integrators have therefore been considered for several decades as a promising alternative that can weaken phase constraints and improve the quality of transients. In this paper, the concept of nonlinear integrators is reinterpreted in the context of self-organizing motion control of precision stages. In contrast to traditional approaches focused primarily on frequency analysis and the describing-function method, a method is proposed for synthesizing a self-organizing control system for nonlinear SISO plants based on catastrophe theory, namely in the class of elliptical dynamics with the property of structural stability. The control action is formed so that transitions between stable modes occur through bifurcation-conditioned self-organization, without external switching logic. To provide strict analytical guarantees of stability, the Lyapunov gradient-velocity vector function method is used, which guarantees aperiodic robust stability, suppression of oscillatory and chaotic modes, and monotonic convergence of trajectories under parameter uncertainty. The parameters of the nonlinear integrator are adapted using Self-Organizing Maps (SOM), while any parameter changes are allowed only within regions that satisfy the Lyapunov stability conditions. This approach aligns analytical and data-oriented methods without violating the structural stability of the system. 
The results of numerical experiments demonstrate the superiority of the proposed method over classical linear and adaptive controllers in stage motion control problems, especially near bifurcation boundaries and under significant parametric uncertainty. These results confirm that integrating nonlinear integrators with catastrophe theory and self-organization mechanisms forms a promising basis for a new generation of robust, high-precision motion control systems.
- Research Article
- 10.3390/computation14030057
- Mar 2, 2026
- Computation
- Sara Nassar + 2 more
Projection-based variants of optimal transport, such as the Sliced Wasserstein (SW) and its extensions, have become popular alternatives to classical Wasserstein distances due to their scalability and analytical tractability. However, most of these methods rely on independently sampled random projections, which often fail to capture semantically meaningful directions, leading to inefficiencies and limited expressiveness, especially in high-dimensional settings. In this work, we propose the Hybrid Merging Projection Wasserstein (HW) distance, a novel and efficient alternative that addresses these limitations by combining data-driven and random projections in a principled way. At the core of HW is the Linear Merging Projection (LMP), a new projection technique designed to minimize between-class variance, thereby promoting smooth alignment between distributions. HW incorporates random directions as well to achieve a balance between structural awareness and projection diversity. We evaluate HW across a range of synthetic and real-world benchmarks, including color transfer and distribution alignment tasks, to demonstrate the favorable performance of the proposed HW.
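For context, the classical SW baseline that HW builds on reduces optimal transport to sorted matchings along random one-dimensional projections. A minimal 2-D sketch with i.i.d. random directions — the HW/LMP construction itself is described in the paper; the names below are illustrative:

```python
import math, random

def sliced_wasserstein(X, Y, n_proj=50, p=2, seed=0):
    """Monte-Carlo Sliced Wasserstein distance between two equal-size
    2-D point sets: project both onto random unit directions, then
    compute the 1-D p-Wasserstein distance per slice via sorted-sample
    matching, and average over slices."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_proj):
        a = rng.uniform(0.0, 2.0 * math.pi)
        u = (math.cos(a), math.sin(a))          # random unit direction
        px = sorted(x[0] * u[0] + x[1] * u[1] for x in X)
        py = sorted(y[0] * u[0] + y[1] * u[1] for y in Y)
        total += sum(abs(s - t) ** p for s, t in zip(px, py)) / len(px)
    return (total / n_proj) ** (1.0 / p)

pts = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
shifted = [(x + 1.0, y) for x, y in pts]
d_same = sliced_wasserstein(pts, pts)        # identical sets
d_shift = sliced_wasserstein(pts, shifted)   # translated set
```

The inefficiency HW targets is visible here: each direction `u` is drawn blindly, so many slices carry little discriminative information; LMP replaces part of this budget with data-driven directions.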
- Research Article
- 10.3390/computation14030060
- Mar 2, 2026
- Computation
- Ion Mălăel
With the growth of urban zones and the increasing need for energy, the use of renewable energy solutions in the built environment has become a necessity. Due to their small size and their ability to capture wind from any direction, vertical-axis wind turbines (VAWTs) are an alternative to conventional wind energy generators. However, their use in the built environment faces difficulties due to performance inefficiencies, particularly the intricate aerodynamics of the blades. This work investigates a method for increasing the efficiency of VAWTs by addressing blade-to-blade interactions using Computational Fluid Dynamics simulations, with the aim of improving turbine design for urban locations. The present numerical model employs a uniform inflow to isolate blade–blade interaction mechanisms under controlled conditions. The paper presents a design that minimizes aerodynamic losses, decreases turbulence-induced drag, and increases overall energy capture efficiency by modeling different blade configurations and their interactions. The performance of four asymmetric configurations of blade chord and radius was numerically studied and compared to a symmetric configuration.
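VAWT efficiency comparisons of this kind are conventionally made in terms of the tip-speed ratio λ = ωR/V∞ and the power coefficient Cp = Tω / (½ρAV∞³). A hedged sketch of these generic definitions for an H-rotor with swept area A = 2RH — the numbers are illustrative, not the paper's configurations:

```python
def vawt_performance(torque, omega, rho, radius, height, v_inf):
    """Standard VAWT performance metrics:
    tip-speed ratio  lambda = omega * R / V_inf
    power coefficient Cp = T * omega / (0.5 * rho * A * V_inf**3)
    with swept area A = 2 * R * H for an H-rotor."""
    lam = omega * radius / v_inf
    area = 2.0 * radius * height
    cp = torque * omega / (0.5 * rho * area * v_inf ** 3)
    return lam, cp

# Illustrative operating point (placeholder values).
lam, cp = vawt_performance(torque=12.0, omega=10.0, rho=1.225,
                           radius=1.0, height=2.0, v_inf=8.0)
```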
- Research Article
- 10.3390/computation14030063
- Mar 2, 2026
- Computation
- Himanshu Rana + 1 more
Prediction of fatigue failure in concrete structures remains a major challenge due to progressive material degradation. Reliable prediction therefore requires modeling the 3D heterogeneous microstructure of concrete to explain the underlying mechanisms governing fatigue failure. While such mesoscale models can reliably predict fatigue-induced fracture mechanisms, identifying the associated material parameters remains a significant challenge due to the high-dimensional parameter space the model introduces. The key challenge addressed in this study is to capture microcrack initiation and coalescence under fatigue loading, using a model capable of representing the full fracture process: crack initiation, crack propagation, and final failure. First, the concrete domain is discretized into Voronoi cells, with the cohesive links connecting the cells randomly assigned as aggregate or mortar links, enabling explicit representation of aggregates and mortar. Mortar links are then modeled as coupled damage–plasticity 3D Timoshenko beam elements with nonlinear kinematic hardening and isotropic softening introduced through an embedded-discontinuity formulation, enabling fracture Modes I–III, whereas aggregate links are modeled as elastic 3D Timoshenko beam elements. Model efficiency is further reinforced by a surrogate-model approach, with the corresponding material parameters identified via a multi-objective Bayesian optimization framework to reproduce experimental results. The performance of the proposed model is illustrated by reproducing experimental results from a concrete cube compression test and a three-point bending test under low-cycle fatigue loading, where the errors between experimental and numerical results are reduced by 82% (stress) and 88% (energy) for the cube test and by 86% (force) and 93% (energy) for the bending test, relative to the initial dataset error.