Articles published on Constrained optimization
5526 search results, sorted by recency
- New
- Research Article
- 10.1016/j.renene.2025.124064
- Jan 1, 2026
- Renewable Energy
- Cong Xu + 1 more
Coordinated dispatch of electric, thermal, and hydrogen vectors in renewable-enriched microgrids using constrained Harris hawks optimization under uncertainty
- New
- Research Article
- 10.1016/j.swevo.2025.102246
- Jan 1, 2026
- Swarm and Evolutionary Computation
- Andrejaana Andova + 3 more
A methodology for multi-label algorithm selection in constrained multiobjective optimization
- New
- Research Article
- 10.1016/j.ejor.2025.07.002
- Jan 1, 2026
- European Journal of Operational Research
- Fan Yu + 2 more
Decision space dynamic niching-based method for constrained multiobjective evolutionary optimization
- New
- Research Article
- 10.1016/j.najef.2025.102568
- Jan 1, 2026
- The North American Journal of Economics and Finance
- Vasileios Gkonis + 2 more
Constrained portfolio optimization via Artificial Gorilla Troops: Benchmarking against swarm-intelligence metaheuristic algorithms
- New
- Research Article
- 10.1016/j.eswa.2025.128830
- Jan 1, 2026
- Expert Systems with Applications
- Dezheng Zhang + 4 more
A multi-population evolutionary algorithm based on constraint grouping for constrained multiobjective optimization problems
- New
- Research Article
- 10.1016/j.jics.2025.102377
- Jan 1, 2026
- Journal of the Indian Chemical Society
- Congxue Tian
Thermodynamic and economic constrained optimization of TiO2 leaching from acid-decomposed ilmenite via response surface methodology
- New
- Research Article
- 10.1016/j.automatica.2025.112575
- Jan 1, 2026
- Automatica
- Yi Huang + 3 more
Distributed stochastic constrained optimization with constant step-sizes via saddle-point dynamics
- New
- Research Article
- 10.1016/j.apacoust.2025.111058
- Jan 1, 2026
- Applied Acoustics
- Yang Zhao + 3 more
Constrained optimization of acoustic contrast for personal sound zones based on array effort control
- New
- Research Article
- 10.1039/d5mh01389e
- Jan 1, 2026
- Materials Horizons
- Cheng-Ti Hu + 9 more
Developing sustainable, high-performance elastomers for tire applications has become a growing priority for the chemical industry, driven by environmental mandates and the functional demands of modern transportation. In response, additive engineering is increasingly employed to replace conventional silane coupling agents (SCAs), which raise environmental concerns and constrain optimization of the rolling resistance (RR)–wet grip (WG) trade-off. A central challenge in this domain lies in elucidating how interfacial modifiers reconfigure filler architecture and influence macroscopic properties. In this study, we introduce a novel small-angle X-ray scattering (SAXS)-guided analytical framework that integrates a mass-fractal model with a gel-like network model to resolve the hierarchical three-tiered structure of poly(ethylene glycol) (PEG)-modified, silica-filled tire compounds. This hybrid model enables the quantitative extraction of cluster radius and, critically, the contribution of occluded rubber domains, a morphological feature often suggested visually but seldom structurally characterized. In contrast to a widely used SCA, which enhances filler dispersion via covalent silica-rubber linkages, PEG induces greater filler aggregation and occluded rubber formation through hydrogen bonding, while simultaneously promoting interfacial slippage under dynamic strain. These coexisting mesoscale features, quantified via SAXS and directly linked to dynamic mechanical properties, result in a 40% reduction in RR, a 14% enhancement in WG, and 81% higher stiffness relative to the SCA-modified system. This mechanistic breakthrough diverges from conventional dispersion-centric frameworks and establishes PEG as a viable SCA-free alternative. More broadly, this work demonstrates a transferable, structure-informed strategy for the design of next-generation high-performance, environmentally friendly rubber nanocomposites.
- New
- Research Article
- 10.22266/ijies2025.1231.36
- Dec 31, 2025
- International Journal of Intelligent Engineering and Systems
Black-breasted Lapwing Algorithm (BBLA): A Novel Nature-inspired Metaheuristic for Solving Constrained Engineering Optimization
- New
- Research Article
- 10.1088/1361-6501/ae3253
- Dec 31, 2025
- Measurement Science and Technology
- Hao Zhao + 5 more
In high-precision equipment such as launch vehicles and aero-engines, the stress-free assembly of rigid tubes is a critical factor in ensuring operational reliability; however, the prevalence of manufacturing and positioning deviations has rendered this a persistent and unresolved challenge within the industry. While existing research primarily focuses on the derivation of theoretical pipeline parameters, it largely overlooks the critical influence of end face machining parameters on the assembly workflow. This oversight necessitates iterative trimming of tube ends during actual production, severely constraining assembly efficiency. Consequently, this significant research gap at the practical implementation level has yet to receive sufficient attention. To address this issue, this paper proposes a novel flexible assembly method governed by multi-objective constraints. The method begins by employing laser scanning to capture the assembly environment and determine installation boundaries, enabling the adaptive modeling of tube geometry. On-site machining is then performed to mitigate dimensional uncertainties caused by environmental variation. A multi-objective optimization model is developed to determine cutting parameters, incorporating three critical constraints: tube length, horseshoe port misalignment, and weld surface perpendicularity. These are formulated within a nonlinear constrained optimization framework to achieve one-time end-face machining. The proposed method is validated through physical model experiments and field trials on rocket tubes. The physical model experimental results demonstrate an average assembly gap within 0.2 mm and an average misshapen edge within 0.035 mm. Furthermore, the rocket field trial confirms that the optimized parameters achieve successful one-time assembly, strictly satisfying stress-free requirements and significantly enhancing overall efficiency.
- New
- Research Article
- 10.22266/ijies2025.1231.21
- Dec 31, 2025
- International Journal of Intelligent Engineering and Systems
Carpenter Optimization Algorithm: A Human-inspired Metaheuristic for Robust and Efficient Constrained Optimization
- New
- Research Article
- 10.1002/mp.70262
- Dec 31, 2025
- Medical Physics
- Xin Tong + 7 more
Lattice radiotherapy (LATTICE) is a technique of spatially fractionated radiation therapy (SFRT) that delivers high radiation doses to specific regions (vertices) within a large tumor, forming a spatially modulated "lattice" pattern, while surrounding areas receive lower doses to minimize damage to healthy tissues. Although the original conception of LATTICE did not prescribe any rigorous symmetry, and early clinical implementations relied on manual vertex placement tailored to tumor shape and anatomical constraints, more recent automated approaches have introduced regular patterns such as simple cubic or hexagonal arrangements. These rigid configurations, while convenient, may reduce the flexibility needed to accommodate irregular tumor geometries and nearby critical structures, potentially resulting in unintended hotspots or under-treatment. Optimizing the placement of vertices in LATTICE is beneficial for precisely targeting high-dose regions within the tumor while minimizing radiation exposure to adjacent healthy tissue, but there is still no optimization method available for solving the positions of fully flexibly placed vertices. The central challenge in such optimization lies in handling the constraints on the relative positions between different vertices. This work aims to develop a new treatment planning method for LATTICE with fully flexible placement of vertices and simultaneous optimization of the position of each lattice vertex and dose, to improve overall plan quality compared with conventional LATTICE planning methods relying on manual regular placements of lattice vertices. The proposed method simultaneously optimizes each lattice vertex position and other plan optimization variables (proton spot weights or photon fluences) during the dose optimization process. This is formulated as a new constrained optimization problem by adding each lattice vertex position to the optimization variables, with constraints chosen to meet the requirements of the LATTICE vertex placement guideline on 1) the center-to-center distance between lattice vertices and 2) the distance of lattice vertices to the target boundaries. The optimization problem is solved by the alternating direction method of multipliers and iterative convex relaxation methods. Plans generated using our proposed method (NEW) were compared with conventional LATTICE plans for two representative patient cases presented in the main manuscript: one abdominal and one lung tumor. To maintain brevity, two additional patient cases are included in the Supporting Information to further demonstrate the performance and generalizability of the proposed method. For each case, we generated 100 LATTICE plans with varying vertex positions. From these, three plans, termed WORST, MID, and BEST, were selected based on the largest, median, and smallest total optimization objective value F, respectively. All LATTICE plans optimized with the NEW method showed results comparable to, or better than, the BEST plans. For example, for photon LATTICE abdomen plans, the values of F were 1.92 (NEW), 2.79 (WORST), 2.27 (MID), and 1.96 (BEST), representing a 31.1% improvement from WORST to NEW; the PVDR values were 5.88 (NEW), 3.00 (WORST), 4.33 (MID), and 5.16 (BEST), representing 96.0% and 14.0% improvements of NEW relative to WORST and BEST, respectively. A new LATTICE treatment planning approach is introduced, in which lattice positions are fully flexible and optimized simultaneously with dose distribution, leading to improved target PVDR and OAR sparing compared to conventional LATTICE methods with regularly spaced vertices.
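The alternating direction method of multipliers named in the abstract follows a generic splitting pattern that can be sketched on a much simpler problem. The instance below (box-constrained least squares, with A, b, rho, and the iteration count chosen arbitrarily for illustration) shows only that pattern, not the paper's actual LATTICE formulation:

```python
import numpy as np

def admm_box_ls(A, b, rho=1.0, iters=300, lo=0.0, hi=1.0):
    """Box-constrained least squares via ADMM (illustrative sketch only).

    Splits min 0.5*||Ax - b||^2 s.t. lo <= x <= hi into f(x) + g(z)
    with the coupling constraint x = z, where g is the indicator of the
    box, then alternates a closed-form x-update, a projection z-update,
    and a scaled dual update.
    """
    n = A.shape[1]
    AtA = A.T @ A
    Atb = A.T @ b
    lhs = AtA + rho * np.eye(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(lhs, Atb + rho * (z - u))  # smooth subproblem
        z = np.clip(x + u, lo, hi)                      # project onto the box
        u += x - z                                      # scaled dual ascent
    return z

A = np.array([[1.0, 1.0], [0.0, 1.0]])
b = np.array([1.0, -1.0])
x_star = admm_box_ls(A, b)   # converges to [1, 0] for this instance
```

Each iteration costs one small linear solve plus a projection; the paper's actual problem additionally couples vertex positions with dose variables and uses iterative convex relaxation on top of this scheme.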
- New
- Research Article
- 10.3390/s26010148
- Dec 25, 2025
- Sensors (Basel, Switzerland)
- Chunyu Yang + 6 more
Sixth-generation (6G) wireless systems aim to integrate terrestrial, aerial, and satellite networks to support large-scale remote sensing and service delivery. In such non-terrestrial networks (NTNs), channels change quickly and the multi-tier architecture is heterogeneous, which makes real-time channel state acquisition and cooperative resource scheduling difficult. This paper proposes an FMA-MADDPG framework that combines a channel prediction module with a constraint-based multi-agent deep deterministic policy gradient scheme. The Fusion of Mamba and Attention (FMA) predictor uses a Mamba state-space backbone and a multi-head self-attention block to learn both long-term channel evolution and short-term fluctuations, and forecasts future CSI. The predicted channel information is added to the agents’ observations so that scheduling decisions can take expected channel variations into account. A constraint-based reward is also designed, with explicit performance thresholds and anti-idle penalties, to encourage fairness, avoid free-riding, and promote cooperation among heterogeneous agents. In a representative NTN uplink scenario, the proposed method achieves higher total reward, efficiency, load balance, and cooperation than several DRL baselines, with relative gains around 10–20% on key metrics. These results indicate that prediction-aware cooperative reinforcement learning is a useful approach for resource optimization in future 6G NTN systems.
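A constraint-based reward of the kind described (explicit performance thresholds plus anti-idle penalties) can be sketched generically. The function name, threshold values, and weights below are illustrative assumptions, not the paper's actual reward design:

```python
def shaped_reward(throughput, min_throughput, idle_fraction,
                  base_weight=1.0, threshold_penalty=5.0, idle_penalty=2.0):
    """Hypothetical constraint-based reward for one scheduling agent.

    Rewards raw throughput, subtracts a fixed penalty when the agent
    misses its performance threshold, and penalizes idle time so that
    no agent free-rides on its neighbors' transmissions.
    """
    reward = base_weight * throughput
    if throughput < min_throughput:          # explicit performance threshold
        reward -= threshold_penalty
    reward -= idle_penalty * idle_fraction   # anti-idle term
    return reward
```

In a MADDPG-style setup, each agent would receive this shaped signal on top of any shared team reward, which is one common way to encourage both fairness and cooperation among heterogeneous agents.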
- New
- Research Article
- 10.22399/ijcesen.4555
- Dec 24, 2025
- International Journal of Computational and Experimental Science and Engineering
- Sivaramakrishnan Vaidyanathan
ML systems in production have to address many challenges while ensuring consistency between the features used in the training and serving phases. Feature Stores (FS) have emerged as one of the key ML infrastructure components to bridge the training and serving gaps. There are tradeoffs between different types of FS, such as latency, consistency guarantees, costs, and operational complexity. Organizations often do not have formal governance frameworks for governing Machine Learning pipelines. One example of the issues that can arise from insufficient frameworks is training-serving skew, whereby feature statistics differ between environments. This leads to challenges in ensuring regulatory compliance and in tracing the lineage of features for model auditability and reproducibility. This paper presents a two-part formal model that enables mathematical optimization and structured governance. The first half frames the FS selection process as a constrained optimization problem so that the performance of dual-database architectures can be quantitatively compared to that of unified architectures based on business priorities. The second half introduces Versioned Feature Descriptors, canonical metadata artifacts for the permanent storage of feature definitions, complete lineage from raw data to prediction outputs, and fully machine-enforceable compliance policies. The optimization framework models serving latency, consistency gap, capital expense, and operational complexity for dual-database systems (one for online and another for offline workloads) and for unified systems (which house both workloads). The governance model prevents training-serving skew through runtime validation, ensuring that features input to a deployed model come from the desired descriptor version. Privacy and retention requirements are enforced by formal policy predicates, with the review process showing improvements in operational cost, debugging, audit, and regulatory compliance efforts.
The framework formalizes Feature Store architecture evaluation, transforming decision-making from heuristic judgment to a systematic, quantitative evaluation approach for scalable and compliant machine learning adoption.
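The selection problem the first half describes, choosing between dual-database and unified architectures under latency and consistency constraints with business-priority weights, can be sketched as a feasibility filter followed by a weighted-cost comparison. All attribute names, weights, and figures below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Architecture:
    name: str
    serving_latency_ms: float   # online read latency
    consistency_gap_s: float    # offline/online sync lag
    capital_cost: float         # normalized capital expense
    ops_complexity: float       # normalized operational burden

def select_architecture(candidates, max_latency_ms, max_gap_s, weights):
    """Pick the feasible candidate with the lowest weighted cost."""
    feasible = [a for a in candidates
                if a.serving_latency_ms <= max_latency_ms
                and a.consistency_gap_s <= max_gap_s]
    if not feasible:
        raise ValueError("no architecture satisfies the constraints")
    def cost(a):
        return (weights["latency"] * a.serving_latency_ms
                + weights["gap"] * a.consistency_gap_s
                + weights["capex"] * a.capital_cost
                + weights["ops"] * a.ops_complexity)
    return min(feasible, key=cost)

dual = Architecture("dual-database", 5.0, 60.0, 2.0, 2.0)
unified = Architecture("unified", 15.0, 0.0, 1.0, 1.0)
best = select_architecture([dual, unified], max_latency_ms=20.0,
                           max_gap_s=120.0,
                           weights={"latency": 1.0, "gap": 0.05,
                                    "capex": 1.0, "ops": 1.0})
```

With these weights the latency-critical dual-database option wins; raising the "ops" or "capex" weights would tip the decision toward the unified system, which is the kind of business-priority sensitivity the paper's framework is meant to expose.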
- New
- Research Article
- 10.3390/automation7010002
- Dec 23, 2025
- Automation
- Nitish Katal + 2 more
Quantitative Feedback Theory (QFT) enables the control system to guarantee stability and performance in the presence of plant uncertainty, thus offering a quantitative and less conservative framework for designing robust yet practical controllers. The presented work investigates a single-stage constrained-optimization-based approach for synthesizing controllers for ship roll stabilization. The typical QFT loop shaping is a manual two-stage procedure that demands a proficient understanding of loop-shaping principles on Nichols charts. The proposed procedure simplifies the QFT synthesis process by introducing a single-stage method that allows for concurrent synthesis of both the QFT controller and pre-filter. The present work considers the synthesis of fractional-order controllers (using the FOMCON toolbox). The proposed method also enables the designer to pre-specify the controller architecture at the beginning of the design procedure. A comparative analysis with the controllers obtained using the QFT toolbox, Ziegler–Nichols, H∞, IMC, and MPC has also been presented in the work. The implementation has been carried out for ship roll stabilization, one of the critical problems in marine engineering, as it directly impacts vessel safety, operational efficiency, and passenger comfort, and excessive roll can lead to reduced propulsion efficiency. The obtained results highlight that the proposed controller performs better than the benchmark controllers, and Monte Carlo simulations have also been included to support the results.
- New
- Research Article
- 10.21595/vp.2025.25484
- Dec 22, 2025
- Vibroengineering Procedia
- Viktor Tokai + 4 more
This paper presents a kinematic synthesis of a groove-type disk cam that directly drives sliders in a novel internal-combustion engine architecture. The synthesis is formulated in an invariant (normalized) space and enforces zero acceleration at phase boundaries while embedding a quasi-constant-velocity segment in the mid-portion of the compression (retraction) phase. An arbitrary shaping function is introduced to generate a family of admissible motion laws; a constrained optimization (series truncated to four terms) minimizes the peak acceleration under a prescribed bound on velocity, yielding a PLM with a quasi-constant-velocity interval of approximately 39 % of the kinematic cycle (±5 %). The synthesized retraction law is paired with a sinusoidal approach (power) law to ensure zero endpoint accelerations for both phases. Cam profiles for the working and return strokes are constructed; maximum pressure angles remain within admissible limits across examined phase splits, including an experimental 65°/25° case. Compared with the sinusoidal baseline, the synthesized law retains a similar acceleration constant but reduces the velocity constant by approximately 31 %, indicating lower inertial loading and milder end-conditions that are favorable for mixture preparation and bearing lubrication. The results provide a compact, implementable route to motion programming for cam-driven reciprocators in internal-combustion engines and establish feasibility for multi-cylinder layouts.
- New
- Research Article
- 10.3390/a19010011
- Dec 22, 2025
- Algorithms
- Nikolaos P Theodorakatos + 2 more
In constrained nonlinear optimization, we aim to achieve two goals: one is to minimize the objective function, and the other is to satisfy the constraints. A common way to balance these competing targets is to use penalty functions. Suppose that an algorithm generates a descent direction and produces a step that decreases the objective function value but increases the constraint violation; this phenomenon is known as the Maratos effect. It leads to the rejection of the full step by the non-smooth penalty function; therefore, superlinear convergence is not preserved. This work leverages a piecewise convexity model to solve the optimal PMU placement problem. A quadratic objective function is minimized subject to a non-convex equality constraint within the box constraints [0, 1] × [0, 1] ⊂ ℝ². The initial non-convex region is reconsidered as a union of piecewise line segments. This decomposition enables algorithms to converge to a local optimum while preserving superlinear convergence near the solution. An analytical solution is presented using the Karush–Kuhn–Tucker conditions. First- and second-order optimality conditions are applied to find the local minimum. We show how the Maratos effect is avoided by adopting the piecewise convexity, without needing a non-smooth penalty function, second-order corrections, or watchdog methods. Simulations demonstrate that the algorithms partially search the space along the line segments, avoiding zig-zag trajectories, and reach (0, 1) or (1, 0), where both feasibility and optimality are satisfied at once.
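The piecewise decomposition the abstract describes can be illustrated on a minimal instance of this form (a toy of my own construction, not necessarily the paper's exact model): minimize x² + y² subject to (1 − x)(1 − y) = 0 on [0, 1] × [0, 1]. The non-convex feasible set is exactly the union of the segments x = 1 and y = 1, so each convex piece has a closed-form minimizer and the better of the two is kept:

```python
def min_quadratic_on_interval(a, b, lo, hi):
    """Minimizer of a*t^2 + b*t with a > 0 on [lo, hi]: clamp the vertex."""
    t = -b / (2.0 * a)
    return min(max(t, lo), hi)

def solve_piecewise():
    # Segment 1: x = 1  ->  minimize 1 + y^2 over y in [0, 1]
    y1 = min_quadratic_on_interval(1.0, 0.0, 0.0, 1.0)
    f1 = 1.0 + y1**2
    # Segment 2: y = 1  ->  minimize x^2 + 1 over x in [0, 1]
    x2 = min_quadratic_on_interval(1.0, 0.0, 0.0, 1.0)
    f2 = x2**2 + 1.0
    # Keep the better piecewise minimizer (ties broken toward segment 1)
    return ((1.0, y1), f1) if f1 <= f2 else ((x2, 1.0), f2)

point, value = solve_piecewise()   # -> ((1.0, 0.0), 1.0)
```

Because each segment is a one-dimensional convex quadratic, a solver restricted to a segment faces no non-convexity at all, which is the intuition behind avoiding the Maratos effect without penalty functions or second-order corrections.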
- New
- Research Article
- 10.4314/saaj.v25i1.11c
- Dec 22, 2025
- South African Actuarial Journal
- M Malwandla + 2 more
We present the Resource Allocation Transformer, a deep learning framework that learns portfolio-level relationships directly through attention mechanisms, capturing how assets work together rather than only how they perform individually. Most machine learning approaches to portfolio construction reduce allocation to aggregating independent asset predictions, overlooking the complementarity between assets that drives optimal portfolios. Unlike traditional predict-then-optimise pipelines, or Economic Scenario Generators that separate the modelling of economic variables from the optimisation step, the Resource Allocation Transformer integrates correlation structure and optimisation logic within a single differentiable architecture. The framework learns constraint-satisfying allocations through self-supervised exposure to synthetic optimisation problems, providing a more stable alternative to sequential prediction-optimisation workflows. Empirical validation shows effective transfer learning from synthetic curricula to real Johannesburg Stock Exchange data (2005–2024), with the same trained model handling portfolios of varying sizes and across market regimes without retraining. By directly learning to allocate, the Resource Allocation Transformer establishes a new paradigm for asset allocation that adapts through experience rather than requiring problem-specific recalibration.
- Research Article
- 10.1080/19475705.2025.2601822
- Dec 15, 2025
- Geomatics, Natural Hazards and Risk
- Yang Liu + 3 more
Climate change exacerbates geospatial vulnerability, rendering populations more susceptible to environmental disasters. Vulnerable groups face heightened disaster risks, necessitating targeted strategies. This study proposes a multi-group constrained system optimization (MGCSO) framework with equity as its objective, integrating vulnerability assessment with evacuation planning. Firstly, based on online comment analysis, it employs natural language processing (NLP) techniques to extract public perceptions and assess vulnerability disparities among different groups. Subsequently, an enhanced MGCSO algorithm, integrated with GIS, generates customised evacuation plans based on each group's ‘tolerance’. To validate the framework's efficacy, a case study was conducted in Yucheng District, Yaan City, China. Compared to conventional methods, this framework significantly improves evacuation efficiency for vulnerable groups, reducing total evacuation time by 23%. This study provides governments with an evidence-based vulnerability assessment methodology and decision support. It not only enhances disaster resilience but also specifically supports the effective assistance of vulnerable groups during flood emergencies.