- Research Article
- 10.1093/jcde/qwag025
- Mar 11, 2026
- Journal of Computational Design and Engineering
- Zahid Masood + 3 more
Abstract This work investigates the use of physics-informed geometric operators in tandem with latent variable models to support surrogate learning, dimensionality reduction, and generative design of airfoils. A baseline dataset was constructed via a NURBS-based airfoil parametric model with physically interpretable design variables and discretized using two schemes: uniform parametric and uniform arc-length sampling. A hybrid Variational AutoEncoder (VAE) augmented with convolutional layers was developed and iteratively refined to avoid invalid shapes. A systematic comparison of reconstruction accuracy, robustness, and diversity showed that a loss function based on the mean sum of squared distances performed best and remained sufficiently stable during model optimization. However, this holds only when the reconstruction and Kullback-Leibler terms in the β-VAE objective function are weighted via an appropriately selected β value. Additionally, augmenting geometry with physics-correlated high-level descriptors, such as geometric moments, further improves latent-space quality. Among the tested operators, third-order geometric moments yielded the most consistent robustness gains. Discretization and achieved diversity proved to be linked: uniform arc-length spacing achieved the best reconstruction accuracy but produced many near-identical designs that degraded the resulting diversity, whereas uniform parametric spacing exhibited higher diversity without any special treatment of design distributions or diversity quantification measures. This study consolidates practical guidelines on architecture, loss-function scaling, physics-informed features, and quantification protocols for reliable, data-efficient airfoil generative design with VAEs.
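The β-weighted objective the abstract refers to can be sketched as follows: a mean-sum-of-squared-distances reconstruction term plus a β-weighted closed-form Gaussian KL divergence. This is a minimal numpy sketch of the standard β-VAE loss, not the paper's implementation; the network, data shapes, and the β value shown are illustrative assumptions.

```python
import numpy as np

def beta_vae_loss(x, x_hat, mu, log_var, beta=0.005):
    """beta-VAE objective: mean sum of squared distances between input
    and reconstructed airfoil coordinates, plus a beta-weighted KL
    divergence of the diagonal-Gaussian posterior from N(0, I)."""
    # Reconstruction: sum of squared errors per sample, averaged over the batch.
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    # Closed-form KL( N(mu, diag(exp(log_var))) || N(0, I) ).
    kl = -0.5 * np.mean(np.sum(1 + log_var - mu ** 2 - np.exp(log_var), axis=1))
    return recon + beta * kl
```

Selecting β then amounts to scaling the KL term until reconstruction quality and latent regularity are balanced, which is the weighting the abstract highlights.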
- Research Article
- 10.1093/jcde/qwag020
- Mar 10, 2026
- Journal of Computational Design and Engineering
- Qian Shi + 4 more
Abstract In constrained multi-objective optimization problems (CMOPs), the discontinuity of the objective space and the fragmentation of the feasible region caused by complex constraints confront optimization algorithms with difficult conflicts among convergence, diversity, and feasibility. To this end, this paper proposes HDCMO, a dual-population co-evolution algorithm based on a dynamic Manhattan-Harmony hybrid distance. The algorithm constructs main and auxiliary populations with complementary roles: the main population focuses on deep search in the feasible domain, the auxiliary population conducts global exploration in the infeasible region, and an evolutionary-stage perception mechanism drives differentiated environmental selection. In particular, the proposed dynamic Manhattan-Harmony hybrid distance effectively characterizes the convergence and diversity of individuals and guides the auxiliary population to adopt adaptive selection strategies at different stages. In addition, drawing on the theory of biological potential-energy diffusion, the algorithm designs a dynamic resource-allocation mechanism that combines three types of potential energy (goal orientation, constraint recovery, and structural diversity) to achieve adaptive scheduling of offspring resources. Furthermore, a bidirectional knowledge-transfer channel realizes information sharing and co-evolution between the main and auxiliary populations. Experimental results on 33 standard test functions and 12 real-world problems show that HDCMO outperforms many existing representative constrained multi-objective evolutionary algorithms in terms of convergence, feasibility, and distribution balance, with significant performance advantages and adaptability.
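The abstract does not specify the exact form of the Manhattan-Harmony hybrid distance, so the following is a hypothetical sketch under stated assumptions: convergence measured as the Manhattan (L1) distance to the ideal point, diversity as the harmonic average of the k nearest neighbour distances (a standard diversity estimator), and a stage-dependent weight `w` blending the two. All names and the blending rule are illustrative, not the paper's definition.

```python
import numpy as np

def hybrid_distance(F, w, k=2):
    """Hypothetical hybrid score for an objective matrix F (n x m):
    w * convergence - (1 - w) * diversity, lower is better.
    w in [0, 1] can be scheduled over evolutionary stages."""
    ideal = F.min(axis=0)
    conv = np.sum(np.abs(F - ideal), axis=1)            # Manhattan convergence
    # Pairwise L1 distances between individuals.
    D = np.sum(np.abs(F[:, None, :] - F[None, :, :]), axis=2)
    np.fill_diagonal(D, np.inf)
    knn = np.sort(D, axis=1)[:, :k]
    div = k / np.sum(1.0 / knn, axis=1)                 # harmonic average distance
    return w * conv - (1 - w) * div
```

Raising `w` late in the run shifts selection pressure from spread toward convergence, mirroring the stage-adaptive selection the abstract describes.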
- Research Article
- 10.1093/jcde/qwag019
- Mar 10, 2026
- Journal of Computational Design and Engineering
- Shiwei Chen + 5 more
Abstract In constrained multi-objective optimization (CMOP), effectively exploiting infeasible solutions is essential for global exploration and for accurately approximating the constrained Pareto front (CPF). Nevertheless, when feasible regions are sparse or highly fragmented, many existing methods still suffer from slow feasibility attainment, a high proportion of ineffective evaluations, and inadequate front coverage, leading to premature clustering. Following the principle of “broad exploration first, robust convergence later”, DPNCMO (Novelty-augmented Population-Differentiated Cooperative Multi-objective Optimization) is developed as a cooperative dual-population framework that explicitly decouples exploration from exploitation. The main population is initialized via Latin hypercube sampling and evolved using a genetic algorithm equipped with a feasibility-aware adaptive constraint-relaxation mechanism, which progressively tightens the admissible violation level in response to the evolving feasibility state, thereby steering the search from informative infeasible regions toward accurate CPF refinement. In parallel, an assistant population is randomly initialized and evolved using a DE-based operator with a novelty–crowding synergistic diversity-maintenance mechanism. By constructing a behavior space that integrates objective and constraint information, the mechanism emphasizes novelty-driven selection when feasibility is scarce to enhance coverage and suppress clustering, and then gradually shifts toward crowding-driven exploitation once feasibility becomes sufficient to stabilize convergence and control computational overhead. Collectively, the population-differentiated cooperation, feedback-driven constraint relaxation, and stage-wise novelty-guided selection reduce ineffective evaluations, accelerate feasibility climbing, and improve CPF coverage and robustness on fragmented feasible landscapes. 
Extensive experiments on 55 CMOP benchmark instances and 12 real-world engineering problems demonstrate that DPNCMO achieves superior or at least comparable performance to representative state-of-the-art optimizers across convergence, distribution, and feasibility, with consistent improvements across multiple metrics.
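The feasibility-aware adaptive constraint relaxation described above can be sketched with a common epsilon-level update: solutions with aggregate violation below `eps` are treated as admissible, and `eps` is tightened only once enough of the population qualifies. The update rule, thresholds, and decay factor below are assumptions for illustration, not DPNCMO's exact mechanism.

```python
import numpy as np

def update_epsilon(eps, violations, target_ratio=0.5, decay=0.9):
    """Feasibility-aware relaxation sketch: keep informative infeasible
    solutions early (eps stays loose), tighten eps once the admissible
    fraction of the population reaches target_ratio."""
    admissible = np.mean(violations <= eps)
    if admissible >= target_ratio:
        eps *= decay  # feasibility is plentiful: tighten the level
    return eps
```

Iterating this each generation steers the search from broad exploration of infeasible regions toward refinement near the constrained Pareto front, as in the "broad exploration first, robust convergence later" principle.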
- Research Article
- 10.1093/jcde/qwag016
- Feb 28, 2026
- Journal of Computational Design and Engineering
- Chunshui Wang + 1 more
Abstract Multimodal industrial anomaly detection (IAD), which integrates RGB and 3D information, has become one of the key technical directions for improving detection robustness and accuracy. Although prevailing cross-modal feature-mapping methods are efficient and lightweight, they still suffer from two major limitations. First, they typically adopt a one-way modeling paradigm that regresses one modality from another and lack explicit interaction within a unified representation space, making it difficult to detect local, small-magnitude anomalies that appear only in a single modality. Second, fusion-reconstruction methods derived from this paradigm rely on a single fusion stream optimized with a reconstruction loss. When trained solely on normal samples, this design can overgeneralize and lacks a parallel branch to enforce consistency constraints on the fused representations, which in turn limits reliable discrimination between normal and anomalous patterns in complex multimodal scenarios. To address these issues, we propose FMFR, a feature-level multistage fusion and remapping framework that jointly models multistage feature fusion and cross-modal remapping. The framework consists of a fusion-reconstruction branch and a remapping-fusion branch, which are jointly constrained by a multi-order consistency loss. In the fusion-reconstruction branch, a reconstruction loss supervises the intermediate fusion layers, encouraging them to learn joint representations that retain complete information and to reconstruct features without losing critical details. In the remapping-fusion branch, the network learns bidirectional mappings between modalities and re-fuses the remapped features, while the multi-order consistency loss is used to align its fused representations with those of the fusion-reconstruction branch.
During inference, FMFR jointly leverages intra-modal reconstruction residuals, cross-modal remapping residuals, and the consistency deviation between the fused embeddings of the two branches to construct multi-source anomaly maps. This design forces anomalies to simultaneously violate both intra-modal and cross-modal priors, thereby suppressing the overgeneralization of a single fusion stream and enhancing the visibility of local anomaly structures that exist only in a single modality as well as the overall robustness of anomaly detection. Experimental results on the MVTec 3D-AD dataset demonstrate that FMFR achieves competitive state-of-the-art performance on both anomaly detection and anomaly segmentation tasks.
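The inference-time construction of multi-source anomaly maps can be sketched as a weighted combination of the three residual sources the abstract names. The normalization scheme and equal default weights are assumptions; FMFR's actual scoring function is not given in the abstract.

```python
import numpy as np

def fuse_anomaly_maps(recon_res, remap_res, consist_dev, weights=(1.0, 1.0, 1.0)):
    """Combine intra-modal reconstruction residuals, cross-modal
    remapping residuals, and branch-consistency deviation into one
    per-pixel anomaly map. Each source is min-max normalized so no
    single residual dominates by scale."""
    def norm(m):
        lo, hi = m.min(), m.max()
        return (m - lo) / (hi - lo + 1e-8)
    w1, w2, w3 = weights
    return w1 * norm(recon_res) + w2 * norm(remap_res) + w3 * norm(consist_dev)
```

Because a pixel scores high only if it violates several priors at once, this kind of fusion suppresses spurious responses from any single overgeneralized stream.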
- Research Article
- 10.1093/jcde/qwag015
- Feb 24, 2026
- Journal of Computational Design and Engineering
- Qiaodi Yuan + 1 more
Abstract Supervised super-resolution methods estimate high-resolution simulation results from low-resolution inputs. These methods typically train the neural networks using pairs of high-resolution data and their down-sampled low-resolution counterparts, implicitly assuming the global similarity between the low-resolution and high-resolution models. This assumption, however, often fails as the low-resolution models usually exhibit stiffer behavior than high-resolution models due to numerical stiffening. This paper proposes a novel supervised super-resolution method to mitigate this problem for linear deformable object simulation. The method constructs training data pairs by matching low-resolution and high-resolution simulation snapshots based on the similarity of the normalized strain energy, the normalized temporal change rate of strain energy, and the modal contributions to the overall strain energy. The loss function also incorporates the equation residuals derived from the finite element method. The time integration scheme is selected by examining the eigenvalue distributions of the neural tangent kernel associated with the finite element residuals. Compared with previous down-sampling methods, the proposed method reduces the maximum relative displacement error by 25%, 54% and 38% for the beam, elephant and armadillo models, respectively.
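The snapshot-matching step can be sketched as nearest-neighbour pairing in a descriptor space built from the quantities the abstract lists (normalized strain energy, its temporal change rate, modal energy contributions). The Euclidean metric and the feature layout are assumptions for illustration.

```python
import numpy as np

def match_snapshots(lo_feats, hi_feats):
    """Pair each low-resolution snapshot with the high-resolution
    snapshot whose energy-based descriptor is closest.
    lo_feats: (n_lo, d), hi_feats: (n_hi, d) descriptor matrices."""
    D = np.linalg.norm(lo_feats[:, None, :] - hi_feats[None, :, :], axis=2)
    return np.argmin(D, axis=1)  # matched high-res index per low-res snapshot
```

Matching by energy state rather than by time index avoids pairing a numerically stiffened low-resolution snapshot with a high-resolution snapshot in a physically different regime.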
- Research Article
- 10.1093/jcde/qwag008
- Feb 2, 2026
- Journal of Computational Design and Engineering
- Jaeik Bae + 1 more
Abstract We introduce TDiff-HSI, a diffusion-based model that can generate hyperspectral images (HSIs) directly from RGB images and material-wise segmentation masks. HSI provides both spatial (u, v) and spectral (λ) information. The accompanying dataset that we are releasing spans wavelengths in the range from 420 to 1728 nm, digitized into 512 channels. Directly handling this immense three-dimensional dataset is computationally prohibitive and often leads to numerical errors. To address this challenge, TDiff-HSI leverages Tucker decomposition to reduce dimensionality, enabling more stable and efficient processing. Moreover, spectral precision is enhanced by combining RGB channels with a material segmentation mask. To support this research, we constructed a new dataset using a hyperspectral camera. The dataset comprises 40 014 RGB-HSI pairs across 78 scenes, featuring 12 objects with corresponding polygonal segmentation labels. Experimental evaluation demonstrates that TDiff-HSI achieves state-of-the-art performance verified on the existing dataset. For the new dataset that we are releasing, we establish new benchmarks of MRAE 0.2169, RMSE 0.0192, PSNR 36.46 dB, SAM 0.0424, and SSIM 0.9327. Project and dataset are available at https://github.com/JaeikBae/TDiff-HSI
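A Tucker decomposition of an (H, W, λ) hyperspectral cube can be computed with the standard truncated higher-order SVD; the sketch below shows the technique in numpy and is not TDiff-HSI's implementation (the ranks and tensor shapes are illustrative).

```python
import numpy as np

def unfold(T, mode):
    """Matricize tensor T along the given mode."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T, ranks):
    """Truncated HOSVD: Tucker model T ~ core x1 U0 x2 U1 x3 U2.
    Truncating the spectral-mode rank compresses the lambda axis."""
    factors = []
    for mode, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, mode), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for U in factors:
        # Contract the current leading mode with U; tensordot appends the
        # new axis at the end, so after all modes the order is restored.
        core = np.tensordot(core, U, axes=([0], [0]))
    return core, factors
```

With full ranks the decomposition is exact; reducing the spectral rank gives the dimensionality reduction the abstract describes.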
- Research Article
- 10.1093/jcde/qwag007
- Jan 29, 2026
- Journal of Computational Design and Engineering
- Yongchao Li + 4 more
Abstract The resolution of constrained multiobjective optimization problems involves simultaneously optimizing multiple objectives while adhering to specific constraints. Effective constrained multiobjective optimization solvers must strike a balance between objective optimization and constraint satisfaction. This study introduces a novel approach that compresses the mapping between the objective and constraint spaces to obscure their explicit information. This compression is achieved by normalizing both spaces and projecting them onto an n-dimensional spherical volume. The key advantage of this compression mapping is that it reduces fine-grained information redundancy while retaining the dominant convergence trend toward the ideal region, thereby providing a compact surrogate view for selection under constraints. Although this transformation inevitably causes some information loss, it retains the key property of convergence toward the origin. Based on the mapped space, a selection mechanism combining fast non-dominated sorting and crowding distance is employed to guide the population across infeasible regions and maintain diversity. To further enhance computational efficiency, a dynamic archiving strategy is introduced. This mechanism regulates the number and quality of archived individuals, storing only valuable solutions and comparing offspring with existing archives to prevent redundant evaluations. These components are integrated into a new constrained multiobjective evolutionary algorithm. The algorithm is comprehensively validated on 42 benchmark functions and 12 real-world problems. Experimental results demonstrate that the proposed method achieves superior performance compared to state-of-the-art constrained multiobjective evolutionary algorithms in terms of convergence and computational efficiency.
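The normalize-and-project compression described above can be sketched as follows; the abstract does not give the exact projection, so the radial squashing into the unit ball below is an assumption chosen only to preserve the property the abstract states, namely that relative closeness to the origin (the ideal region) survives the mapping.

```python
import numpy as np

def compress_mapping(F, CV):
    """Sketch: min-max normalize the objective matrix F and the
    constraint-violation matrix CV, concatenate them, and radially
    map each point into the unit ball. The map is monotone in the
    distance to the origin, so the convergence trend is retained."""
    def minmax(M):
        lo, hi = M.min(axis=0), M.max(axis=0)
        return (M - lo) / (hi - lo + 1e-12)
    X = np.hstack([minmax(F), minmax(CV)])
    r = np.linalg.norm(X, axis=1, keepdims=True)
    return X / (1.0 + r)  # strictly inside the unit ball
```

Non-dominated sorting and crowding distance can then be applied in this compressed space, as the abstract's selection mechanism does.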
- Research Article
- 10.1093/jcde/qwag003
- Jan 20, 2026
- Journal of Computational Design and Engineering
- Siyang Chang + 5 more
Abstract The aircraft engine, as the "power core" of an aircraft, has its operational efficiency and state stability directly linked to flight safety and passenger safety. The prediction of remaining useful life (RUL) is crucial for fault prognostics and health management (PHM) of engine systems. However, current RUL prediction methods face two gaps: 1) the average prediction accuracy for engines operating under different conditions and experiencing various damage modes requires improvement; 2) existing methods overlook reducing uncertainty at the level of framework design, which is crucial for industrial decision-making. To bridge these gaps, a multiscale mixed-learning and evaluation prediction method (MMEPTMIXER), along with the Multiscale Fusion Temporal Convolutional Network-Deep Time-series mixer (MFTCN-DTSmixer) integrated within it, is proposed for RUL prediction. MMEPTMIXER combines maximum mean deviation with a Multi-Level Quantile Loss (MLQL) to reduce uncertainty regions and improve the model's understanding of the data. MFTCN-DTSmixer runs MFTCN and TSmixer branches in parallel to enhance local and global temporal feature extraction, improving prediction accuracy and reducing predictive uncertainty. Finally, MMEPTMIXER was applied to the C-MAPSS dataset and compared with 26 state-of-the-art methods, achieving an average root mean square error reduction of 10.16% and an average Score reduction of 28.07% across the four corresponding sub-datasets. Furthermore, the method delivers accurate and robust RUL predictions on the N-CMAPSS dataset. This research provides a robust supplement for RUL prediction of aircraft engines as well as predictive maintenance in multi-sensor systems.
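A multi-level quantile loss is conventionally the pinball loss averaged over several quantile levels; the sketch below shows that standard form (the paper's exact levels and weighting are not stated in the abstract and are assumed here).

```python
import numpy as np

def multi_level_quantile_loss(y_true, y_pred, quantiles=(0.1, 0.5, 0.9)):
    """Pinball loss averaged over quantile levels. y_pred has one
    column per quantile; under-prediction is penalized by q and
    over-prediction by (1 - q), so the columns learn a prediction
    interval around the RUL estimate."""
    total = 0.0
    for j, q in enumerate(quantiles):
        e = y_true - y_pred[:, j]
        total += np.mean(np.maximum(q * e, (q - 1) * e))
    return total / len(quantiles)
```

Training against several quantiles at once is what lets such a framework report an uncertainty region rather than a single point estimate.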
- Research Article
- 10.1093/jcde/qwag004
- Jan 19, 2026
- Journal of Computational Design and Engineering
- Jiho Shim + 3 more
Abstract Thermal interface materials serve as critical components for facilitating heat transfer between cells and the cooling system in electric vehicle battery packs. However, internal porosity introduced during manufacturing can substantially reduce their thermal conductivity. In this study, the internal pore structure of actual thermal interface material samples was characterized using X-ray computed tomography, and porosity was quantitatively determined through a regression-based thresholding method implemented in MATLAB. The resulting pore distributions were incorporated into finite volume-based thermal simulations in Ansys Fluent, enabling the calculation of effective thermal conductivity as a function of porosity ratio. The simulated effective thermal conductivity values closely matched those predicted by the Effective Medium Theory, particularly at low porosity levels. In contrast, conjugate thermal-fluid analyses of electric vehicle battery packs revealed that local temperature increases of up to 1.3°C can occur depending on pore location and distribution. These findings indicate that Effective Medium Theory-based average conductivity models are inadequate for capturing localized thermal hotspots. Consequently, thermal interface material application processes and design strategies should address not only the allowable porosity threshold but also the spatial distribution of pores to ensure robust thermal management in electric vehicle battery systems.
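For reference, one common effective-medium closure for spherical pores in a continuous matrix is the Maxwell-Eucken relation; the abstract does not state which Effective Medium Theory variant was used, so this is an illustrative baseline, not the paper's model.

```python
def maxwell_eucken(k_matrix, k_pore, phi):
    """Maxwell-Eucken effective thermal conductivity for spherical
    inclusions of conductivity k_pore at volume fraction phi in a
    matrix of conductivity k_matrix. For insulating pores
    (k_pore -> 0) this reduces to k_matrix * 2*(1 - phi) / (2 + phi)."""
    num = k_pore + 2 * k_matrix + 2 * phi * (k_pore - k_matrix)
    den = k_pore + 2 * k_matrix - phi * (k_pore - k_matrix)
    return k_matrix * num / den
```

Such closures depend only on the porosity fraction, which is exactly why, as the study finds, they cannot capture hotspots caused by where the pores sit.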
- Research Article
- 10.1093/jcde/qwag001
- Jan 5, 2026
- Journal of Computational Design and Engineering
- Dezhen Wang + 4 more
Abstract Automated polyp segmentation, which aims to accurately delineate polyp regions from colonoscopy images, is a critical task for computer-aided diagnosis in colorectal cancer prevention. Although many deep learning-based models have been proposed for this task, there are still some challenges. First, existing models still suffer performance degradation when confronted with small polyps, blurry boundaries, and cross-dataset testing, indicating limited robustness and generalization. Second, existing approaches predominantly focus on visual features, leaving the potential guidance of textual semantic information largely unexplored. To address these problems, we propose a novel Large-Model Semantics-Guided Network (LMSGNet) that leverages semantic guidance to achieve high-precision polyp segmentation. Specifically, we introduce semantic prompts encoded by Contrastive Language-Image Pre-Training (CLIP), employ a Multi-level Memory Router (MMR) to dynamically select relevant semantics, and incorporate a Cross-modal Attention (CMA) mechanism to enable bidirectional interactions between visual and semantic features, thereby enhancing global semantic consistency. In addition, we design a Semantic-Edge Guided Block (SEGB) combined with Multi-scale Edge Features (MSEF) to refine ambiguous boundaries and small targets, yielding synergistic global-local enhancement. Extensive experiments on five public datasets demonstrate that our model consistently outperforms existing state-of-the-art models. Two ablation studies respectively demonstrate the effectiveness of different modules and the contribution of semantic guidance.
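A cross-modal attention step of the kind described, visual tokens attending over CLIP-encoded semantic prompts, can be sketched with single-head scaled dot-product attention; shapes, the single-head form, and a shared embedding dimension are assumptions, not the exact CMA module of LMSGNet.

```python
import numpy as np

def cross_modal_attention(visual, semantic):
    """Visual tokens (n_v, d) attend over semantic tokens (n_s, d);
    the output mixes semantic features into each visual position."""
    d = visual.shape[1]
    logits = visual @ semantic.T / np.sqrt(d)       # (n_v, n_s)
    logits -= logits.max(axis=1, keepdims=True)     # numerical stability
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)         # softmax over semantics
    return attn @ semantic                          # (n_v, d)
```

Running the same operation with the roles swapped gives the bidirectional visual-semantic interaction the abstract describes.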