Published in the last 50 years
Articles published on Neighborhood Search
- New
- Research Article
- 10.3390/drones9110767
- Nov 6, 2025
- Drones
- Shangjie Li + 4 more
Existing research has made significant strides in UAV formation control, particularly in active localization and certain passive methods. However, these approaches face substantial limitations in electromagnetically silent environments, often relying on strong assumptions such as fully known and stationary emitter positions. To overcome these challenges, this paper proposes a comprehensive framework for bearing-only passive localization and adjustment of UAV formations under strict electromagnetic silence constraints. We systematically develop three core models: (1) a geometric triangulation model for scenarios with three known emitters, enabling unique target positioning; (2) a hierarchical identification mechanism leveraging an angle database to resolve label ambiguity when some emitters are unknown; and (3) a cyclic cooperative strategy, Perceive-Explore-Judge-Execute (PEJE), optimized via an improved genetic algorithm with adaptive discrete neighborhood search (GA-IADNS), for dynamic formation adjustment. Extensive simulations demonstrate that our proposed methods exhibit strong robustness, rapid convergence, and high adjustment accuracy across varying initial deviations. Specifically, after adjustment, the maximum radial deviation of all UAVs from the desired position is less than 0.0001 m, and the maximum angular deviation is within 0.00013°; even for the 30%R initial deviation scenario, the final positional error remains negligible. Furthermore, comparative experiments with a standard Genetic Algorithm (GA) confirm that GA-IADNS achieves superior performance: it reaches stable peak average fitness at the 6th generation (vs. no obvious convergence of GA even after 20 generations), reduces the convergence time by over 70%, and improves the final adjustment accuracy by more than 95% relative to GA. These results significantly enhance the autonomous collaborative control capability of UAV formations in challenging electromagnetic conditions.
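The triangulation model in (1) amounts to intersecting bearing lines from known emitters. A minimal sketch, assuming the receiver measures absolute bearings to each emitter (the paper's formulation may instead use inter-emitter angles; the function name is ours):

```python
import numpy as np

def triangulate(emitters, bearings):
    """Least-squares intersection of bearing lines.

    Each measured bearing t constrains the receiver (x, y) to the line
    sin(t)*x - cos(t)*y = sin(t)*ex - cos(t)*ey through emitter (ex, ey).
    """
    emitters = np.asarray(emitters, float)
    t = np.asarray(bearings, float)
    A = np.column_stack([np.sin(t), -np.cos(t)])
    b = np.sin(t) * emitters[:, 0] - np.cos(t) * emitters[:, 1]
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# synthetic check: exact bearings generated from a known receiver position
true_pos = np.array([3.0, 2.0])
emitters = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])
bearings = np.arctan2(emitters[:, 1] - true_pos[1],
                      emitters[:, 0] - true_pos[0])
print(triangulate(emitters, bearings))  # ≈ [3. 2.]
```

With three non-collinear emitters the system is overdetermined and the least-squares solve absorbs small bearing noise, which is why the three-emitter case admits unique positioning.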
- New
- Research Article
- 10.3390/computation13110262
- Nov 6, 2025
- Computation
- Jingbo Huang + 6 more
Topology optimization (TO) with the variable density concept has made significant advances in academic research and engineering applications, yet it still encounters computational inefficiencies in the filtering process. This work introduces a novel filter implementation method that significantly accelerates the optimization process by adapting the k-d tree data structure. The proposed method converts traditional neighborhood search operations into highly efficient spatial searches while preserving solution accuracy. It inherently accommodates a comprehensive array of manufacturability constraints, including symmetry, local volume control, periodic patterning, stamping-oriented overhang control, and more, without increasing computational time. Extensive numerical examples validate the proposed method’s efficiency, yielding precise, scalable designs and achieving substantial acceleration relative to conventional methods. The method shows particular advantages in large-scale optimization problems and complex geometric restrictions, including unstructured meshes. This study explores a new paradigm for efficient constraint integration in topology optimization through advanced data structures, offering broad applicability in engineering design.
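The filtering step being accelerated can be sketched with a standard cone-weight density filter, with SciPy's cKDTree standing in for the paper's own k-d tree implementation; a radius search replaces the usual all-pairs neighborhood scan:

```python
import numpy as np
from scipy.spatial import cKDTree

def density_filter(centers, x, r):
    """Cone-weighted density filter via a k-d tree radius search.

    centers : (n, d) element centroids
    x       : (n,)  raw element densities
    r       : filter radius
    """
    tree = cKDTree(centers)
    x_f = np.empty_like(x)
    for i, nbrs in enumerate(tree.query_ball_point(centers, r)):
        d = np.linalg.norm(centers[nbrs] - centers[i], axis=1)
        w = r - d                      # linear (cone) weights, all >= 0
        x_f[i] = w @ x[nbrs] / w.sum()
    return x_f

# a uniform density field is a fixed point of the filter
xs, ys = np.meshgrid(np.arange(5.0), np.arange(5.0))
centers = np.column_stack([xs.ravel(), ys.ravel()])
out = density_filter(centers, np.full(25, 0.5), r=1.5)
```

Each query costs roughly O(log n) instead of O(n), which is where the scalability on large unstructured meshes comes from.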
- New
- Research Article
- 10.3390/pr13113574
- Nov 5, 2025
- Processes
- Haiwei Li + 4 more
On large-scale product assembly lines, such as those used in aircraft manufacturing, multiple assembly positions and devices often coexist within a single workstation, leading to complex task interactions. As a result, the problem of parallel task execution within workstations must be effectively addressed. This study focuses on positional and equipment conflicts within workstations. To manage these conflicts, a multi-objective optimization model is developed that integrates assembly sequence planning with the first type of assembly line balancing problem. This model aims to minimize the number of workstations, balance task loads, and reduce equipment procurement costs. An improved NSGA-II algorithm is proposed by incorporating artificial immune algorithm concepts and neighborhood search. A selection strategy based on dominance rate and concentration is introduced, and crossover and mutation operators are refined to enhance search efficiency under restrictive parallel constraints. Case studies reveal that a chromosome concentration weight of about 0.6 yields superior search performance. Compared with the traditional NSGA-II algorithm, the improved version achieves the same optimal number of workstations but provides a 5% better workload balance, 2% lower cost, a 76% larger hyper-volume, and a 133% increase in Pareto front solutions. The results demonstrate that the proposed algorithm effectively handles assembly line balancing with complex parallel constraints, improving Pareto front quality and maintaining diversity. It offers an efficient, practical optimization strategy for scheduling and resource allocation in large-scale assembly systems.
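For reference, the hyper-volume indicator reported in the results can be computed for a two-objective minimization front as a staircase sum; this is the textbook formulation, not code from the paper:

```python
def hypervolume_2d(front, ref):
    """2-D hypervolume (minimization) of a nondominated front w.r.t. a
    reference point: area dominated by the front inside the ref box,
    accumulated as staircase rectangles. `front` must be nondominated."""
    pts = sorted(front)               # ascending in objective 1
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:                  # y then descends along the front
        hv += (ref[0] - x) * (prev_y - y)
        prev_y = y
    return hv

print(hypervolume_2d([(1, 3), (2, 2), (3, 1)], (4, 4)))  # 6.0
```

A larger hyper-volume means the Pareto front dominates more of the objective space, which is why the 76% increase indicates better front quality.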
- New
- Research Article
- 10.3390/electronics14214331
- Nov 5, 2025
- Electronics
- Dimitrios Karapiperis + 2 more
The proliferation of algorithmically generated malicious URLs necessitates a shift from syntactic detection to semantic analysis. This paper introduces PhishGraph, a disk-aware Approximate Nearest Neighbor (ANN) search system designed to perform billion-scale semantic similarity searches on URL embeddings for threat intelligence applications. Traditional in-memory ANN indexes are prohibitively expensive at this scale, while existing disk-based solutions fail to address the unique challenges of the cybersecurity domain: the high velocity of streaming data, the complexity of hybrid queries involving rich metadata, and the highly skewed, adversarial nature of query workloads. PhishGraph addresses these challenges through a synergistic architecture built upon the foundational principles of DiskANN. Its core is a Vamana proximity graph optimized for SSD residency, but it extends this with three key innovations: a Hybrid Fusion Distance metric that natively integrates structured attributes into the graph’s topology for efficient constrained search; a dual-mode update mechanism that combines high-throughput batch consolidation with low-latency in-place updates for streaming data; and an adaptive maintenance policy that monitors query patterns and dynamically reconfigures graph hotspots to mitigate performance degradation from skewed workloads. Our comprehensive experimental evaluation on a billion-point dataset demonstrates that PhishGraph’s adaptive, hybrid design significantly outperforms strong baselines, offering a robust, scalable, and efficient solution for modern threat intelligence.
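The Hybrid Fusion Distance can be pictured as a convex blend of embedding distance and a structured-attribute mismatch penalty, so metadata constraints shape graph traversal directly. The alpha weight and Hamming-style penalty below are illustrative assumptions, not PhishGraph's actual metric:

```python
import numpy as np

def fused_distance(q_vec, q_meta, x_vec, x_meta, alpha=0.8):
    """Blend of embedding distance and attribute mismatch (a sketch,
    not the paper's exact formula): alpha weights the geometric term,
    (1 - alpha) the fraction of mismatched metadata fields."""
    emb = np.linalg.norm(q_vec - x_vec)
    meta = sum(a != b for a, b in zip(q_meta, x_meta)) / max(len(q_meta), 1)
    return alpha * emb + (1 - alpha) * meta

q = np.array([0.0, 0.0])
x = np.array([3.0, 4.0])
d_same = fused_distance(q, ("com", "https"), x, ("com", "https"))
d_diff = fused_distance(q, ("com", "https"), q, ("net", "https"))
```

Because the penalty is baked into the metric, the proximity graph itself steers searches toward candidates that satisfy the metadata constraint, rather than post-filtering results.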
- New
- Research Article
- 10.1080/23302674.2025.2573992
- Nov 5, 2025
- International Journal of Systems Science: Operations & Logistics
- Selva Kumar Chandrasekar + 2 more
Industries that can manufacture a diverse range of products tailored to customer needs are well-positioned to capitalize on market opportunities. However, managing a diverse product portfolio can place significant strain on both personnel and machinery on the manufacturing shop floor. Relying on manual or traditional scheduling methods for such a complex environment often leads to inefficiencies, as these methods struggle to optimize production schedules within constraints like machine availability and capability. This challenge is known as the Flexible Job-Shop Scheduling Problem (FJSP). This article proposes a Jaya-Tabu search Algorithm (JTA) to address the FJSP by generating optimal production schedules aimed at minimizing makespan, idle time, and tardiness. The JTA leverages the evolutionary process of the Jaya algorithm and the neighborhood search technique of the Tabu search algorithm to avoid local minima. Compared to other heuristic techniques available in the literature, the proposed JTA demonstrates superior performance in minimizing key production metrics such as makespan, idle time, and tardiness, making it a robust solution for complex manufacturing environments.
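The Jaya half of the hybrid uses the parameter-free textbook update, moving each solution toward the best and away from the worst; the Tabu component, which maintains a move-exclusion list, is omitted from this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def jaya_step(pop, fitness):
    """One Jaya move (minimization), textbook form:
    x' = x + r1*(best - |x|) - r2*(worst - |x|)."""
    best = pop[np.argmin(fitness)]
    worst = pop[np.argmax(fitness)]
    r1 = rng.random(pop.shape)
    r2 = rng.random(pop.shape)
    return pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))

# refine a random population on the sphere function, greedy acceptance
sphere = lambda p: (p ** 2).sum(axis=1)
pop = rng.uniform(-5, 5, (20, 3))
start = sphere(pop).min()
for _ in range(200):
    cand = jaya_step(pop, sphere(pop))
    better = sphere(cand) < sphere(pop)
    pop[better] = cand[better]
```

Because Jaya has no algorithm-specific tuning parameters, hybridizing it with Tabu search mainly adds memory to escape local minima rather than more knobs.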
- New
- Research Article
- 10.3390/make7040139
- Nov 5, 2025
- Machine Learning and Knowledge Extraction
- Abderrafik Laakel Hemdanou + 5 more
Spectral clustering has established itself as a powerful technique for data partitioning across various domains due to its ability to handle complex cluster structures. However, its computational efficiency remains a challenge, especially with large datasets. In this paper, we propose an enhancement of spectral clustering by integrating the Cover tree data structure to optimize the nearest neighbor search, a crucial step in the construction of similarity graphs. Cover trees are a type of spatial tree that allow for efficient exact nearest neighbor queries in high-dimensional spaces. By embedding this technique into the spectral clustering framework, we achieve significant reductions in computational cost while maintaining clustering accuracy. Through extensive experiments on random, synthetic, and real-world datasets, we demonstrate that our approach outperforms traditional spectral clustering methods in terms of scalability and execution speed, without compromising the quality of the resultant clusters. This work provides a more efficient utilization of spectral clustering in big data applications.
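The similarity-graph step being accelerated looks like the sketch below, with SciPy's cKDTree standing in for the Cover tree; both answer exact k-NN queries, which is the property the approach relies on:

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse import csr_matrix

def knn_similarity_graph(X, k=10, sigma=1.0):
    """k-NN Gaussian similarity graph for spectral clustering, built
    with a spatial tree instead of an all-pairs distance scan."""
    tree = cKDTree(X)
    dist, idx = tree.query(X, k=k + 1)      # first hit is the point itself
    dist, idx = dist[:, 1:], idx[:, 1:]
    n = X.shape[0]
    rows = np.repeat(np.arange(n), k)
    W = csr_matrix((np.exp(-dist.ravel() ** 2 / (2 * sigma ** 2)),
                    (rows, idx.ravel())), shape=(n, n))
    return W.maximum(W.T)                    # symmetrize

X = np.random.default_rng(0).normal(size=(50, 3))
W = knn_similarity_graph(X, k=5)
```

The eigendecomposition that follows in spectral clustering is unchanged; only the graph-construction cost drops from quadratic toward n log n.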
- New
- Research Article
- 10.1016/j.asoc.2025.114143
- Nov 1, 2025
- Applied Soft Computing
- Lepa Babic + 7 more
A multivariate methodology combining recurrent neural networks with the modified variable neighborhood search algorithm for unemployment forecasting
- New
- Research Article
- 10.1016/j.cor.2025.107193
- Nov 1, 2025
- Computers & Operations Research
- Malek Masmoudi + 2 more
Generalized variable neighborhood search algorithm for vehicle routing problem with time windows and synchronization
- New
- Research Article
- 10.1016/j.inffus.2025.103272
- Nov 1, 2025
- Information Fusion
- Eduardo V.L. Barboza + 4 more
IncA-DES: An incremental and adaptive dynamic ensemble selection approach using online K-d tree neighborhood search for data streams with concept drift
- New
- Research Article
- 10.1016/j.matcom.2025.04.003
- Nov 1, 2025
- Mathematics and Computers in Simulation
- Ziyuan Liang + 1 more
An enhanced Kepler optimization algorithm with global attraction model and dynamic neighborhood search for global optimization and engineering problems
- New
- Research Article
- 10.1016/j.trc.2025.105312
- Nov 1, 2025
- Transportation Research Part C: Emerging Technologies
- Yu Wang + 4 more
Solving the Integrated-tasks satellite range scheduling problem: A surrogate-assisted variable neighborhood search approach
- New
- Research Article
- 10.3390/app152111540
- Oct 29, 2025
- Applied Sciences
- Yang Wang + 2 more
This paper proposes a decorrelation scheme based on product quantization, termed Reference-Vector Removed Product Quantization (RvRPQ), for approximate nearest neighbor (ANN) search. The core idea is to capture the redundancy among database vectors by representing them with compactly encoded reference-vectors, which are then subtracted from the original vectors to yield residual vectors. We provide a theoretical derivation for obtaining the optimal reference-vectors. This preprocessing step significantly improves the quantization accuracy of the subsequent product quantization applied to the residuals. To maintain low online computational complexity and control memory overhead, we apply vector quantization to the reference-vectors and allocate only a small number of additional bits to store their indices. Experimental results show that RvRPQ substantially outperforms state-of-the-art ANN methods in terms of retrieval accuracy, while preserving high search efficiency.
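The encode path can be sketched as subtract-then-quantize: pick the nearest reference-vector, remove it, then product-quantize the residual per subspace. The reference codebook is taken as given here, whereas the paper derives optimal reference-vectors; function names are ours:

```python
import numpy as np

def encode_rvrpq(X, refs, pq_codebooks):
    """Reference-vector-removed PQ, sketched: nearest reference vector
    is subtracted, then each residual subspace is quantized against
    its own small codebook."""
    ref_ids = np.argmin(((X[:, None, :] - refs) ** 2).sum(-1), axis=1)
    resid = X - refs[ref_ids]
    subs = np.split(resid, len(pq_codebooks), axis=1)
    codes = [np.argmin(((s[:, None, :] - cb) ** 2).sum(-1), axis=1)
             for s, cb in zip(subs, pq_codebooks)]
    return ref_ids, np.column_stack(codes)

def decode_rvrpq(ref_ids, codes, refs, pq_codebooks):
    """Reconstruct: reference vector plus concatenated residual codewords."""
    parts = [cb[codes[:, j]] for j, cb in enumerate(pq_codebooks)]
    return refs[ref_ids] + np.hstack(parts)

# toy check: when X coincides with refs and 0 is in each codebook,
# the round trip is exact
X = np.array([[1.0, 2.0, 3.0, 4.0], [0.5, 0.5, 0.5, 0.5]])
refs = X.copy()
cbs = [np.array([[0.0, 0.0], [1.0, 1.0]])] * 2
ids, codes = encode_rvrpq(X, refs, cbs)
```

Because residuals have lower variance than the raw vectors, the same PQ bit budget quantizes them more accurately, at the cost of a few extra bits per vector for the reference index.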
- New
- Research Article
- 10.1002/net.70008
- Oct 29, 2025
- Networks
- Biljana Roljić + 1 more
This article studies a vehicle routing problem involving a fleet of heavy‐duty vehicles and pickup‐and‐delivery requests for crude items that are both heavy and high‐temperature. The objective is to route the fleet in a way that maximizes resource efficiency and operational efficiency while simultaneously avoiding thermal overload of any vehicle in the fleet. This is achieved by considering the curb weight of the vehicles and the weight of the items being transported when optimizing vehicle routing. Additionally, transshipments of items en route are considered. To preserve the fleet's mechanical condition, thermal overload of the vehicles must be avoided; therefore, the temperatures of items and vehicles, as well as their interdependence, are considered. We introduce the vehicle temperature predictor, which is based on Newton's law of cooling and allows us to estimate the vehicle temperature en route. However, considering the vehicle temperature presents two challenges. First, solution feasibility is often jeopardized by vehicle temperature constraints. Second, solutions are produced that are heat‐efficient but not resource‐efficient. Our metaheuristic solution framework addresses both challenges by means of a large neighborhood search using adapted, novel, and feature‐based heuristics. We provide valuable insights into resource‐efficient and heat‐efficient routing policies by experimenting with test instances that mimic real‐world data from a partnered steel plant. Additionally, we present a mixed-integer nonlinear programming formulation and demonstrate the effectiveness of our proposed metaheuristic solution approach by obtaining optimal solutions for small test instances.
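The temperature predictor's core is Newton's law of cooling, which has a closed form; parameter names and the overload check below are illustrative, not the paper's exact model of item-vehicle heat exchange:

```python
import math

def vehicle_temp(t, T0, T_env, k):
    """Newton's law of cooling: the vehicle relaxes exponentially from
    its current temperature T0 toward ambient T_env at rate k:
    T(t) = T_env + (T0 - T_env) * exp(-k * t)."""
    return T_env + (T0 - T_env) * math.exp(-k * t)

def thermal_overload(route_temps, T_max):
    """Illustrative feasibility check: no predicted temperature along
    the route may exceed the overload limit."""
    return any(T > T_max for T in route_temps)

# a vehicle loaded at 800 degrees cools toward 25-degree ambient
temps = [vehicle_temp(t, 800.0, 25.0, 0.1) for t in range(0, 60, 5)]
```

Loading a hot item would reset T0 upward mid-route, which is exactly why feasibility (the first challenge in the abstract) interacts with routing decisions.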
- New
- Research Article
- 10.1080/17445302.2025.2581063
- Oct 29, 2025
- Ships and Offshore Structures
- Pengfei Lin + 3 more
Optimized production scheduling is critical for enhancing manufacturing efficiency in ship block assembly workshops, where complex multi-stage processes pose significant planning challenges. This paper investigates a two-stage scheduling problem in ship plane block manufacturing. The first stage involves processing multi-level parts with precedence constraints, while the second stage focuses on assembling these parts into final blocks under material integrity, multi-line allocation, and sequence specifications. The problem is formulated as the Assembly Job Shop Scheduling Problem with Parts Precedence, Integrity, Multi-line, and Sequence constraints (AJSP-PIMS), and a mixed-integer linear programming (MILP) model is established to characterize it mathematically by minimizing makespan. To efficiently solve this NP-hard problem, an Improved Genetic Algorithm integrating assembly-driven initialization, cascading update operations, and critical path-guided variable neighborhood search (GA-ACC) is developed to increase search efficiency. Comprehensive experimental results show significant performance improvements over existing methods in solving AJSP-PIMS problems and substantially enhancing shipbuilding production efficiency.
- New
- Research Article
- 10.3390/math13213428
- Oct 28, 2025
- Mathematics
- Yixuan Zhou + 3 more
The joint optimization of storage location assignment and order picking efficiency for fresh products has become a vital challenge in intelligent warehousing because of the perishable nature of goods, strict temperature requirements, and the need to balance cost and efficiency. This study proposes a comprehensive mathematical model that integrates five critical cost components: picking path, storage layout deviation, First-In-First-Out (FIFO) penalty, energy consumption, and picker workload balance. To solve this NP-hard combinatorial optimization problem, we develop a Particle Swarm-guided hybrid Genetic-Simulated Annealing (PS-GSA) algorithm that synergistically combines global exploration by Particle Swarm Optimization (PSO), population evolution of Genetic Algorithm (GA), and the local refinement and probabilistic acceptance of Simulated Annealing (SA) enhanced with Variable Neighborhood Search (VNS). Computational experiments based on real enterprise data demonstrate the superiority of PS-GSA over benchmark algorithms (GA, SA, HPSO, and GSA) in terms of solution quality, convergence behavior, and stability, achieving 4.08–9.43% performance improvements in large-scale instances. The proposed method not only offers a robust theoretical contribution to combinatorial optimization but also provides a practical decision-support tool for fresh e-commerce warehousing, enabling managers to flexibly weigh efficiency, cost, and sustainability under different strategic priorities.
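The SA ingredient of PS-GSA relies on the standard Metropolis rule for accepting candidate moves; a minimal version (the cooling schedule and neighborhood moves of the full algorithm are omitted):

```python
import math
import random

def sa_accept(delta, T):
    """Metropolis acceptance for minimization: always take improvements
    (delta <= 0), take worsening moves with probability exp(-delta / T)."""
    return delta <= 0 or random.random() < math.exp(-delta / T)
```

At high temperature almost any move passes, allowing escape from local optima; as T cools, the rule degenerates to greedy descent, which is the "probabilistic acceptance" the hybrid leans on.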
- New
- Research Article
- 10.3390/app152111499
- Oct 28, 2025
- Applied Sciences
- Haoran He + 5 more
To overcome the limitations of traditional methods in emergency response scenarios, such as limited adaptability during the search process and a tendency to fall into local optima, which reduce the overall efficiency of emergency supply distribution, this study develops a Vehicle Routing Problem (VRP) model that incorporates multiple constraints, including service time windows, demand satisfaction, and fleet size. A multi-objective optimization function is formulated to minimize total travel time, reduce distribution imbalances, and maximize demand satisfaction. To solve this problem, a hybrid deep reinforcement learning framework is proposed that integrates Adaptive Large Neighborhood Search (ALNS) with Proximal Policy Optimization (PPO). In this framework, ALNS provides the baseline search, while the PPO policy network dynamically adjusts the operator weights, acceptance criteria, and perturbation intensities to achieve adaptive search optimization and improve global solution quality. Experimental validation on benchmark instances of different scales shows that, compared with two baseline methods, traditional ALNS and the Improved Ant Colony Algorithm (IACA), the proposed algorithm reduces the average objective function value by approximately 23.6% and 25.9%, shortens the average route length by 7.8% and 11.2%, and achieves notable improvements across multiple performance indicators.
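The operator-weight mechanism that the PPO policy replaces is the classic ALNS adaptation rule (Ropke and Pisinger style): roulette-wheel selection by weight, with weights smoothed toward the scores operators earn. A sketch with our own class and parameter names:

```python
import numpy as np

rng = np.random.default_rng(2)

class OperatorBank:
    """Classic ALNS operator adaptation: pick destroy/repair operators
    by roulette wheel, then blend each weight toward its earned score.
    (In the paper's framework a PPO policy sets these weights instead.)"""

    def __init__(self, n_ops, decay=0.8):
        self.w = np.ones(n_ops)
        self.decay = decay

    def pick(self):
        """Roulette-wheel choice proportional to current weights."""
        return int(rng.choice(len(self.w), p=self.w / self.w.sum()))

    def reward(self, op, score):
        """Exponential smoothing toward the score the operator earned."""
        self.w[op] = self.decay * self.w[op] + (1 - self.decay) * score

bank = OperatorBank(3)
bank.reward(0, 10.0)   # operator 0 found an improving solution
```

Replacing this fixed smoothing rule with a learned policy lets the search adapt acceptance criteria and perturbation intensity to the instance at hand, not just to recent operator scores.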
- New
- Research Article
- 10.1177/18724981251388888
- Oct 28, 2025
- Intelligent Decision Technologies
- Dimitrios Karapiperis + 3 more
The discipline of Entity Resolution (ER), the process of identifying and linking records that refer to the same real-world entity, has been fundamentally reshaped by the adoption of high-dimensional vector embeddings. This transformation reframes ER as a large-scale Approximate Nearest Neighbor Search (ANNS) problem, making the choice of ANNS architecture a critical determinant of system performance. This paper provides a deep architectural comparison and a novel, large-scale empirical evaluation of the two dominant ANNS paradigms: graph-based methods (HNSW, DiskANN) and partition-based methods (Faiss-IVF+PQ, ScaNN). We introduce a new semi-synthetic benchmark tailored to the ER task, consisting of two one-million-vector datasets with a known ground truth. On this benchmark, we conduct a comprehensive evaluation, measuring not only total query time but also disaggregated blocking and matching times, alongside canonical ER quality metrics: precision, recall, and F1-score. Our findings reveal that partition-based methods, particularly ScaNN, offer superior performance in high-throughput, moderate-recall scenarios, while graph-based methods like HNSW and DiskANN are unequivocally superior for applications demanding the highest levels of matching quality. This work provides a nuanced, application-centric analysis that culminates in a set of actionable recommendations for practitioners designing modern data integration and retrieval systems.
- New
- Research Article
- 10.17586/2226-1494-2025-25-5-902-909
- Oct 27, 2025
- Scientific and Technical Journal of Information Technologies, Mechanics and Optics
- N A Tomilov
The modern approach to searching textual and multimodal data in large collections involves transforming the documents into vector embeddings. To store these embeddings efficiently, different approaches can be used, such as quantization, which results in loss of precision and reduced search accuracy. Previously, a method was proposed that reduces the loss of precision during quantization: the embeddings are clustered with the k-Means algorithm, then a bias, or delta, defined as the difference between the cluster centroid and the vector embedding, is computed, and only this delta is quantized. In this article, a modification of that method is proposed with a different clustering algorithm, an ensemble of Oblivious Decision Trees. The essence of the method lies in training an ensemble of binary Oblivious Decision Trees. This ensemble is used to compute a hash for each of the original vectors, and vectors with the same hash are considered to belong to the same cluster. When the resulting cluster count is too large or too small for the dataset, a reclustering step is performed. Each cluster is then stored in two files: the first contains the per-vector biases (deltas), and the second contains identifiers and the positions of the data in the first file. The data in the first file is quantized and then compressed with a general-purpose compression algorithm. Using Oblivious Decision Trees reduces the storage size compared to the same storage organization with k-Means clustering. The proposed clustering method was tested on the Fashion-MNIST-784-Euclidean and NYT-256-angular datasets against k-Means clustering. It demonstrates better compression quality than clustering via k-Means, with up to 4.7% less storage for NF4 quantization under the Brotli compression algorithm. For other compression algorithms, the storage size reduction is less noticeable. However, the proposed clustering algorithm produces a larger error than k-Means, up to 16% in the worst-case scenario. Compared to Parquet, the proposed method yields a smaller error on the Fashion-MNIST-784-Euclidean dataset when using the FP8 and NF4 quantizations, and better compression for all tested quantization types on the NYT-256-angular dataset. These results suggest that the proposed clustering method can be utilized not only for nearest neighbor search applications but also for compression tasks in which an increase in the quantization error can be tolerated.
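The two core steps described above, hashing by oblivious threshold tests and storing quantized centroid deltas, can be sketched as follows; the bit-packing, scale choice, and function names are illustrative assumptions:

```python
import numpy as np

def odt_hash(X, feat_idx, thresholds):
    """Oblivious-tree hash: every vector answers the same ordered
    (feature > threshold) tests; vectors sharing a hash share a cluster."""
    bits = (X[:, feat_idx] > thresholds).astype(np.int64)
    return bits @ (1 << np.arange(len(feat_idx), dtype=np.int64))

def encode_deltas(X, centroids, assign, bits=8):
    """Store only the quantized per-vector delta from its cluster centroid."""
    deltas = X - centroids[assign]
    scale = np.abs(deltas).max() / (2 ** (bits - 1) - 1)
    if scale == 0:
        scale = 1.0
    return np.round(deltas / scale).astype(np.int8), scale

def decode_deltas(q, scale, centroids, assign):
    """Reconstruct vectors from centroid plus de-quantized delta."""
    return centroids[assign] + q.astype(np.float64) * scale

X = np.array([[0.2, 0.9], [0.8, 0.1], [0.25, 0.85]])
h = odt_hash(X, np.array([0, 1]), np.array([0.5, 0.5]))
centroids = np.array([[0.0, 1.0], [1.0, 0.0]])
assign = np.array([0, 1, 0])
q, s = encode_deltas(X, centroids, assign)
```

Because deltas are small relative to the raw coordinates, they quantize with less error and compress better, which is the effect the storage comparison measures.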
- New
- Research Article
- 10.1007/s10878-025-01364-6
- Oct 25, 2025
- Journal of Combinatorial Optimization
- Zhaohui Wang + 5 more
Reinforcement learning-guided adaptive large neighborhood search for vehicle routing problem with time windows
- New
- Research Article
- 10.1080/00207543.2025.2564269
- Oct 23, 2025
- International Journal of Production Research
- Marco Giacomelli + 3 more
High turnover and shortage of order pickers are great concerns in the logistics industry. Manual order picking is physically demanding and impacts workers' well-being. Recently, employers have agreed to adjust picking norm times to improve pickers' well-being. However, the literature lacks understanding of the impact of picking times on pickers' well-being. We propose a novel approach to order picker planning that considers the physical fatigue of workers as a measure of concern for well-being. The model we develop generates batches of orders, then assigns and sequences them while considering the impact of picking time on physical fatigue. Crucially, we consider the behaviour of pickers and assume that workers are empowered to take spontaneous breaks when they reach critical fatigue levels. An adaptive large neighbourhood search algorithm is proposed to address the problem, and extensive experiments are conducted to generate managerial insights. The results of traditional retail scenarios show that imposing strict time targets can harm well-being and hurt picking efficiency. Increasing picking norm times by only 1% can reduce picker fatigue by 10% on average. Furthermore, warehouse design decisions can have an impact on the physical fatigue of workers. This research demonstrates the critical need to reevaluate operational strategies and prioritise worker empowerment to achieve sustainable order picking.
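The behavioural assumption, pickers taking spontaneous breaks at critical fatigue, can be pictured with a toy simulation loop; the accumulation and recovery rates below are purely illustrative, not the paper's calibrated fatigue model:

```python
def simulate_shift(tasks, rate=0.05, recovery=0.2, critical=0.8, break_len=5):
    """Toy picker model: fatigue builds with each minute of picking, and
    the picker takes a spontaneous break whenever it crosses the critical
    level. Returns total elapsed minutes including break time."""
    fatigue, minutes = 0.0, 0
    for duration in tasks:           # each task is a picking time in minutes
        for _ in range(duration):
            minutes += 1
            fatigue = min(1.0, fatigue + rate)
            if fatigue >= critical:
                minutes += break_len                   # spontaneous break
                fatigue *= (1 - recovery) ** break_len  # partial recovery
    return minutes

short = simulate_shift([5])    # never hits the critical level
long = simulate_shift([40])    # triggers at least one break
```

Even this toy loop shows the scheduling tension the paper studies: tighter norm times push fatigue to the critical level sooner, and the resulting breaks erode the very efficiency the targets were meant to secure.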