Articles published on Large-scale Environment
- New
- Research Article
- 10.34148/teknika.v14i3.1289
- Nov 3, 2025
- Teknika
- Gabriella Youzanna Rorong + 2 more
The escalating volume and often irregular structure of social assistance data pose significant challenges for efficient data retrieval in management systems. Traditional search algorithms, such as linear and binary search, frequently encounter limitations when handling these large-scale datasets. This research conducts a comparative study between two hybrid algorithms, Jump Binary Search (JBS) and Interpolation Extrapolation Search (IES), aiming to identify the most effective method for a web-based social assistance data management system. Evaluations were performed on a dataset comprising 480 names of social assistance recipients, measuring the number of iterations, execution time, and search accuracy. The results demonstrate IES's superiority over JBS in both iteration efficiency and execution speed. IES exhibited an execution time ranging from 0.002 to 0.006 ms, whereas JBS had an execution time ranging from 0.015 to 0.039 ms. Based on these findings, IES was successfully implemented into a Laravel-based application utilizing a MySQL database. This system is capable of executing searches in less than one second per request. This implementation significantly enhances the system's adaptability and provides an effective search solution for dynamic, large-scale data environments, offering rapid and efficient access to data.
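The abstract does not spell out the internals of Interpolation Extrapolation Search, so the sketch below shows only plain interpolation search over a sorted list of integer keys (the paper's system searches recipient names, which would additionally need an order-preserving key mapping). The function name, the iteration counter, and the sample IDs are illustrative assumptions, not the published implementation.

```python
def interpolation_search(keys, target):
    """Minimal interpolation search over a sorted list of integer keys.

    Illustrative sketch only: 'keys' and 'target' are hypothetical sorted
    recipient IDs. Returns (index, iterations) or (-1, iterations).
    """
    lo, hi, iterations = 0, len(keys) - 1, 0
    while lo <= hi and keys[lo] <= target <= keys[hi]:
        iterations += 1
        if keys[hi] == keys[lo]:                      # avoid division by zero
            pos = lo
        else:
            # estimate the probe position from the key distribution
            pos = lo + (target - keys[lo]) * (hi - lo) // (keys[hi] - keys[lo])
        if keys[pos] == target:
            return pos, iterations
        if keys[pos] < target:
            lo = pos + 1
        else:
            hi = pos - 1
    return -1, iterations

# Example: uniformly distributed keys converge in very few probes.
ids = list(range(0, 4800, 10))          # 480 sorted recipient IDs
print(interpolation_search(ids, 2500))  # -> (250, 1)
```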
- New
- Research Article
- 10.1016/j.neunet.2025.107781
- Nov 1, 2025
- Neural networks : the official journal of the International Neural Network Society
- Haoran Wang + 4 more
HG2P: Hippocampus-inspired high-reward graph and model-free Q-gradient penalty for path planning and motion control.
- New
- Research Article
- 10.1016/j.patcog.2025.111765
- Nov 1, 2025
- Pattern Recognition
- Yichao Cao + 3 more
SmokeAgent: Multimodal agent for fine-grained smoke event analysis in large-scale wild environments
- New
- Research Article
- 10.1016/j.adhoc.2025.103974
- Nov 1, 2025
- Ad Hoc Networks
- Saugata Roy + 2 more
A multi-depot provisioned UAV swarm trajectory optimization scheme for collaborative data acquisition in a large-scale IoT environment
- New
- Research Article
- 10.70849/ijsci02102025142
- Oct 30, 2025
- International Journal of Sciences and Innovation Engineering
- Dr Deepak Tomar
AI-powered security for 5G and 6G communication networks is poised to revolutionize the protection of next-generation wireless infrastructures by leveraging advanced artificial intelligence and machine learning techniques to address complex and evolving threats. The rapid proliferation of connected devices, network virtualization, and distributed edge intelligence in 5G and 6G environments creates unprecedented vulnerabilities and a much broader attack surface, necessitating autonomous and robust solutions. AI-driven security frameworks offer adaptive intrusion and anomaly detection, automated incident response, privacy-preserving mechanisms, and predictive analytics that go far beyond traditional security approaches, enabling real-time threat mitigation across heterogeneous, large-scale environments. These networks also face unique challenges such as adversarial attacks on AI models, data poisoning, and the need for explainable, trustworthy AI to ensure compliance and operational resilience. The integration of federated learning, reinforcement learning, and zero-trust architectures demonstrates how AI can support scalable, dynamic, and transparent security operations, while future research will continue to expand the capabilities and address ethical, policy, and quantum-resistance issues. Ultimately, the fusion of AI and advanced communication technologies promises to secure digital societies with proactive, intelligent, and holistic defense strategies tailored for the demands of 5G and 6G networks.
- New
- Research Article
- 10.1080/10095020.2025.2570025
- Oct 30, 2025
- Geo-spatial Information Science
- Ning Zhou + 3 more
ABSTRACT With the expansion of human activities into mountainous regions, jungles, deserts, rural roads, and off-road terrains, there is an increasing demand for reliable navigation and location services in wild unstructured environments. Unlike urban environments, where established methodologies exist for constructing navigation maps, roadless unstructured environments lack comprehensive frameworks for navigation map construction. The conventional waypoint-based structures, which are well-suited to urban environments, are often ill-suited to the expansive and unstructured nature of field regions. Moreover, the absence of predefined user paths necessitates a fundamentally different approach to map construction. To address this challenge, we propose a novel method for generating navigation maps in unstructured roadless environments. The new methodology leverages remote sensing as input for environmental perception, translating user traversability into geographical parameters and constructing a navigation mesh as a computational map. Compared with existing methods, the new method can adapt to large-scale unstructured environments. To validate the proposed method, reliability sampling tests and operational experiments were conducted on traversable area delineation and navigation map generation. The experimental results indicate an 84% accuracy in traversable area analysis, with the constructed navigation map effectively supporting positioning and path planning while significantly reducing computational complexity in large-scale path planning tasks. The navigation mesh generated through this method effectively enhances the implementation of navigation and positioning services in off-road and roadless environments. The proposed method facilitates the construction of navigation maps capable of delivering navigation and localization services without requiring human presence in the target area.
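The abstract does not detail how the traversability analysis is turned into a navigation mesh; the sketch below is a crude stand-in that thresholds a hypothetical traversability raster, merges each row's walkable runs into rectangular regions, and links horizontally overlapping regions in adjacent rows into a small navigation graph. The names, the threshold, and the synthetic raster are assumptions, not the published method.

```python
import numpy as np

def build_nav_regions(traversability, threshold=0.5):
    """Very coarse navigation-graph sketch from a traversability raster.

    Each maximal run of walkable cells in a row becomes a region (node);
    regions in adjacent rows that overlap horizontally are connected (edge).
    'traversability' is a 2D array in [0, 1] assumed to come from
    remote-sensing analysis.
    """
    walkable = traversability >= threshold
    regions, edges = [], []
    prev_row = []                                   # regions of the previous row
    for r in range(walkable.shape[0]):
        row_regions, c = [], 0
        while c < walkable.shape[1]:
            if walkable[r, c]:
                start = c
                while c < walkable.shape[1] and walkable[r, c]:
                    c += 1
                idx = len(regions)
                regions.append((r, start, c - 1))   # (row, col_start, col_end)
                row_regions.append(idx)
            else:
                c += 1
        # connect to horizontally overlapping regions of the previous row
        for i in row_regions:
            _, s1, e1 = regions[i]
            for j in prev_row:
                _, s2, e2 = regions[j]
                if s1 <= e2 and s2 <= e1:
                    edges.append((j, i))
        prev_row = row_regions
    return regions, edges

# Tiny example: a synthetic 4x5 traversability map.
t = np.array([[0.9, 0.9, 0.1, 0.8, 0.8],
              [0.9, 0.9, 0.9, 0.9, 0.2],
              [0.1, 0.1, 0.9, 0.9, 0.9],
              [0.9, 0.9, 0.9, 0.1, 0.1]])
regions, edges = build_nav_regions(t)
print(len(regions), "regions,", len(edges), "links")   # 5 regions, 4 links
```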
- New
- Research Article
- 10.3390/electronics14214235
- Oct 29, 2025
- Electronics
- Hayong Jeong + 3 more
In modern high-performance computing (HPC) and large-scale data processing environments, the efficient utilization and scalability of memory resources are critical determinants of overall system performance. Architectures such as non-uniform memory access (NUMA) and tiered memory systems frequently suffer performance degradation due to remote accesses stemming from shared data among multiple tasks. This paper proposes LACX, a shared data migration technique leveraging Compute Express Link (CXL), to address these challenges. LACX preserves the migration cycle of automatic NUMA balancing (AutoNUMA) while identifying shared data characteristics and migrating such data to CXL memory instead of DRAM, thereby maximizing DRAM locality. The proposed method utilizes existing kernel structures and data to efficiently identify and manage shared data without incurring additional overhead, and it effectively avoids conflicts with AutoNUMA policies. Evaluation results demonstrate that, although remote accesses to shared data can degrade performance in low-tier memory scenarios, LACX significantly improves overall memory bandwidth utilization and system performance in high-tier memory and memory-intensive workload environments by distributing DRAM bandwidth. This work presents a practical, lightweight approach to shared data management in tiered memory environments and highlights new directions for next-generation memory management policies.
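LACX itself works inside the kernel on AutoNUMA hint-fault data; the toy sketch below only illustrates the classification rule the abstract describes, namely that data shared by multiple tasks is steered to CXL memory while task-private data stays in local DRAM. The access-log format, the threshold, and the tier names are hypothetical.

```python
from collections import defaultdict

def decide_placement(access_log, shared_threshold=2):
    """Toy placement policy inspired by the shared-data idea in LACX.

    'access_log' is a hypothetical list of (page_id, task_id) fault records.
    Pages touched by at least 'shared_threshold' distinct tasks go to the CXL
    tier so that local DRAM keeps task-private (high-locality) data. This is a
    user-space illustration of the classification rule only, not the kernel
    mechanism.
    """
    tasks_per_page = defaultdict(set)
    for page, task in access_log:
        tasks_per_page[page].add(task)
    return {page: ("cxl" if len(tasks) >= shared_threshold else "local_dram")
            for page, tasks in tasks_per_page.items()}

log = [(0x10, "A"), (0x10, "B"), (0x20, "A"), (0x30, "B"), (0x30, "C"), (0x40, "A")]
print(decide_placement(log))
# {16: 'cxl', 32: 'local_dram', 48: 'cxl', 64: 'local_dram'}
```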
- New
- Research Article
- 10.1088/1361-6501/ae1858
- Oct 28, 2025
- Measurement Science and Technology
- Zhiqin Zhang + 5 more
Abstract Insulators are critical components in transmission lines. Common defects, such as structural loss of the insulator caused by spontaneous rupture, breakage, and fouling, can lead to short circuits and tripping faults, posing serious threats to power grid stability and the safety of the power supply. However, in practical applications, insulator defect detection faces several challenges, including small target sizes, insufficient representation of multiscale features, complex backgrounds, and imbalanced datasets with a limited number of defective samples. Traditional detection methods often struggle with missed detections of small targets and lack robustness in scenarios with large-scale variations and complex environments. To address these issues, this paper proposes an enhanced detection model based on YOLOv8s. The model introduces an Iterative Attentional Feature Fusion (iAFF) module to optimize multiscale feature representation and incorporates a Generalized Dynamic Feature Pyramid Network (GDFPN) to improve feature retention for small target detection, thereby enhancing robustness in complex backgrounds. Additionally, to mitigate the problem of limited defective sample data, the Stable Diffusion generative model is utilized to augment the dataset, effectively improving detection performance in small-sample scenarios. Experimental results demonstrate that the proposed method significantly outperforms the original YOLOv8s model in terms of recall, accuracy, and precision on the insulator defect dataset. The model exhibits strong detection capabilities and generalization performance, making it well-suited for real-world challenges such as small targets, multiscale variation, and complex backgrounds.
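The iAFF module referenced here is an attentional feature-fusion design from the literature; the PyTorch sketch below shows a single simplified fusion step (local plus global channel attention producing a soft blend of two feature maps), not the paper's exact iAFF or GDFPN implementation, and the channel count and reduction ratio are illustrative.

```python
import torch
import torch.nn as nn

class SimpleAFF(nn.Module):
    """Simplified attentional feature-fusion step: a sigmoid attention map,
    built from local (per-pixel) and global (pooled) channel attention,
    softly blends two same-shaped feature maps. Illustrative only."""
    def __init__(self, channels=64, reduction=4):
        super().__init__()
        mid = max(channels // reduction, 1)
        self.local_att = nn.Sequential(            # per-pixel channel attention
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.BatchNorm2d(mid),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
            nn.BatchNorm2d(channels),
        )
        self.global_att = nn.Sequential(           # globally pooled channel attention
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, mid, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(mid, channels, kernel_size=1),
        )
        self.sigmoid = nn.Sigmoid()

    def forward(self, x, y):
        s = x + y                                   # initial integration of the two branches
        w = self.sigmoid(self.local_att(s) + self.global_att(s))
        return x * w + y * (1.0 - w)                # attention-weighted fusion

fuse = SimpleAFF(channels=64)
out = fuse(torch.randn(1, 64, 40, 40), torch.randn(1, 64, 40, 40))
print(out.shape)  # torch.Size([1, 64, 40, 40])
```

The published iAFF applies this kind of attention step iteratively (using the fused output to recompute the attention weights), which the single-step sketch above omits.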
- New
- Research Article
- 10.1051/0004-6361/202557135
- Oct 27, 2025
- Astronomy & Astrophysics
- Daye Lim + 9 more
Small-scale extreme-ultraviolet (EUV) transient brightenings are observationally abundant and critically important to investigate. Determining whether they share the same physical mechanisms as larger-scale flares would have significant implications for the coronal heating problem. A recent study has revealed that quasi-periodic pulsations (QPPs), a common feature in both solar and stellar flares, could also be present in EUV brightenings in the quiet Sun (QS). We aim to characterise the properties of EUV brightenings and their associated QPPs in both QS and active regions (ARs) using unprecedented 1 s cadence observations from Solar Orbiter’s Extreme Ultraviolet Imager (Solar Orbiter/EUI). We applied an automated detection algorithm to analyse statistical properties of EUV brightenings. The QPPs were identified using complementary techniques optimised for both stationary and non-stationary signals, including a Fourier-based method, ensemble empirical mode decomposition, and wavelet analysis. Over 500 000 and 300 000 brightenings were detected, respectively, in ARs and QS regions. Brightenings with lifetimes shorter than 3 s were detected, demonstrating the importance of high temporal resolution. The QPP occurrence rates were approximately 11% in AR brightenings and 9% in QS brightenings, with non-stationary QPPs being more common than stationary ones. The QPP periods range from 5 to over 500 s and display similar distributions between the ARs and QS regions. Moderate linear correlations were found between QPP periods and the lifetime and spatial scale of the associated brightenings, while no significant correlation was found with peak brightness. We found a consistent power-law scaling, with a weak correlation and a large spread, between QPP period and lifetime in EUV brightenings, solar flares, and stellar flares. The results support the interpretation that EUV brightenings may represent a small-scale manifestation of the same physical mechanisms driving larger solar and stellar flares. Furthermore, the similarity in the statistical properties of EUV brightenings and their associated QPPs between ARs and QS regions suggests that the underlying generation mechanisms might not strongly depend on the large-scale magnetic environment.
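As a minimal illustration of the Fourier-based side of QPP detection (the study also uses ensemble empirical mode decomposition and wavelet analysis with proper significance testing), the sketch below detrends a synthetic 1 s cadence light curve and reads the dominant period off its power spectrum. The synthetic signal and the quadratic detrending are assumptions, not the paper's pipeline.

```python
import numpy as np

# Build a synthetic 1 s cadence light curve with a 30 s oscillation on top of
# a slow trend, remove the trend, and locate the dominant period in the
# Fourier power spectrum.
cadence = 1.0                                   # seconds between samples
t = np.arange(0, 300, cadence)
flux = 100 + 0.05 * t + 3.0 * np.sin(2 * np.pi * t / 30.0) \
       + np.random.default_rng(0).normal(0, 0.5, t.size)

trend = np.polyval(np.polyfit(t, flux, 2), t)   # crude slow-trend removal
resid = flux - trend

freqs = np.fft.rfftfreq(t.size, d=cadence)
power = np.abs(np.fft.rfft(resid)) ** 2
peak = freqs[1:][np.argmax(power[1:])]          # skip the zero-frequency bin
print(f"dominant period ~ {1.0 / peak:.1f} s")  # ~30 s
```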
- New
- Research Article
- 10.1115/1.4070173
- Oct 18, 2025
- Journal of Dynamic Systems, Measurement, and Control
- Jiajun Shen + 3 more
Abstract This paper explores the complex behavior of advanced persistent threat (APT) attacks, characterized by a dual threat: the sophisticated manipulation of adversarial disturbance inputs and the exacerbation of system vulnerabilities due to environmental uncertainties. To address these security concerns in large-scale multi-agent industrial cyber-physical systems (CPSs), we develop a decentralized control framework using mean-field game (MFG) theory with multiplicative noise in the dynamics. Our approach effectively tackles the scalability challenges inherent in large-scale environments while countering both intelligent adversarial disturbances and operational uncertainties. By designing resilient and robust decentralized controllers, we ensure system stability and convergence, even under worst-case disturbance inputs. We prove that the mean-field approximation accurately captures the system's collective behavior, and the proposed decentralized controllers achieve an ε-Nash equilibrium. Numerical experiments, inspired by the Ukraine power grid attack, demonstrate the effectiveness of the proposed control strategy.
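The abstract does not reproduce the model, so the equations below only indicate the generic problem class: linear agent dynamics with mean-field coupling, an adversarial disturbance, and multiplicative noise, with each agent seeking a decentralized control that is robust to the worst-case disturbance. All symbols are generic placeholders rather than the paper's formulation.

```latex
% Generic problem class only (not the paper's exact model)
\begin{aligned}
dx_i(t) &= \bigl[A\,x_i(t) + B\,u_i(t) + E\,w_i(t) + G\,\bar{x}(t)\bigr]\,dt
          + \bigl[C\,x_i(t) + D\,u_i(t)\bigr]\,dW_i(t), \\
\bar{x}(t) &= \lim_{N\to\infty}\frac{1}{N}\sum_{j=1}^{N} x_j(t),
\qquad i = 1,\dots,N,
\end{aligned}
```

where $u_i$ is agent $i$'s control, $w_i$ the adversarial disturbance, and $W_i$ a standard Wiener process; the decentralized strategies obtained from the limiting mean-field problem then constitute an ε-Nash equilibrium of the finite-population game as $N$ grows.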
- New
- Research Article
- 10.1177/02783649251379517
- Oct 16, 2025
- The International Journal of Robotics Research
- Yan-Shuo Li + 1 more
The multi-robot search problem is challenging since it involves task allocation, minimal routing, and maximal coverage problems, which are NP-hard. To solve this problem with theoretical guarantees, it is reformulated as a maximal coverage problem subject to the intersection of matroid constraints. The coverage problem is solved by utilizing its submodularity. Additionally, the workload balance is considered to enhance search efficiency. The intersection matroid is composed of a routing constraint and a clustering constraint. The proposed algorithm, Multi-Robot Search with Matroid constraints (MRSM), achieves (1/3) ÕPT, where ÕPT is the optimal performance under spanning-tree structures. Furthermore, Dynamic MRSM (D-MRSM) and MRSM with Hexagonal Packing (MRSM-Hex) are proposed for unknown and large-scale environments, respectively. The experimental results show that the MRSM approaches outperform state-of-the-art methods in terms of expected time to detection in multi-robot search problems and scale effectively for large search spaces.
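For context, the classical greedy algorithm for monotone submodular maximization subject to the intersection of k matroids achieves a 1/(k+1) approximation, which is where a (1/3)-type bound for two matroids comes from. The sketch below implements that generic greedy with two hypothetical independence oracles and a toy coverage objective; it is not the MRSM algorithm itself.

```python
def greedy_matroid_intersection(candidates, coverage, independent_routing,
                                independent_clustering):
    """Generic greedy for monotone submodular maximization subject to the
    intersection of two matroids (here: hypothetical routing and clustering
    oracles). coverage(S) -> float must be monotone submodular;
    independent_*(S) -> bool must be matroid independence oracles."""
    selected = []
    remaining = list(candidates)
    while True:
        best, best_gain = None, 0.0
        for c in remaining:
            trial = selected + [c]
            if not (independent_routing(trial) and independent_clustering(trial)):
                continue
            gain = coverage(trial) - coverage(selected)
            if gain > best_gain:
                best, best_gain = c, gain
        if best is None:
            return selected
        selected.append(best)
        remaining.remove(best)

# Toy example: candidates are (robot, cell-set) assignments; coverage counts
# distinct cells covered; each robot may take at most 2 assignments (a
# partition matroid) and the whole plan at most 3 (a uniform matroid).
cands = [("r1", frozenset({1, 2})), ("r1", frozenset({2, 3})),
         ("r2", frozenset({3, 4})), ("r2", frozenset({5}))]
cov = lambda S: len(set().union(*[c for _, c in S])) if S else 0
per_robot = lambda S: all(sum(r == x for x, _ in S) <= 2 for r, _ in S)
budget = lambda S: len(S) <= 3
print(greedy_matroid_intersection(cands, cov, per_robot, budget))
```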
- New
- Research Article
- 10.1051/0004-6361/202554872
- Oct 15, 2025
- Astronomy & Astrophysics
- Gourab Giri + 6 more
The persistence of radiative signatures in giant radio galaxies (GRGs ≳ 700 kpc) remains a frontier topic of research, with contemporary telescopes revealing intricate features that require investigation. This study aims to examine the emission characteristics of simulated GRGs, and correlate them with their underlying three-dimensional dynamical properties. Sky-projected continuum and polarization maps at 1 GHz were computed from five 3D relativistic magnetohydrodynamical (RMHD) simulations by integrating the synthesized emissivity data along the line of sight, with the integration path chosen to reflect the GRG evolution in the sky plane. The emissivities were derived from these RMHD simulations, featuring FR-I and FR-II jets injected at different locations of the large-scale environment and with propagation along varying jet frustration paths. Morphologies, such as widened lobes from low-power jets and collimated flows from high-power jets, are strongly shaped by the triaxiality of the environment, resulting in features such as wings and asymmetric cocoons, thereby making morphology a crucial indicator of GRG formation mechanisms. The decollimation of the bulk flow in GRG jets gives rise to intricate cocoon features, most notably filamentary structures—magnetically dominated threads with lifespans of a few megayears. High jet power cases frequently display enhanced emission zones at mid-cocoon distances (alongside warm spots around the jet head), contradicting the interpretations of the GRG as a restarting source. In such cases, examining the lateral intensity variation of the cocoon may reveal the source's state, with a gradual decrease in emission suggesting a low active stage. This study highlights that applying a simple radio power–jet power relation to a statistical GRG sample is unfeasible, as it depends on growth conditions of individual GRGs. Effects such as inverse-Compton cooling due to cosmic microwave background photons and matter entrainment significantly impact the long-term emission persistence of GRGs. The diminishing fractional polarization with GRG evolution reflects increasing turbulence, underscoring the importance of modeling this characteristic further, particularly for even larger-scale sources.
- New
- Research Article
- 10.7717/peerj-cs.3028
- Oct 14, 2025
- PeerJ Computer Science
- Ramamoorthy Sriramulu + 2 more
This article introduces a hybrid approach to enhance indoor pathfinding and navigation within complex multistory environments by integrating rapidly-exploring random tree (RRT)-Connect and Dijkstra’s algorithm. We propose a novel solution leveraging the strengths of RRT-Connect for rapid path generation, combined with Dijkstra’s algorithm for refining and optimizing the final route. This pairing of RRT-Connect's rapid exploration with Dijkstra-based refinement explores fewer nodes than Lazy Theta* while maintaining efficiency. Experimental results demonstrate that our hybrid approach significantly reduces computational overhead, with RRT-Connect exploring approximately 1,750 nodes—outperforming RRT (2,000 nodes), RRT* (1,850 nodes), and Dijkstra (1,780 nodes). The algorithm achieves up to 50% faster execution in narrow spaces compared to traditional RRT, making it well-suited for real-time navigation. Additionally, parallel processing optimizes performance, ensuring efficient pathfinding in dynamic environments. A Next.js-based frontend visualization system further enhances usability by rendering path nodes in real time. This hybrid approach balances rapid exploration, optimal path computation, and computational efficiency, making it a robust solution for indoor navigation in large-scale and complex environments.
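The sketch below covers only the refinement stage suggested by the abstract: given waypoints assumed to come from an RRT-Connect run and a collision predicate, it connects mutually visible waypoints and extracts the shortest sequence with Dijkstra. The waypoint list, the predicate, and the trivial example are hypothetical; the RRT-Connect tree construction and the parallelization are omitted.

```python
import heapq, math

def refine_with_dijkstra(waypoints, segment_free):
    """Refinement-stage sketch: given waypoints from an (assumed) RRT-Connect
    run and a collision predicate segment_free(a, b) -> bool, connect every
    pair of mutually visible waypoints and return the shortest waypoint
    sequence from the first to the last waypoint via Dijkstra. Assumes the
    goal waypoint is reachable."""
    n = len(waypoints)
    dist = lambda a, b: math.dist(waypoints[a], waypoints[b])
    adj = {i: [] for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if segment_free(waypoints[i], waypoints[j]):
                adj[i].append((j, dist(i, j)))
                adj[j].append((i, dist(i, j)))
    # standard Dijkstra from waypoint 0 to waypoint n-1
    best, prev, heap = {0: 0.0}, {}, [(0.0, 0)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == n - 1:
            break
        if d > best.get(u, math.inf):
            continue
        for v, w in adj[u]:
            if d + w < best.get(v, math.inf):
                best[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    path, node = [], n - 1
    while node != 0:
        path.append(node)
        node = prev[node]
    return [waypoints[0]] + [waypoints[k] for k in reversed(path)]

# Example with a trivial free-space check (no obstacles): the zig-zag
# RRT-style path collapses to a straight start-goal segment.
wps = [(0, 0), (1, 2), (2, 0), (3, 2), (4, 0)]
print(refine_with_dijkstra(wps, lambda a, b: True))  # [(0, 0), (4, 0)]
```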
- New
- Research Article
- 10.1007/s00170-025-16739-6
- Oct 11, 2025
- The International Journal of Advanced Manufacturing Technology
- Álvaro Sáinz De La Maza García + 2 more
Abstract Large-scale machining centres play a critical role in aerospace manufacturing, where tight tolerances are required for components made from materials with limited machinability. During operation, ambient temperature fluctuations—combined with heat generated by motors, guides, and moving axes—induce thermal expansion or contraction of structural elements. These deformations lead to deviations of the cutting tool from its nominal position, resulting in volumetric inaccuracies. While thermal effects are present in all machine tools, large machines are especially prone to asymmetric, spatially localised deformations, which complicate compensation strategies. This study proposes an experimentally validated methodology to isolate and quantify the thermal influence of individual heat sources on volumetric errors in a large five-axis machining centre. The method integrates a dense in-situ sensor network (comprising IDS and thermocouples) with targeted heating sequences and rapid artefact-based validation. The results identify critical sensors and optimal placement for error estimation, validate the feasibility of linear superposition under multi-axis heating, and highlight asymmetric deformation effects. The approach enables efficient model simplification and offers practical guidance for thermal compensation in large-scale industrial machining environments.
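The linear-superposition idea validated here can be illustrated with a small least-squares sketch: fit the deviation at one checkpoint as a linear combination of per-sensor temperature rises measured during single-source heating runs, then predict a multi-source case by summing the fitted contributions. All numbers below are synthetic assumptions, not measurements from the machining centre.

```python
import numpy as np

# Model the tool-tip deviation e (mm) at one checkpoint as a linear
# combination of temperature rises dT measured by a few key sensors, fit the
# coefficients from single-source heating runs, then predict a combined case
# by superposition.
rng = np.random.default_rng(1)
true_coeff = np.array([0.012, -0.004, 0.007])       # mm per kelvin, hypothetical

dT_runs = rng.uniform(0, 8, size=(12, 3))           # 12 heating runs, 3 sensors
e_runs = dT_runs @ true_coeff + rng.normal(0, 0.002, 12)

coeff, *_ = np.linalg.lstsq(dT_runs, e_runs, rcond=None)

dT_combined = np.array([5.0, 3.0, 6.0])             # simultaneous heating case
print("predicted deviation:", dT_combined @ coeff)  # sum of single-source terms
```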
- Research Article
- 10.3390/electronics14193959
- Oct 8, 2025
- Electronics
- Ranran Wei + 1 more
Information announcement is the process of propagating and synchronizing the information of Computing Resource Nodes (CRNs) within Computing Network systems. Accurate and timely acquisition of information is crucial to ensuring the efficiency and quality of subsequent task scheduling. However, existing announcement mechanisms primarily focus on reducing communication overhead, often neglecting the direct impact of information freshness on scheduling accuracy and service quality. To address this issue, this paper proposes a hierarchical and clustering-based announcement mechanism for wide-area Computing Networks. The mechanism first categorizes the Computing Network Nodes (CNNs) into different layers based on the type of CRNs they connect to, and a top-down cross-layer announcement strategy is introduced during this process; within each layer, CNNs are further divided into several domains according to the round-trip time (RTT) to each other; and in each domain, inspired by the “Six Degrees of Separation” concept from social propagation, an RTT-aware fast clustering algorithm, Canopy, is employed to partition CNNs into multiple overlapping clusters. Intra-cluster announcements are modeled as a Traveling Salesman Problem (TSP) and optimized to accelerate updates, while inter-cluster propagation leverages overlapping nodes for global dissemination. Experimental results demonstrate that, by exploiting shortest path optimization within clusters and overlapping-node-based inter-cluster transmission, the mechanism significantly outperforms the comparison scheme on key indicators such as convergence time, Age of Information (AoI), and communication data volume per hop. The mechanism exhibits strong scalability and adaptability in large-scale network environments, providing robust support for efficient and rapid information synchronization in Computing Networks.
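Canopy clustering is a standard cheap pre-clustering technique; the sketch below runs it over a hypothetical RTT matrix with a loose threshold T1 (membership, allowing overlap) and a tight threshold T2 (removal from the candidate pool). The thresholds, the matrix, and the deterministic center choice are illustrative and do not reproduce the paper's RTT-aware variant or the TSP-based intra-cluster ordering.

```python
def canopy_clusters(rtt, t1=50.0, t2=20.0):
    """Minimal Canopy clustering over a symmetric RTT matrix (ms).

    Nodes within loose threshold t1 of a chosen center join its canopy (so
    canopies may overlap); nodes within tight threshold t2 are removed from
    the candidate pool.
    """
    candidates = set(range(len(rtt)))
    clusters = []
    while candidates:
        center = next(iter(candidates))          # deterministic pick for the sketch
        canopy = {j for j in range(len(rtt)) if rtt[center][j] <= t1}
        clusters.append(canopy)
        candidates -= {j for j in candidates if rtt[center][j] <= t2}
    return clusters

rtt = [[0, 10, 60, 65],
       [10, 0, 45, 70],
       [60, 45, 0, 15],
       [65, 70, 15, 0]]
print(canopy_clusters(rtt))   # [{0, 1}, {1, 2, 3}]: canopies overlap at CNN 1
```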
- Research Article
- 10.3390/app151910758
- Oct 6, 2025
- Applied Sciences
- Youngoh Kwon + 2 more
In large-scale e-commerce, recommendation systems must overcome the shortcomings of conventional models, which often struggle to convert user interest into purchases. This study proposes a revenue-driven recommendation approach that explicitly incorporates user price sensitivity. It introduces a hybrid recommendation engine that combines collaborative filtering (CF), best match 25 (BM25) for textual relevance, and a price-similarity algorithm. The system is deployed within a scalable three-tier architecture using Elasticsearch and Redis to maintain stability under high-traffic conditions. The system’s performance was evaluated through a large-scale A/B test against both a CF-only model and a popular-item baseline. Results showed that while the CF-only model reduced revenue by 5.10%, our hybrid system increased revenue by 5.55% and improved click-through rate (CTR) by 2.55%. These findings demonstrate that integrating price similarity is an effective strategy for developing commercially viable recommendation systems that enhance both user engagement and revenue growth on large online platforms.
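The exact weighting of the three signals is not given in the abstract; the sketch below shows one plausible blend in which pre-normalized CF and BM25 scores are combined with a Gaussian price-similarity kernel on the log-price distance to a user's typical purchase price. The weights, the kernel, and the parameter values are assumptions, not the published formulation.

```python
import math

def hybrid_score(cf_score, bm25_score, item_price, anchor_price,
                 w_cf=0.5, w_text=0.3, w_price=0.2, sigma=0.35):
    """Hedged sketch of a score blend in the spirit of the paper's hybrid
    engine. 'cf_score' and 'bm25_score' are assumed pre-normalized to [0, 1];
    'anchor_price' stands for the user's typical purchase price."""
    # price similarity: 1.0 when prices match, decaying with log-price distance
    price_sim = math.exp(-((math.log(item_price) - math.log(anchor_price)) ** 2)
                         / (2 * sigma ** 2))
    return w_cf * cf_score + w_text * bm25_score + w_price * price_sim

# A similarly priced item scores higher than an expensive one for the same
# CF and textual relevance.
print(hybrid_score(cf_score=0.7, bm25_score=0.6, item_price=19.0, anchor_price=20.0))
print(hybrid_score(cf_score=0.7, bm25_score=0.6, item_price=90.0, anchor_price=20.0))
```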
- Research Article
- 10.1007/s00382-025-07879-2
- Oct 1, 2025
- Climate Dynamics
- K N Uma + 1 more
Characteristics of monsoon convection and its interaction with the large-scale environment over the gateway of the Indian summer monsoon: insights from radar observations and reanalysis
- Research Article
- 10.23939/ictee2025.02.083
- Oct 1, 2025
- Information and communication technologies, electronic engineering
- V Solohub + 1 more
In today’s environment of rapidly growing data volumes, information systems must not only provide storage and access to massive datasets but also maintain stable performance when handling diverse query types. A critical challenge lies in balancing the efficiency of analytical (OLAP) operations with the responsiveness of transactional (OLTP) processes. Traditional approaches to data organization in relational DBMSs often lose effectiveness in large-scale environments, resulting in longer processing times, reduced flexibility, and greater complexity in database management. This underscores the importance of developing new partitioning optimization methods capable of ensuring both high performance and scalability in hybrid information systems. This article investigates existing data partitioning methods in information systems designed to handle large volumes of structured information while simultaneously serving OLAP and OLTP workloads. The mechanisms of table partitioning in modern DBMSs are analyzed, and the strengths and limitations of each approach are identified with respect to performance, scalability, and ease of data management. A combined partitioning method (range + list) is proposed, tailored for hybrid information systems that concurrently process analytical and transactional workloads. Unlike traditional approaches, the study not only applies combined partitioning to analytical tasks but also comprehensively evaluates its impact on query performance and transaction processing speed. The results demonstrate that the developed method achieves a balance between OLAP and OLTP performance, enhances scalability and flexibility of information systems, and can be considered a universal approach to managing large-scale data. To conduct the study, a unified simulation model of data processing was built using a star schema with a fact sales table, supporting both business analytics queries and transactional CRUD operations. Experimental findings confirm that the combined partitioning approach reduces analytical query execution time by 30–40% without significant degradation of CRUD performance, making it an effective tool for improving the performance of large-scale information systems.
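As a small illustration of the combined (range + list) idea, the sketch below routes rows of a hypothetical fact sales table to a partition chosen first by a date range (quarter) and then by a region list; in a real DBMS this mapping would be declared as partitioning DDL and exploited by the planner for partition pruning in both OLAP scans and OLTP lookups. All partition names, bounds, and regions are assumptions.

```python
from datetime import date

# First-level range partitioning on sale_date (by quarter), then list
# sub-partitioning on region. Hypothetical layout for a fact sales table.
RANGE_BOUNDS = [(date(2025, 1, 1), date(2025, 4, 1), "q1"),
                (date(2025, 4, 1), date(2025, 7, 1), "q2"),
                (date(2025, 7, 1), date(2025, 10, 1), "q3"),
                (date(2025, 10, 1), date(2026, 1, 1), "q4")]
REGION_LISTS = {"emea": {"DE", "FR", "UA"}, "apac": {"JP", "KR"}, "amer": {"US", "BR"}}

def route_partition(sale_date, region):
    """Return the (hypothetical) partition name for one fact row."""
    for lo, hi, range_part in RANGE_BOUNDS:
        if lo <= sale_date < hi:
            for list_part, regions in REGION_LISTS.items():
                if region in regions:
                    return f"sales_{range_part}_{list_part}"
            return f"sales_{range_part}_default"
    raise ValueError("sale_date outside declared ranges")

print(route_partition(date(2025, 5, 17), "UA"))   # sales_q2_emea
```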
- Research Article
- 10.3390/s25196050
- Oct 1, 2025
- Sensors (Basel, Switzerland)
- Jeongmin Kang
Accurate and reliable vehicle localization is essential for autonomous driving in complex outdoor environments. Traditional feature-based visual-inertial odometry (VIO) suffers from sparse features and sensitivity to illumination, limiting robustness in outdoor scenes. Deep learning-based optical flow offers dense and illumination-robust motion cues. However, existing methods rely on simple bidirectional consistency checks that yield unreliable flow in low-texture or ambiguous regions. Global navigation satellite system (GNSS) measurements can complement VIO, but often degrade in urban areas due to multipath interference. This paper proposes a multi-sensor fusion system that integrates monocular VIO with GNSS measurements to achieve robust and drift-free localization. The proposed approach employs a hybrid VIO framework that utilizes a deep learning-based optical flow network, with an enhanced consistency constraint that incorporates local structure and motion coherence to extract robust flow measurements. The extracted optical flow serves as visual measurements, which are then fused with inertial measurements to improve localization accuracy. GNSS updates further enhance global localization stability by mitigating long-term drift. The proposed method is evaluated on the publicly available KITTI dataset. Extensive experiments demonstrate its superior localization performance compared to previous similar methods. The results show that the filter-based multi-sensor fusion framework with optical flow refined by the enhanced consistency constraint ensures accurate and reliable localization in large-scale outdoor environments.
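The paper's enhanced consistency constraint is not specified in the abstract; the sketch below shows only a baseline it builds on, a forward-backward flow consistency check, plus a crude local motion-coherence gate based on the variance of the flow magnitude in a 3x3 window. The thresholds and the coherence measure are illustrative assumptions.

```python
import numpy as np

def reliable_flow_mask(flow_fwd, flow_bwd, fb_thresh=1.0, coh_thresh=2.0):
    """Sketch of a flow-reliability mask: a standard forward-backward check
    plus a local motion-coherence gate (variance of flow magnitude in a 3x3
    neighbourhood). flow_* have shape (H, W, 2) in pixels."""
    h, w = flow_fwd.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    # look up the backward flow at the forward-flow target locations
    xt = np.clip(np.rint(xs + flow_fwd[..., 0]).astype(int), 0, w - 1)
    yt = np.clip(np.rint(ys + flow_fwd[..., 1]).astype(int), 0, h - 1)
    fb_error = np.linalg.norm(flow_fwd + flow_bwd[yt, xt], axis=-1)

    # local motion coherence: variance of flow magnitude in a 3x3 window
    mag = np.linalg.norm(flow_fwd, axis=-1)
    pad = np.pad(mag, 1, mode="edge")
    windows = np.stack([pad[dy:dy + h, dx:dx + w]
                        for dy in range(3) for dx in range(3)], axis=0)
    coherence = windows.var(axis=0)

    return (fb_error < fb_thresh) & (coherence < coh_thresh)

# Synthetic check: a constant 2-px rightward flow is self-consistent everywhere.
f = np.zeros((4, 6, 2)); f[..., 0] = 2.0
b = -f
print(reliable_flow_mask(f, b).all())   # True
```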
- Research Article
- 10.1016/j.vacuum.2025.114488
- Oct 1, 2025
- Vacuum
- Lifang Li + 4 more
Development and characterization of a large-scale high-vacuum environment simulation device with ten-kilogram-scale micron-sized lunar dust