Articles published on Curse Of Dimensionality
5097 Search results
- New
- Research Article
- 10.1142/s2301385027500701
- Feb 6, 2026
- Unmanned Systems
- Pingping Qu + 7 more
Efficient resource allocation for unmanned aerial vehicle (UAV) swarms is a critical challenge, complicated by severe interference between UAV-to-UAV (U2U) and UAV-to-infrastructure (U2I) communications. Traditional Multi-Agent Reinforcement Learning (MARL) methods often prove insufficient in this domain due to two fundamental limitations: the policy sacrifice phenomenon, wherein uncoordinated agent competition leads to suboptimal outcomes, and the curse of dimensionality, which impedes effective learning in large swarms. To address these limitations, this paper proposes the Attention-based and Dynamic Gateway Multi-Agent Soft Actor-Critic (ADG-MASAC), a novel MARL framework. Our approach integrates a dynamic gateway mechanism to convert chaotic competition into structured collaboration via dynamic role assignment and an attention-based critic to enable precise perception of high-dimensional global states. Experimental results demonstrate that ADG-MASAC not only resolves the policy sacrifice issue but also achieves substantial performance gains in both U2U and U2I communications. Ablation studies further confirm that the synergy between these two mechanisms is essential for the algorithm’s success.
- New
- Research Article
- 10.1038/s41598-026-38020-w
- Feb 4, 2026
- Scientific reports
- Peng Gao + 3 more
Ensuring information security heavily relies on high-quality random sequences for encryption keys. Physical entropy sources, despite their use in generating true random sequences, are susceptible to environmental disturbances, necessitating real-time randomness testing to maintain high entropy. However, existing methods for generating test data for real-time randomness testers face significant challenges, including producing sequences that fail to meet specific randomness criteria, constructing borderline sequences with slight non-randomness, and addressing the difficulty of simultaneously violating multiple randomness criteria. This paper introduces a dynamic test data generation framework designed to address these challenges. The framework leverages an evolutionary algorithm (EA) to transform the generation of borderline sequences into a multi-constrained optimization problem, where a large language model (LLM) acts as a dynamic parameter adjuster. By analyzing evolutionary trends in population statistics and interacting with evolutionary dynamics through a game-theoretic mechanism, the LLM adaptively adjusts scaling factors and weight coefficients, mitigating the curse of dimensionality in multi-objective optimization and enabling real-time parameter tuning. The experimental results also highlight the high quality of the generated sequences: our approach can generate borderline test data that slightly fail to satisfy the target randomness criteria, yet exhibit statistical properties very similar to those of high-entropy sources under standard test suites. These borderline sequences are fault-detectable and provide challenging, realistic test inputs for classical statistical-test-based real-time randomness testers.
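The EA-with-adaptive-parameters idea above can be sketched with the LLM replaced by a plain stub callback that inspects population statistics and returns a new scaling factor. Every name, parameter, and constant below is illustrative, not the paper's framework:

```python
import numpy as np

def evolve(fitness, adjuster, dim=8, pop=20, gens=30, seed=0):
    """Differential-evolution-style loop with a pluggable parameter adjuster.

    `adjuster` stands in for the paper's LLM: it sees summary statistics of
    the current population's scores and returns an updated scaling factor F.
    """
    rng = np.random.default_rng(seed)
    P = rng.standard_normal((pop, dim))   # initial population
    F = 0.5                               # initial scaling factor
    for _ in range(gens):
        scores = np.array([fitness(x) for x in P])
        # "LLM" adapts F from population statistics (stubbed here)
        F = adjuster({"mean": scores.mean(), "std": scores.std(), "F": F})
        for i in range(pop):
            a, b, c = P[rng.choice(pop, 3, replace=False)]
            trial = a + F * (b - c)       # mutation with the adapted F
            if fitness(trial) < scores[i]:
                P[i] = trial              # greedy replacement
    return P[np.argmin([fitness(x) for x in P])]

# Stub adjuster: widen exploration when the population has nearly converged.
best = evolve(lambda x: float(np.sum(x**2)),
              lambda s: 0.9 if s["std"] < 1e-3 else 0.5)
```

The real system replaces the stub with LLM-driven analysis of evolutionary trends; the loop structure is otherwise a generic population-based optimizer.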
- New
- Research Article
- 10.1145/3795797
- Feb 4, 2026
- ACM Transactions on Design Automation of Electronic Systems
- Gongteng Xiao + 7 more
As semiconductor technology nodes advance, the number of parameters in modern device compact models (DCMs) increases drastically. Manual extraction of these model parameters becomes not only tedious but practically impossible, and automatic methods are strongly desired. Traditional black-box optimization suffers from poor scalability due to the curse of dimensionality, while deep learning–based methods typically require large amounts of training data. To address these challenges, we propose CDRPE: a combined deep learning and self-attention enhanced reinforcement learning framework for automatically extracting a large set of DCM parameters across multiple electrical characteristics. The framework leverages a pre-trained multilayer perceptron to initialize core parameters, incorporates device physics knowledge to guide the search, and employs a self-attention–enhanced RL agent for efficient exploration in high-dimensional parameter spaces. Experimental results on BSIM4, BSIMSOI, and BSIMCMG demonstrate that CDRPE can automatically extract 100 parameters with root-mean-square error below 5% relative to TCAD and silicon data. Compared with existing methods, the proposed framework achieves a 7.7× speedup. Moreover, the generated models show good convergence in both digital and analog circuit simulations, exhibiting the potential of this framework for future practical applications.
- New
- Research Article
- 10.1063/5.0303986
- Feb 1, 2026
- APL Photonics
- Zi-Mo Cheng + 7 more
Photons with a spiral phase will carry orbital angular momentum (OAM), which can serve as a valuable resource for constructing high-dimensional Hilbert spaces due to its orthogonality and unbounded dimensionality. However, the fast measurement of high-dimensional OAM spectra remains a challenge. While intensity detection is relatively straightforward, full characterization requires phase retrieval. Although quantum state tomography allows full reconstruction, it demands a number of measurements scaling quadratically with the dimension d (i.e., as d²), leading to a “curse of dimensionality” as d increases. Thus, there is an urgent need for efficient and simple methods for extracting phase information from high-dimensional OAM spectra. Here, we propose a generalized Poincaré sphere model with analog Stokes-like parameters, enabling full phase measurement of a d-dimensional OAM spectrum using only 4d measurements, significantly reducing the data acquisition requirement. We experimentally validated the method by performing intensity and phase measurements on 8-, 16-, and 32-dimensional OAM comb spectra. The results show excellent agreement with theoretical predictions, with measured fidelities ranging from 0.9829 to 0.9965. This approach facilitates an efficient characterization of high-dimensional OAM spectra and contributes to advancing their practical applications.
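As a back-of-the-envelope illustration of the scaling gap described above (a sketch of the measurement counts only, not the paper's protocol):

```python
# Full state tomography needs O(d^2) measurements; the generalized
# Poincare-sphere scheme needs only 4*d for a d-dimensional OAM spectrum.
def measurement_counts(d):
    return d * d, 4 * d

for d in (8, 16, 32):            # the dimensions tested experimentally
    tomo, proposed = measurement_counts(d)
    print(f"d={d}: tomography ~{tomo} measurements, proposed scheme {proposed}")
```

For d = 32 this is 1024 versus 128 measurements, an 8-fold reduction that grows linearly with d.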
- New
- Research Article
- 10.3390/jmse14030278
- Jan 29, 2026
- Journal of Marine Science and Engineering
- Gang Yao + 4 more
In the event of a fault in a shipboard medium-voltage direct-current (MVDC) power system, a fault reconfiguration method issues control commands to the switchgear to execute switching actions, thereby redistributing power flow, isolating the faulted zone, and restoring power to the de-energized loads. Existing fault reconfiguration strategies mainly use classical optimization methods. These methods are usually centralized, and as the system scale increases, they suffer from the curse of dimensionality, which degrades real-time performance and reduces computational efficiency. This paper proposes a multi-agent deep reinforcement learning (MADRL)-based fault reconfiguration method for shipboard MVDC power systems. The proposed method considers load priority levels, maximizes total restored load, and improves load balancing. To this end, a QMIX-based method, Dependency-Corrected QMIX with Action Masking (Dep-QMIX-Mask), was developed, introducing a dependency correction mechanism to handle action dependencies during decentralized execution and applying action masking to rule out invalid switching actions under operational constraints. A shipboard MVDC power system model was established and used for validation. Across three representative fault cases, Dep-QMIX-Mask achieves served load rates (SLRs) of 0.88, 0.67, and 0.43, with SLR improvements of up to 19.6% over baseline methods. It consistently produces feasible switching sequences in all 20 independent runs per case, improving feasibility by 10 to 30 percentage points. In addition, Dep-QMIX-Mask improves zonal load balancing by reducing the PUR variance by 40.5% to 99.2% compared with baseline methods. These results indicate that Dep-QMIX-Mask can generate feasible sequential reconfiguration strategies while improving both load restoration and load balancing.
- New
- Research Article
- 10.1088/2058-9565/ae3f4d
- Jan 29, 2026
- Quantum Science and Technology
- Youle Wang + 3 more
Simulating quantum dynamics to extract time-evolving observables constitutes a central challenge in quantum computing, with both fundamental significance and broad practical applications. Classical approaches suffer from the exponential scaling of Hilbert space, while existing quantum algorithms face limitations from deep circuits and sequential error accumulation on near-term devices. This work introduces a physics-informed quantum subspace (PIQS) method for the efficient estimation of dynamical properties of quantum systems. The core innovation is a globally physics-informed loss function that incorporates the time-dependent Schrödinger equation as a physics-based penalty. This enforces quantum evolution constraints directly during optimization, thereby circumventing the error accumulation inherent in stepwise simulations.
By strategically relaxing the normalization constraint, we obtain convexified loss functions whose optimization reduces to solving a linear system, guaranteeing global convergence and significantly mitigating the convergence issues and barren plateaus common in variational quantum algorithms. Theoretically, we prove that under suitable conditions the true dynamical solution can be approximated with high accuracy within a subspace whose dimension scales only as $\mathcal{O}(T\log(1/\varepsilon))$, thus breaking the curse of dimensionality in classical simulation. Numerical experiments demonstrate that the proposed method outperforms conventional Trotterization and variational quantum benchmarks in terms of computational cost, convergence speed, and robustness against measurement noise, offering a viable and efficient pathway for practical dynamical simulation on noisy intermediate-scale quantum hardware.
- New
- Research Article
- 10.1088/2058-9565/ae3e3b
- Jan 27, 2026
- Quantum Science and Technology
- Nikita Guseynov + 2 more
We propose an explicit quantum framework for numerically simulating general linear partial differential equations (PDEs), extending previous work of Guseynov et al. to incorporate (a) Robin boundary conditions—which include Neumann and Dirichlet conditions as special cases—(b) inhomogeneous terms, and (c) variable coefficients in space and time. Our approach begins with a general finite-difference discretization and applies the Schrödingerisation technique to transform the resulting system into one that admits unitary quantum evolution, enabling quantum simulation.
For the Schrödinger equation corresponding to the discretized PDE, we construct an efficient block-encoding of the Hamiltonian $H$ that scales polylogarithmically with the number of grid points $N$. This encoding is compatible with quantum signal processing and allows for the implementation of the evolution operator $e^{-iHt}$. The explicit circuit construction in our method permits complexity to be measured in fundamental gate units—namely, CNOT gates and single-qubit rotations—bypassing the inefficiencies of oracle queries. Consequently, the overall algorithm scales polynomially with $N$ and linearly with the spatial dimension $d$. Under certain input/output assumptions our method achieves a polynomial speedup in $N$ and an exponential advantage in $d$ for a wide class of PDEs, thereby mitigating the classical curse of dimensionality. The validity and efficiency of the proposed approach are further substantiated by numerical simulations.
By explicitly defining the quantum operations and quantifying their resource requirements, our approach offers a practical alternative for numerically solving PDEs, distinct from others that rely on oracle queries and purely asymptotic scaling methods.
- Research Article
- 10.1080/01621459.2026.2615850
- Jan 16, 2026
- Journal of the American Statistical Association
- Qixian Zhong + 2 more
Deep learning has become enormously popular in the analysis of complex data, including event time measurements with censoring. To date, deep survival methods have mainly focused on prediction. Such methods are scarcely used in matters of statistical inference such as hypothesis testing. Due to their black-box nature, deep-learned outcomes lack interpretability, which limits their use for decision-making in biomedical applications. This paper provides estimation and inference methods for the nonparametric Cox model – a flexible family of models with a nonparametric link function to avoid model misspecification. Here we assume the nonparametric link function is modeled via a deep neural network. To perform statistical inference, we utilize sample splitting and cross-fitting procedures to obtain neural network estimators and construct the test statistic. These procedures enable us to propose a new significance test to examine the association of certain covariates with event times. We establish convergence rates of the neural network estimators, and show that deep learning can overcome the curse of dimensionality in nonparametric regression by learning to exploit low-dimensional structures underlying the data. In addition, we show that our test statistic converges to a normal distribution under the null hypothesis and establish its consistency, in terms of the Type II error, under the alternative hypothesis. Numerical simulations and a real data application demonstrate the usefulness of the proposed test.
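The sample-splitting and cross-fitting step can be sketched generically: fit the nuisance model on one fold, evaluate the statistic on the held-out fold, and average across folds. The `fit` and `score` callables below are hypothetical placeholders, not the authors' neural-network estimator:

```python
import numpy as np

def cross_fit_statistic(X, y, fit, score, n_folds=2, seed=0):
    """Generic cross-fitting: the model that enters each fold's statistic
    is trained only on the other folds, avoiding overfitting bias."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    stats = []
    for k in range(n_folds):
        held_out = folds[k]
        train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
        model = fit(X[train], y[train])                # nuisance estimate
        stats.append(score(model, X[held_out], y[held_out]))
    return float(np.mean(stats))                       # aggregate over folds
```

In the paper the fitted object is a deep neural network for the nonparametric link function; here any estimator with a fit/score interface works.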
- Research Article
- 10.1002/bte2.70084
- Jan 15, 2026
- Battery Energy
- Fatemeh Ebrahimabadi + 2 more
Active battery balancing is essential for maximizing the performance and safety of lithium-ion battery packs in electric vehicles and energy storage systems, yet traditional control methods struggle with nonlinear dynamics. This paper investigates the critical role of state-space design in tabular Q-learning for controlling switches of a buck-boost converter in a four-cell pack, addressing a key gap in the application of reinforcement learning to battery management systems. We propose and compare three novel discrete state representations: a coarse 11-state pairwise comparison, an intermediate 27-state hierarchical relational model, and a fine-grained 81-state individual deviation model. Through simulations across 1000 training episodes and 24 test scenarios, the 27-state model achieves superior convergence, with an average balancing time of around 41 timesteps and the lowest performance variance (σ = 12.28). Statistical analysis and state-transition graphs reveal that this optimal granularity enables hierarchical control strategies, balancing informational richness with learnability to avoid perceptual aliasing and the curse of dimensionality. These findings provide a blueprint for designing efficient RL policies in BMS, which has implications for scalable and real-time implementations in high-voltage applications.
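A minimal sketch of what tabular Q-learning over such a discrete state space looks like. The toy 27-state encoding below (three cells, each binned into low/ok/high against the pack mean) is a stand-in for illustration, not the paper's hierarchical relational model:

```python
import numpy as np

def encode_state(soc, tol=0.01):
    """Map three cell-vs-mean SOC deviations to one of 3**3 = 27 states."""
    mean = float(np.mean(soc))
    bins = [0 if s < mean - tol else (2 if s > mean + tol else 1) for s in soc]
    return bins[0] * 9 + bins[1] * 3 + bins[2]

n_states, n_actions = 27, 4        # one action per balancing switch config
Q = np.zeros((n_states, n_actions))

def q_update(s, a, reward, s_next, alpha=0.1, gamma=0.95):
    """Standard one-step tabular Q-learning update."""
    Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
```

The table has only 27 × 4 entries, which is the point of coarse state design: a fine-grained encoding multiplies the table size and slows convergence.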
- Research Article
- 10.62762/tacs.2025.318429
- Jan 14, 2026
- ICCK Transactions on Advanced Computing and Systems
- Yuqi Lin
As a classical convex optimization problem in geometry, computing the maximum inscribed ball (MaxIB) in ultra-high-dimensional polytopes is critical for enabling real-time IoT applications, such as optimal deployment of sensor networks, where polytopes model physical constraints arising from obstacles or coverage boundaries. However, existing methods suffer from the curse of dimensionality, leading to prohibitive computational costs. This paper develops a more efficient approach for computing the (1 − ε)-approximate MaxIB in high-dimensional polytopes. To address these challenges, the problem is reformulated with adaptive penalty parameters to enforce strong convexity, enabling linear convergence under the Pairwise Frank–Wolfe (PFW) algorithm. Furthermore, expensive exact line searches are replaced with a backtracking strategy, significantly reducing the per-iteration computational cost. Simulation results demonstrate more than a 12-fold acceleration over existing approximate MaxIB methods without sacrificing accuracy.
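The backtracking replacement for exact line search can be sketched as a standard Armijo rule: shrink the candidate step until a sufficient-decrease condition holds. Constants and names here are illustrative, not the paper's exact rule:

```python
import numpy as np

def backtracking_step(f, grad_fx, x, d, gamma_max=1.0, beta=0.5, c=1e-4):
    """Armijo backtracking along direction d, starting from the largest
    feasible step gamma_max and halving until sufficient decrease holds."""
    fx = f(x)
    slope = float(grad_fx @ d)         # directional derivative along d
    gamma = gamma_max
    while f(x + gamma * d) > fx + c * gamma * slope:
        gamma *= beta                  # shrink the step on failure
    return gamma
```

Each trial costs one function evaluation, versus the repeated evaluations (or a closed-form solve) an exact line search needs per iteration; this is where the per-iteration savings come from.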
- Research Article
- 10.1016/j.biosystems.2026.105704
- Jan 8, 2026
- Bio Systems
- Ian Todd
Intelligence as high-dimensional coherence: The observable dimensionality bound and computational tractability.
- Research Article
- 10.1111/exsy.70188
- Jan 5, 2026
- Expert Systems
- J Guzmán Figueira‐Domínguez + 2 more
As benchmark image datasets expand in sample size and feature complexity, the challenge of managing increased dimensionality becomes apparent. Contrary to the expectation that more features equate to enhanced information and improved outcomes, the curse of dimensionality often hampers performance. This paper reviews existing literature on filter feature selection techniques applied to image features, highlighting their use in both classical and deep-learning-based feature extraction methods. Building on these findings, this study proposes a scalable approach for image feature extraction and selection using Big Data technologies, specifically Apache Spark, to efficiently process large and high-dimensional datasets. The proposed framework integrates filter-based feature selection methods within a distributed environment to evaluate their effectiveness in image analysis tasks. Several experiments were performed to compare the results using feature selection techniques with various reduction percentages. Results show that significant feature reduction can be achieved without compromising classification accuracy, demonstrating the potential of Spark-based distributed processing for large-scale image analytics.
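A minimal single-machine sketch of a filter criterion (variance ranking here, chosen for brevity; the paper evaluates several filter methods inside a distributed Spark pipeline):

```python
import numpy as np

def filter_select(X, keep_ratio=0.5):
    """Filter-based feature selection: score each feature independently of
    any classifier (here by variance) and keep the top fraction."""
    variances = X.var(axis=0)
    k = max(1, int(keep_ratio * X.shape[1]))
    keep = np.argsort(variances)[::-1][:k]   # indices of top-scoring features
    return np.sort(keep)

X = np.array([[0.0, 1.0,  5.0],
              [0.0, 2.0, -5.0],
              [0.0, 3.0,  5.0]])
selected = filter_select(X, keep_ratio=0.67)  # column 0 is constant: dropped
```

Filter methods like this scale well precisely because each feature is scored independently, which is what makes them natural to parallelize across Spark partitions.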
- Research Article
- 10.1109/tcyb.2026.3655692
- Jan 1, 2026
- IEEE transactions on cybernetics
- Jin Wang + 3 more
In this article, cycle-time configuration is realized using max-plus algebra for a parallel processing system via a synchronous feedback controller. As a key efficiency metric of parallel processing systems, throughput is determined by cycle time, which is threatened by clock asynchrony and the curse of dimensionality. Using instruction dependency and weak linear independence, the parallel processing system is equivalent to a max-plus nonautonomous system to mitigate the curse of dimensionality caused by numerous processing tasks. Based on the max-plus nonautonomous system, the cycle-time configuration is achieved via a synchronous feedback controller while adhering to time restrictions of the parallel processing system. Numerical simulations validate the effectiveness of the proposed cycle-time configuration in parallel processing systems.
- Research Article
- 10.59717/j.xinn-inform.2025.100025
- Jan 1, 2026
- The Innovation Informatics
- Yuyang Wang + 1 more
The central challenge in materials science and quantum chemistry is solving the electronic Schrödinger equation, complicated by the curse of dimensionality as system size grows. Neural network-based variational Monte Carlo (NN-VMC) offers a promising path forward, achieving unprecedented accuracy at far lower cost than traditional high-level methods. However, the flexibility of neural network wavefunctions introduces a bottleneck: their optimization is high-dimensional, stochastic, and non-convex. This Perspective reviews the evolution of optimization methods in NN-VMC, from stochastic reconfiguration to approximate second-order algorithms and geometric insights. We highlight key challenges currently limiting scalability and efficiency, and outline future opportunities to advance the field. With continued progress in optimization, neural network techniques, and computer architectures, NN-VMC can tackle larger and more complex quantum systems and move from trailing experiments to guiding them.
- Research Article
- 10.3934/fods.2025008
- Jan 1, 2026
- Foundations of Data Science
- Yasuaki Hiraoka + 3 more
Curse of dimensionality on persistence diagrams
- Research Article
- 10.1016/j.saa.2025.126637
- Jan 1, 2026
- Spectrochimica acta. Part A, Molecular and biomolecular spectroscopy
- Jingran Luan + 6 more
Wavelength selection methods for NIR calibration in petrochemicals: Status, characteristics, and scenarios analysis.
- Research Article
- 10.12688/f1000research.173697.1
- Dec 31, 2025
- F1000Research
- Mushtaq K Abdalrahem + 2 more
The fundamental tasks of function approximation and numerical integration on the sphere from scattered data nodes present significant challenges, including the curse of dimensionality, the instability of high-degree polynomial methods, and the limitations of existing approaches that often require specially structured node sets. This paper introduces a novel framework that seamlessly integrates Weighted Least Squares (WLS) polynomial approximation, using spherical harmonics and Voronoi-based weights, with a consensus-based optimization strategy for decentralized computation, specifically designed for large, distributed datasets. Our results show that this method provides stable and accurate approximation even with high-degree polynomials on arbitrary, unstructured node sets, ensuring a well-conditioned system. It also leads to the construction of a novel quadrature rule with provably positive weights, guaranteeing stability for numerical integration. We derive rigorous theoretical error bounds that explicitly connect the accuracy of the method to the density of the node set and the polynomial degree. Extensive numerical experiments confirm that our framework outperforms standard least squares and classical scattered-data quadrature rules in both stability and accuracy. We conclude that this consensus-based WLS framework offers a robust, scalable, and distributed solution for a wide range of spherical problems, with significant potential impact in scientific computing and data analysis.
- Research Article
- 10.1007/s10994-025-06953-4
- Dec 29, 2025
- Machine Learning
- Ki Joung Jang + 1 more
Conditional Generative Adversarial Networks (cGANs) provide a flexible framework for learning conditional distributions, but they suffer from two fundamental challenges: the scarcity of conditional samples and the curse of dimensionality in the image space. In this work, we address both issues by introducing Vicinal Estimation (VE) into the cGAN framework and analyzing it through Barron-space discriminators. VE alleviates the lack of conditional samples by coupling nearby labels via an auxiliary sampling distribution, effectively transforming the problem into a collection of unconditional GANs in the vicinal label space. Meanwhile, the Barron-space analysis yields a dimension-independent generalization bound that holds irrespective of the image dimension, and we show how this bound transfers from VE conditionals back to the original conditional distributions. We develop VE-cGAN, a practical instantiation of this idea, and demonstrate through experiments on benchmark datasets that it achieves improved perceptual quality and label consistency compared with baselines. Our theoretical and empirical findings together highlight VE as a principled and effective approach to overcoming the lack of conditional samples and the curse of dimensionality in conditional generative modeling.
- Research Article
- 10.3390/axioms15010022
- Dec 27, 2025
- Axioms
- Matieyendou Lamboni
This study proposes a unified stochastic framework for approximating and computing the gradient of every smooth function evaluated at non-independent variables, using $\ell_p$-spherical distributions on $\mathbb{R}^d$ with $d, p \geq 1$. The upper bounds on the bias of the gradient surrogates do not suffer from the curse of dimensionality for any $p \geq 1$. Additionally, the mean squared errors (MSEs) of the gradient estimators are bounded by $K_0 N^{-1} d$ for any $p \in [1, 2]$, and by $K_1 N^{-1} d^{2/p}$ when $2 \leq p \ll d$, with $N$ the sample size and $K_0, K_1$ some constants. Taking $\max\{2, \log(d)\} < p \ll d$ allows for achieving dimension-free upper bounds on the MSEs. In the case where $d \ll p < +\infty$, the upper bound $K_2 N^{-1} d^{2-2/p}/(d+2)^2$ is reached, with $K_2$ a constant. Such results lead to dimension-free MSEs of the proposed estimators, which boil down to estimators of the traditional gradient when the variables are independent. Numerical comparisons show the efficiency of the proposed approach.
- Research Article
- 10.3390/s26010181
- Dec 26, 2025
- Sensors (Basel, Switzerland)
- Dadong Ni + 5 more
In recent years, with the rapid development of intelligent communication technologies, anti-jamming techniques based on deep learning have been widely adopted in unmanned aerial vehicle (UAV) systems, yielding significant improvements. Most existing studies primarily focus on intelligent anti-jamming decision-making for single UAVs. However, in UAV swarm systems, single-agent decision models often suffer from data isolation and inconsistent frequency usage decisions among nodes within the same task subnet, caused by asynchronous model updates. Although data sharing among UAVs can partially alleviate model update issues, it introduces significant communication overhead and data security challenges. To address these problems, this paper proposes a novel multi-UAV cooperative intelligent anti-jamming decision-making method, termed Federated Learning-Hierarchical Deep Q-Network (FL-HDQN). First, an adaptive model synchronization mechanism is integrated into the federated learning framework. By sharing only local model parameters instead of raw data, UAVs collaboratively train a global model for each task subnet. This approach ensures decision consistency while preserving data privacy and reducing communication costs. Second, to overcome the curse of dimensionality caused by multi-domain interference parameters, a hierarchical deep reinforcement learning model is designed. The model decouples multi-domain optimization into two levels: the first layer performs time-frequency domain decisions, and the second layer conducts power and modulation-coding domain decisions, ensuring both real-time performance and decision effectiveness. Finally, simulation results demonstrate that, compared with state-of-the-art intelligent anti-jamming models, the proposed method achieves 1% higher decision accuracy, validating its superiority and effectiveness.