Articles published on Time complexity
37302 Search results
- Research Article
- 10.1016/j.neunet.2025.108331
- Apr 1, 2026
- Neural networks : the official journal of the International Neural Network Society
- Xiang-Jun Shen + 8 more
SVE-Former: A fast fourier transformer via singular vector embedding.
- Research Article
- 10.1016/j.wasman.2026.115444
- Apr 1, 2026
- Waste management (New York, N.Y.)
- Erik Mihelič + 2 more
Enhancing low-temperature isothermal convective drying of waste municipal sewage sludge with wood-derived biochar in sequential drying cycles.
- Research Article
- 10.1016/j.yebeh.2026.110939
- Apr 1, 2026
- Epilepsy & behavior : E&B
- Martin T Lutz + 3 more
Knowledge and attitudes toward ketogenic dietary therapy in adults with epilepsy.
- Research Article
- 10.1016/j.drugalcdep.2026.113079
- Apr 1, 2026
- Drug and alcohol dependence
- Kathryn J Byrd + 6 more
Neural reward sensitivity and longitudinal patterns of alcohol and cannabis use in college-aged youth.
- Research Article
- 10.1016/j.jad.2025.120857
- Apr 1, 2026
- Journal of affective disorders
- Aleksandr Karnick + 8 more
Forecasting turbulence: Evidence of affective projection biases in momentary predictive fluctuations using dynamic structural equation modelling.
- Research Article
- 10.1007/s12539-026-00819-6
- Mar 13, 2026
- Interdisciplinary sciences, computational life sciences
- Jian Zhang + 2 more
As a product of cellular metabolic activity, changes in metabolite levels are closely related to the occurrence and development of diseases, so predicting metabolite-disease associations is a key problem in biomedical research. Traditional methods face the challenges of insufficient long-range dependency modeling and poor interpretability. To address these challenges, we propose a dual-path dynamic contrastive learning framework for metabolite-disease association prediction (GMC-DMA) that integrates graph neural networks (GNN) and Mamba architectures, enhanced by fast Kolmogorov-Arnold networks (FastKAN). First, we construct a multi-source heterogeneous network containing similarity information and known associations. Then, a residual graph convolutional network (ResGCN) is designed to capture local topological features, and the Mamba architecture is introduced to establish a selective state space model (SSM), which handles global dependencies with linear time complexity and eliminates the over-smoothing problem of message passing. Next, the InfoNCE loss function is used to implement cross-modal contrastive learning, and the sample imbalance problem is addressed by a dynamic negative sampling strategy. Finally, a bilinear decoder enhanced by FastKAN outputs the association probability. Extensive experiments show that the overall performance of GMC-DMA is significantly better than that of the baseline methods, demonstrating its effectiveness in predicting disease-related metabolites. In addition, case studies confirm that GMC-DMA is reliable for discovering potential metabolites.
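The InfoNCE objective mentioned in this abstract is a standard contrastive loss. A minimal sketch of its scalar form with cosine similarity follows; this is a generic illustration in plain Python, not the authors' GMC-DMA implementation, and the dynamic negative sampling strategy is omitted:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE: -log( exp(s(a,p)/t) / (exp(s(a,p)/t) + sum_n exp(s(a,n)/t)) ).

    Low loss when the anchor is close to the positive and far from negatives.
    """
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))
```

With a matching positive and an orthogonal negative the loss is near zero; swapping them pushes the loss up, which is exactly the gradient signal a contrastive framework trains against.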
- Research Article
- 10.1177/15578666261421965
- Mar 10, 2026
- Journal of computational biology : a journal of computational molecular cell biology
- Joyanta Basak + 8 more
The Jaro and Jaro-Winkler similarity measures are fundamental tools for character-based string comparison, with widespread use in applications such as record linkage, entity resolution, and natural language processing. Although their accuracy in capturing typographical and transpositional errors has made them popular, traditional implementations suffer from high computational cost, especially when applied to large datasets. Previously, we proposed a Jaro similarity algorithm that reduces the time complexity from quadratic to linear; this linear-time algorithm computes the Jaro similarity between two strings significantly faster when the strings are sufficiently long. In this article, we introduce enhanced algorithms for computing both Jaro and Jaro-Winkler similarity that improve runtime, including for shorter strings. Furthermore, we propose techniques that drastically reduce computing time when a set of strings is repeatedly compared among themselves, making the algorithms particularly well-suited for large-scale record linkage tasks.
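For reference, the classical definitions these algorithms accelerate are sketched below. This is the textbook quadratic-time computation (match window, match count, transposition count), not the paper's linear-time variant:

```python
def jaro(s1: str, s2: str) -> float:
    """Textbook Jaro similarity, O(|s1|*|s2|) worst case."""
    if s1 == s2:
        return 1.0
    len1, len2 = len(s1), len(s2)
    if len1 == 0 or len2 == 0:
        return 0.0
    window = max(len1, len2) // 2 - 1  # max distance for a "match"
    match1, match2 = [False] * len1, [False] * len2
    matches = 0
    for i, c in enumerate(s1):
        lo, hi = max(0, i - window), min(len2, i + window + 1)
        for j in range(lo, hi):
            if not match2[j] and s2[j] == c:
                match1[i] = match2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    # Transpositions: matched characters that appear in a different order.
    t, k = 0, 0
    for i in range(len1):
        if match1[i]:
            while not match2[k]:
                k += 1
            if s1[i] != s2[k]:
                t += 1
            k += 1
    t //= 2
    return (matches / len1 + matches / len2 + (matches - t) / matches) / 3

def jaro_winkler(s1: str, s2: str, p: float = 0.1, max_prefix: int = 4) -> float:
    """Jaro-Winkler: boosts Jaro by a common-prefix bonus (up to 4 chars)."""
    j = jaro(s1, s2)
    prefix = 0
    for a, b in zip(s1, s2):
        if a != b or prefix == max_prefix:
            break
        prefix += 1
    return j + prefix * p * (1 - j)
```

The classic example pair "MARTHA"/"MARHTA" yields 6 matches and 1 transposition, giving a Jaro similarity of 17/18; the shared 3-character prefix then lifts the Winkler score above it.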
- Research Article
- 10.1088/1361-6501/ae49b1
- Mar 5, 2026
- Measurement Science and Technology
- Yingjie Deng + 6 more
The sunlight algorithm [1] is a novel sampling-based path planning method that we originally introduced for planar mobile robots, such as unmanned surface vehicles (USVs) and autonomous ground vehicles (AGVs). It performs efficient sampling by simulating sunlight radiation, ensuring a time complexity of O(n) compared with O(n log n) for traditional RRT* (Rapidly-exploring Random Tree Star), which makes it competitive in complex mazes. However, the sunlight algorithm has two crucial drawbacks: (1) sampling at equal intervals of light-ray angles can hardly find the optimal path in scenarios with dense small obstacles; (2) the O(n) time complexity rests on the assumption of a limited Openset size, which lacks rigorous justification in some cases. To overcome these drawbacks, this paper presents two improvements to the sunlight algorithm. First, an adaptive sampling compensation mechanism is established to ensure the uniformity of sampling and prevent the omission of key tangent points lying along the edges of small obstacles. Second, a strict filtering mechanism is established for adding sampling candidates to the Openset, ensuring that only the optimal waypoint exists in its visible zone. With these two mechanisms, the sunlight algorithm achieves excellent search capability in environments with dense small obstacles without degrading computational complexity. The proposed scheme is tested in both simulation and a field experiment deployed on an Ackermann mobile robot. The results demonstrate its exceptional effectiveness in navigating cluttered spaces, along with the fastest search speed and shortest path length compared to existing techniques.
- Research Article
- 10.1364/optcon.589904
- Mar 3, 2026
- Optics Continuum
- Luis Ordóñez + 3 more
In this paper, we propose an all-diffractive, direct method for calibrating the phase response of liquid crystal on silicon spatial light modulators. This approach uses a single-phase mask comprising an array of binary phase Fresnel lenses, each of which samples a specific gray level across the full available range. Consequently, the complete phase calibration curve can be derived from a single recorded irradiance pattern, significantly reducing acquisition time and experimental complexity compared to conventional sequential phase calibration techniques, which require repeated measurements and additional optical components. The phase calibration method demonstrates accurate results under continuous-wave laser illumination and remains effective under periodic laser power oscillations (∼6.8 Hz). Thanks to its efficiency and adaptability, it enables rapid calibration and optimization under varying illumination conditions, making it particularly valuable for multispectral imaging and optical metrology applications.
- Research Article
- 10.1177/09544062261420851
- Mar 2, 2026
- Proceedings of the Institution of Mechanical Engineers, Part C: Journal of Mechanical Engineering Science
- Xuejian Zhang + 8 more
An intelligent docking-and-gripping framework for composite robots, based on Material Partitioning via Virtual Shelves (MPVS) and Mutation-Based Coverage Optimization (MBCO), is developed to overcome the limitations of traditional static workstations and single-task mappings in high-mix, high-complexity warehouse environments. Multi-dimensional and disordered material distributions are reorganized by MPVS into an ordered one-dimensional representation along the aisle centerline, thereby reducing the complexity of docking-and-gripping planning. The docking–workstation planning task is formulated as a coverage problem on a specified plane. Docking positions are then optimized by a multi-objective method that integrates a greedy maximum-coverage-circle heuristic with MBCO to cope with dynamically varying inventories. Across diverse scenarios and material distributions, docking frequency is reduced by 58.3% relative to unplanned schemes while 100% coverage of target items is preserved. Compared with an initial greedy solution, docking frequency is further reduced by 37.5%, and planning time is decreased in comparison with alternative algorithms. In large-scale cases with up to 2205 items, MBCO maintains sublinear planning time below 0.2 s on average and reduces the mean number of docking workstations by about 20% compared with a pure greedy strategy. The spatial relationship between robot docking stations and the manipulator workspace is further analyzed, and seventh-order polynomial time allocation refined by a weight-normalized ant colony optimization (ACO) algorithm is employed for joint-trajectory optimization, enhancing tracking accuracy, improving motion smoothness, and reducing energy consumption, while a composite trajectory cost is lowered by more than 30% under a balanced weight setting. The feasibility and effectiveness of the composite-robot storage–retrieval system in dynamic industrial environments are validated experimentally in realistic 3C-component warehousing scenarios.
- Research Article
- 10.3390/astronautics1010008
- Mar 2, 2026
- Astronautics
- Yilin Zou + 1 more
Direct collocation transcription is a dominant technique for solving complex optimal control problems, converting continuous dynamics into large-scale, sparse nonlinear programming problems. The computational efficiency of this approach is fundamentally limited by the evaluation of first- and second-order derivatives required by modern optimization algorithms. While general-purpose automatic differentiation tools exist, they often fail to fully exploit the repetitive substructure inherent in trajectory discretization. This paper presents a vectorized, sparse, second-order forward automatic differentiation framework specifically tailored for direct collocation methods. By explicitly distinguishing between scalar and vector nodes within the expression graph, the proposed method leverages the independence of mesh point evaluations to enable Single Instruction, Multiple Data (SIMD) execution and optimize memory access patterns. This structure-aware approach ensures linear time complexity with respect to the number of discretization nodes while maintaining the flexibility to handle complex dependencies. The methodology is implemented in the open-source software package pockit and is validated through three distinct engineering case studies: the aggressive stabilization of a nano-quadrotor, the powered descent guidance of a reusable launch vehicle, and a low-thrust heliocentric orbital transfer. These applications demonstrate the framework’s capability to deliver high-performance derivative computation for large-scale, nonlinear dynamical systems.
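The core idea behind forward-mode automatic differentiation can be illustrated with scalar dual numbers, which propagate a value and its derivative together through arithmetic. This is a first-order, scalar sketch only, not the vectorized, sparse, second-order framework implemented in pockit:

```python
import math
from dataclasses import dataclass

@dataclass
class Dual:
    val: float  # function value
    der: float  # derivative w.r.t. the seeded input

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: (uv)' = u'v + uv'
        other = other if isinstance(other, Dual) else Dual(other, 0.0)
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def sin(x: "Dual") -> "Dual":
    # Chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.der)

# f(x) = x * sin(x), so f'(x) = sin(x) + x * cos(x)
x = Dual(2.0, 1.0)  # seed the input derivative as 1
y = x * sin(x)      # y.val = f(2), y.der = f'(2)
```

Second-order forward AD extends the same idea with an additional second-derivative slot (hyper-dual numbers); the vectorization described in the abstract comes from evaluating such propagation simultaneously across all independent mesh nodes.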
- Research Article
- 10.1088/2634-4386/ae4cc5
- Mar 1, 2026
- Neuromorphic Computing and Engineering
- Harsh Kumar Jadia + 13 more
Symbol decoding in multiple-input multiple-output (MIMO) wireless communication systems requires fast, energy-efficient computing hardware deployable at the edge. The brute-force, exact maximum likelihood (ML) decoder, solved on conventional classical digital hardware to decode MIMO symbols, has exponential time complexity; approximate classical solvers implemented on the same hardware have polynomial time complexity at best. In this article, we design an alternative ring-oscillator-based coupled oscillator array (also known as an oscillatory neural network (ONN)) to act as an oscillator Ising machine (OIM) and heuristically solve the ML-based MIMO detection problem. Complementary metal oxide semiconductor (CMOS) technology is used to design the ring oscillators, and ferroelectric field effect transistor (FeFET) technology is chosen as the non-volatile memory (NVM) coupling element (X) between the oscillators in this CMOS + X OIM design. For this purpose, we experimentally report a highly linear range of conductance variation (1 µS to 60 µS) with programming voltage pulses in an HfO2-based FeFET device fabricated at the 28 nm high-k/metal-gate (HKMG) CMOS technology node. We incorporate this conductance modulation characteristic into SPICE simulations of the ring oscillators connected in an all-to-all fashion through a crossbar array of these FeFET devices. We show that the above range of conductance variation is suitable for obtaining the best OIM performance, making FeFET a suitable NVM device for this application. Our SPICE simulations show no significant performance drop for symbol detection up to MIMO array sizes of 90 transmitting and 90 receiving antennas. Combined with analytical treatment using the Kuramoto model of oscillators, our simulations predict that this classical analog OIM, if implemented experimentally, will offer logarithmic scaling of computation time with MIMO size, a substantial improvement in computation speed over exact and approximate classical solvers run on conventional digital hardware.
- Research Article
- 10.1016/j.neunet.2025.108268
- Mar 1, 2026
- Neural networks : the official journal of the International Neural Network Society
- Tieliang Gong + 4 more
Nyström-aware approximations for matrix-based Rényi's entropy.
- Research Article
- 10.1016/j.jctb.2025.11.002
- Mar 1, 2026
- Journal of Combinatorial Theory, Series B
- Daniel Lokshtanov + 2 more
When recursion is better than iteration: A linear-time algorithm for directed acyclicity with few error vertices
- Research Article
- 10.1109/tcyb.2025.3635531
- Mar 1, 2026
- IEEE transactions on cybernetics
- Hong-Xiang Hu + 4 more
In this article, the evolution of social power is studied within a unified framework comprising two classes of individuals: oblivious individuals and stubborn individuals, whose opinion dynamics are described by the DeGroot averaging model and the Friedkin-Johnsen model, respectively. A proper subset of the simplex is identified to ensure the well-posedness of social power, and it is demonstrated that the corresponding opinion dynamics converge for each issue when the initial social power is restricted to this subset. Through the reflected appraisal mechanism, a nonlinear mapping governing the social power evolution is derived together with its invariant set, and sufficient conditions with linear time complexity for the convergence of social power are established by proving that this nonlinear mapping is contractive on the invariant set. Furthermore, for the final social power, it is found that neither autocratic nor democratic social power can be achieved during the evolution, and the average social power of oblivious individuals is larger than that of stubborn individuals, indicating that the network topology has a greater impact on social power than individual stubbornness. In addition, it is observed that the final social power ranking of oblivious individuals is consistent with their centrality ranking, and a rigorous lower bound on the final social power is derived for each stubborn individual. Finally, a numerical example is provided to demonstrate the correctness of the theoretical analysis.
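The convergence argument in this abstract is the standard one for contractive mappings (the Banach fixed-point theorem): iterating a contraction on an invariant set converges to a unique fixed point. A generic one-dimensional illustration of that principle, not the social-power mapping itself:

```python
def iterate_to_fixed_point(f, x0, tol=1e-12, max_iter=10_000):
    """Iterate x <- f(x) until successive values differ by less than tol.

    If f is a contraction (|f(a) - f(b)| <= c * |a - b| with c < 1) on a set
    that f maps into itself, this converges to the unique fixed point.
    """
    x = x0
    for _ in range(max_iter):
        nxt = f(x)
        if abs(nxt - x) < tol:
            return nxt
        x = nxt
    raise RuntimeError("did not converge within max_iter iterations")

# Example contraction on [0, 1]: f(x) = 0.5*x + 0.25, unique fixed point 0.5
fp = iterate_to_fixed_point(lambda x: 0.5 * x + 0.25, 0.0)
```

Each iteration here costs constant time, which is how contraction-based convergence conditions can be checked with the linear time complexity the abstract claims.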
- Research Article
- 10.1109/tnnls.2025.3616320
- Mar 1, 2026
- IEEE transactions on neural networks and learning systems
- Ronghua Shang + 5 more
The anchor-based clustering method is currently a predominant technique for handling large-scale data. However, in multiview data, existing anchor-based methods face a key challenge: balancing individual anchor graph distinctiveness with final consistency. To address this challenge, we propose a large-scale multiview clustering (MVC) method via joint learning of anchor representation and multigraph alignment (ARMGA). Specifically, ARMGA introduces a unified framework that facilitates the concurrent learning of single-view anchor representations and virtual graph-based multigraph alignment. The approach aims to preserve the adaptability of anchor learning across different views, while ensuring the ultimate consistency of the merged anchor graph. Furthermore, ARMGA employs the Schatten-p norm on the tensor formed by the adaptive anchor representation, originating from multigraph alignment, to reinforce cross-view consistency. This technique effectively leverages complementary information preserved across views to bolster the overall structure and consensus information. Ultimately, to attenuate the noise impact on the anchor representation matrix, ARMGA capitalizes on the cosine angle information from the low-rank representation as coefficients within the relationship matrix and efficiently reduces computational complexity through deductions. On nine datasets, ARMGA has exhibited a notable improvement in clustering performance indicators by 2%-10% over other algorithms, while also maintaining lower time complexity.
- Research Article
- 10.1016/j.cam.2025.116933
- Mar 1, 2026
- Journal of Computational and Applied Mathematics
- Muhammad Zeshan Arshad + 1 more
Exploring time complexity and machine learning scalability for COVID-19 Predictions: A case study from Saudi Arabia
- Research Article
- 10.22214/ijraset.2026.77454
- Feb 28, 2026
- International Journal for Research in Applied Science and Engineering Technology
- Jaswanth Syam Sundar Garugu
The rapid proliferation of IoT devices has considerably expanded the attack surface of modern networks, making effective intrusion detection critical to the security and reliability of Internet of Things environments. Standard security measures cannot easily detect complex and diverse attack patterns in real time, so intelligent detection methods are required. A complete analysis of network traffic using 80 extracted features was performed on the RT-IoT2022 dataset, which contains both normal and malicious network activity from devices such as ThingSpeak-LED, Wipro-Bulb, and MQTT-Temp, as well as simulated attacks including brute-force SSH, DDoS, and Nmap scans. ML classifiers including KNN, Gradient Boosting, XGBoost, SVM, Random Forest (RF), Decision Tree, and Extremely Randomized Trees were used to identify malicious behavior. RF and Extremely Randomized Trees outperformed the rest, scoring 99.9% on accuracy, precision, recall, and F1-score. This approach demonstrates that intrusions in complex IoT networks can be detected with extremely high accuracy in real time, marking an important milestone toward proactive threat mitigation and intelligent network security.
- Research Article
- 10.64898/2026.02.13.705808
- Feb 28, 2026
- bioRxiv : the preprint server for biology
- Alex Zelter + 11 more
Dogma suggests protein quantification is a pre-requisite to LC-MS/MS based proteomics studies. Such quantification allows a standardized ratio of sample to digestion enzyme and enables physical normalization of protein digest loaded onto the mass spectrometer for analysis. Most proteomics studies include these steps. However, there are significant costs in time, money and experimental complexity, associated with performing protein quantification and physical normalization for every sample, especially for larger studies. Proteomics data analysis pipelines typically include computational normalization strategies to compensate for unavoidable systematic biases. These strategies also have the potential to compensate for avoidable variation such as omitting sample amount normalization. Here we investigate the effects of either physically normalizing the amount of protein for each individual sample or leaving it unnormalized. Our results show the relationship between increased protein amount variation in sample input, and the variance of quantified relative abundances of peptides and proteins output after data analysis. The experiments presented here suggest that protein quantification and physical normalization steps can be omitted from some quantitative proteomic experiments without incurring an unacceptable increase in measurement variability after computational normalization has been applied. This work will enable important time and cost saving optimizations to be made to many proteomics workflows.
- Research Article
- 10.1080/03610926.2026.2626154
- Feb 27, 2026
- Communications in Statistics - Theory and Methods
- Shize Ning + 1 more
As a type of nonlinear uncertain time series model, the uncertain threshold autoregressive model serves as a powerful tool for modeling time series systems whose nonlinear structure arises from state transitions. To further expand applied research on the uncertain threshold autoregressive model, this article first constructs a statistical invariant based on its threshold characteristics and uncertain disturbance terms. On this basis, the article proposes moment estimation for the uncertain threshold autoregressive model and designs a numerical algorithm to compute the moment estimates. To verify the suitability of the estimated model, the article further introduces the residuals and scale residuals of the estimated model and investigates the associated uncertain hypothesis testing problem as well as point-forecast and interval-forecast problems. Finally, two numerical examples are provided to illustrate the effectiveness of the proposed method.