Articles published on NP-complete
2731 Search results
Sort by Recency
- Research Article
- 10.3390/appliedmath6020024
- Feb 6, 2026
- AppliedMath
- Pablo Ramos-Ruiz + 3 more
In recent years, several quantum algorithms have been proposed for addressing combinatorial optimization problems. Among them, the Quantum Approximate Optimization Algorithm (QAOA) has become a widely used approach. However, reported limitations of QAOA have motivated the development of multiple algorithmic variants, including recursive hybrid methods such as the Recursive Quantum Approximate Optimization Algorithm (RQAOA), as well as the Quantum-Informed Recursive Optimization (QIRO) framework. In this work, we integrate the Quantum Alternating Operator Ansatz within the QIRO framework in order to improve its quantum inference stage. Both the original and the enhanced versions of QIRO are applied to the Minimum Vertex Cover problem, an NP-complete problem of practical relevance. Performance is evaluated on a benchmark of Erdős–Rényi graph instances with varying sizes, densities, and random seeds. The results show that the proposed modification leads to a higher number of successfully solved instances across the considered benchmark, indicating that refinements of the variational layer can improve the effectiveness of the QIRO framework.
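The Minimum Vertex Cover problem that this benchmark targets has a well-known classical baseline: the maximal-matching 2-approximation. The sketch below illustrates the problem itself, not the QIRO method of the paper:

```python
def vertex_cover_2approx(edges):
    """Greedy maximal matching: take both endpoints of any uncovered edge.
    The result is a vertex cover at most twice the optimal size."""
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:
            cover.update((u, v))
    return cover

# On a 4-cycle, every edge ends up with at least one endpoint in the cover.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cover = vertex_cover_2approx(edges)
```

Because each matched edge forces at least one of its endpoints into any optimal cover, taking both endpoints costs at most a factor of two.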
- Research Article
- 10.3390/foundations6010003
- Jan 30, 2026
- Foundations
- Domenico Cantone + 3 more
As a contribution to automated set-theoretic inferencing, a translation is proposed of conjunctions of literals of the forms x=y∖z, x≠y∖z, and z=x, where x,y,z stand for variables ranging over the von Neumann universe of sets, into quantifier-free Boolean formulae of a rather simple conjunctive normal form. The formulae in the target language involve variables ranging over a Boolean ring of sets, along with a difference operator and relators designating equality, non-disjointness, and inclusion. Moreover, the result of each translation is a conjunction of literals of the forms x=y∖z and x≠y∖z and of implications whose antecedents are isolated literals and whose consequents are either inclusions (strict or non-strict) between variables, or equalities between variables. Besides reflecting a simple and natural semantics, which ensures satisfiability preservation, the proposed translation has quadratic algorithmic time complexity and bridges two languages, both of which are known to have an NP-complete satisfiability problem.
- Research Article
- 10.1038/s41377-025-02178-1
- Jan 20, 2026
- Light, science & applications
- Hai Wei + 14 more
Coherent Ising machines (CIMs) have emerged as a hybrid form of quantum computing devices designed to solve NP-complete problems, offering an exciting opportunity for discovering optimal solutions. Despite challenges such as susceptibility to noise-induced local minima, we achieved notable advantages in improving the computational accuracy and stability of CIMs. We conducted a successful experimental demonstration of CIM via femtosecond laser pumping that integrates optimization strategies across optical and structural dimensions, resulting in significant performance enhancements. The results are particularly promising. An average success rate of 55% was achieved to identify optimal solutions within a Möbius Ladder graph comprising 100 vertices. Compared with other alternatives, the femtosecond pulse results in significantly higher peak power, leading to more pronounced quantum effects and lower pump power in optical fiber-based CIMs. In addition, we have maintained an impressive success rate for a continuous period of 8 hours, emphasizing the practical applicability of CIMs in real-world scenarios. Furthermore, our research extends to the application of these principles in practical applications such as molecular docking and credit scoring. The results presented substantiate the theoretical promise of CIMs, paving the way for their integration into large-scale practical applications.
- Research Article
- 10.64898/2026.01.14.699358
- Jan 14, 2026
- bioRxiv
- Lena Collienne + 5 more
Deep learning offers hope for more efficient phylogenetic inference methods. However, it has yet to have the transformative effect on phylogenetics that it has had in other fields. Here we present a novel approach that combines deep learning with concepts behind current successful phylogenetic algorithms. Specifically, we give the deep learning algorithm access to the output of a phylogenetic dynamic program on the sequence alignment, rather than the raw sequence alignment. The algorithm then learns features based on these phylogenetically processed versions of the sequence data, which provides information that could inform local tree search. For this paper, our goal is simple: predict for each edge in a tree whether it is in a maximum parsimony tree or not. Our model consists of a recurrent neural network that learns features while traversing the input tree, which are used to classify the edge. The model makes high-quality predictions for this NP-complete problem on simulated and empirical datasets for trees of various sizes, and we believe is a stepping stone towards efficient phylogenetic inference using deep learning.
- Research Article
- 10.3390/electronics15010065
- Dec 23, 2025
- Electronics
- Mohammadsadeq Garshasbi Herabad + 3 more
The integration of edge and cloud computing is critical for resource-intensive applications which require low-latency communication, high reliability, and efficient resource utilisation. The service placement problem in these environments poses significant challenges owing to dynamic network conditions, heterogeneous resource availability, and the necessity for real-time decision-making. Because determining an optimal service placement in such networks is an NP-complete problem, the existing solutions rely on fast but suboptimal heuristics or computationally intensive metaheuristics. Neither approach meets the real-time demands of online scenarios, owing to its inefficiency or high computational overhead. In this study, we propose a lightweight learning-based approach for the online placement of services with multi-version components in edge-to-cloud computing. The proposed approach utilises a Shallow Neural Network (SNN) with both weight and power coefficients optimised using a Genetic Algorithm (GA). The use of an SNN ensures low computational overhead during the training phase and almost instant inference when deployed, making it well suited for real-time and online service placement in edge-to-cloud environments where rapid decision-making is crucial. The proposed method (SNN-GA) is specifically evaluated in AR/VR-based remote repair and maintenance scenarios, developed in collaboration with our industrial partner, and demonstrated robust performance and scalability across a wide range of problem sizes. The experimental results show that SNN-GA reduces the service response time by up to 27% compared to metaheuristics and 55% compared to heuristics at larger scales. It also achieves over 95% platform reliability, outperforming heuristics (which remain below 85%) and metaheuristics (which decrease to 90% at larger scales).
- Research Article
- 10.3390/math14010041
- Dec 22, 2025
- Mathematics
- John Abela + 2 more
The celebrated question of whether P=NP continues to define the boundary between the feasible and the intractable in computer science. In this paper, we revisit the problem from two complementary angles: Time-Relative Description Complexity and automated discovery, adopting an epistemic rather than ontological perspective. Even if polynomial-time algorithms for NP-complete problems do exist, their minimal descriptions may have very high Kolmogorov complexity. This creates what we call an epistemic barrier, making such algorithms effectively undiscoverable by unaided human reasoning. A series of structural results—relativization, Natural Proofs, and the Probabilistically Checkable Proofs (PCPs) theorem—already indicate that classical proof techniques are unlikely to resolve the question, which motivates a more pragmatic shift in emphasis. We therefore ask a different, more practical question: what can systematic computational search achieve within these limits? We propose a certificate-first workflow for algorithmic discovery, in which candidate algorithms are considered scientifically credible only when accompanied by machine-checkable evidence. Examples include Deletion/Resolution Asymmetric Tautology (DRAT)/Flexible RAT (FRAT) proof logs for SAT, Linear Programming (LP)/Semidefinite Programming (SDP) dual bounds for optimization, and other forms of independently verifiable certificates. Within this framework, high-capacity search and learning systems can explore algorithmic spaces far beyond manual (human) design, yet still produce artifacts that are auditable and reproducible. Empirical motivation comes from large language models and other scalable learning systems, where increasing capacity often yields new emergent behaviors even though internal representations remain opaque. 
This paper is best described as a position and expository essay that synthesizes insights from complexity theory, Kolmogorov complexity, and automated algorithm discovery, using Time-Relative Description Complexity as an organising lens and outlining a pragmatic research direction grounded in verifiable computation. We argue for a shift in emphasis from the elusive search for polynomial-time solutions to the constructive pursuit of high-performance heuristics and approximation methods grounded in verifiable evidence. The overarching message is that capacity plus certification offers a principled path toward better algorithms and clearer scientific limits without presuming a final resolution of P=?NP.
- Research Article
- 10.1103/6lkq-8626
- Dec 17, 2025
- Physical review letters
- Yu-Min Hu + 1 more
Spectral degeneracies in Liouvillian generators of dissipative dynamics generically occur as exceptional points, where the corresponding non-Hermitian operator becomes nondiagonalizable. Steady states, i.e., zero modes of Liouvillians, are considered a fundamental exception to this rule since a no-go theorem excludes nondiagonalizable degeneracies there. Here, we demonstrate that the crucial issue of diverging timescales in dissipative state preparation is largely tantamount to an asymptotic approach toward the forbidden scenario of an exceptional steady state in the thermodynamic limit. With case studies ranging from NP-complete satisfiability problems encoded in a quantum master equation to the dissipative preparation of a symmetry protected topological phase, we reveal the close relation between the computational complexity of the problem at hand, and the finite size scaling toward the exceptional steady state, exemplifying both exponential and polynomial scaling. Formally treating the weight W of quantum jumps in the Lindblad master equation as a parameter, we show that exceptional steady states at the physical value W=1 may be understood as a critical point hallmarking the onset of dynamical instability.
- Research Article
- 10.22363/2312-8143-2025-26-1-39-51
- Dec 15, 2025
- RUDN Journal of Engineering Researches
- Aleksey F Rogachev + 1 more
The construction of a university class schedule is one of the NP-complete problems. For the large volumes of input data typical of a multilevel university, combined with numerous constraints, the search for an acceptable solution may take a long time, and the solution found may not be optimal. The paper describes the specific features of a multilevel university and considers a computerized approach to building an ontological model for the automation of academic scheduling, used to optimize the scheduling process. The paper applies methods of semantic description of the subject area, including computer support for building the ontological model. Based on an analysis of the main problems, an ontological approach to structuring the data for timetable compilation is substantiated. The proposed approach accounts for the conditions of a multilevel higher education institution. An ontological model of automated scheduling is developed. A method is presented for solving the scheduling problem of a multilevel university with a genetic algorithm (GA), using penalty functions to incorporate the constraints of the mathematical model. The computer program developed from the constructed class diagram builds a class schedule for a multilevel university that is effective according to an integral quality criterion.
- Research Article
- 10.3390/app152413111
- Dec 12, 2025
- Applied Sciences
- Jieqing Tan + 1 more
The Satisfiability Problem (SAT), a fundamental NP-complete problem, is widely applied in integrated circuit verification, artificial intelligence planning, and other fields, where the growing scale and complexity of practical problems demand higher solving efficiency. Because traditional serial SAT solvers suffer from redundant search paths, serialized reasoning steps, and inefficient pure literal detection, the pure literal rule calls for efficient parallelization. This paper adopts a parallel solving algorithm for the pure literal rule based on a matrix representation, addressing the shortcomings of poor generality, inadequate parallel coordination mechanisms, and inefficient clause reduction in existing approaches. We first introduce a Clause-Numerical Incidence Matrix (CNIM) representation to provide a unified mathematical model for parallel operations. Second, we design a Column Vectors Pure Literal Parallel Topological Detection (CVPLPTD) algorithm that achieves pure literal detection with O(mn/p) time complexity (p being the number of parallel threads) within the coefficient range [1.0×mn/p, 1.2×mn/p]. Finally, we adopt a dynamic matrix reduction strategy that compresses the matrix scale through row and column deletion after each pure literal assignment to reduce computational load. These innovations integrate matrix algebra and parallel computing, effectively breaking through the efficiency limitations of solving large-scale SAT problems while ensuring good generality across different computing platforms.
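The pure literal rule itself is easy to state: a variable that occurs with only one polarity across all clauses can be assigned to satisfy every clause it appears in. A minimal column-based sketch using an incidence matrix (a simplification, not the paper's CNIM/CVPLPTD construction) might look like:

```python
import numpy as np

def pure_literals(clauses, n_vars):
    """Detect pure literals via a clause-variable incidence matrix.
    clauses: DIMACS-style lists of nonzero ints (variable i as +i / -i).
    Returns {variable: polarity} for every pure literal found."""
    pos = np.zeros((len(clauses), n_vars), dtype=bool)
    neg = np.zeros_like(pos)
    for i, clause in enumerate(clauses):
        for lit in clause:
            (pos if lit > 0 else neg)[i, abs(lit) - 1] = True
    # Column reductions: does each variable occur positively / negatively?
    pos_any = pos.any(axis=0)
    neg_any = neg.any(axis=0)
    pure = {}
    for v in range(n_vars):
        if pos_any[v] and not neg_any[v]:
            pure[v + 1] = True
        elif neg_any[v] and not pos_any[v]:
            pure[v + 1] = False
    return pure
```

The per-column `any` reductions are independent, which is what makes this formulation amenable to the kind of thread-parallel detection the paper describes.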
- Research Article
- 10.1177/1748006x251395155
- Dec 2, 2025
- Proceedings of the Institution of Mechanical Engineers, Part O: Journal of Risk and Reliability
- Zhaofei Dong + 3 more
Optimizing the destruction resistance of chemical material networks can reduce the occurrence and propagation of cascading failures. Most destruction resistance indexes assess only the final degree of paralysis, making it difficult to accurately identify weak links. To address this, a toughness index is introduced. However, computing it is an NP-complete problem that lacks a polynomial-time solution and does not by itself support automatic optimization or autonomous decision-making, so the resulting optimization is not guaranteed to be optimal. To address these problems, this article proposes a binary artificial bee colony algorithm (binary ABC algorithm) over a discrete space for optimizing a cascading failure resistance model. First, a fitness function is designed based on toughness theory. Then the honey source generation and update mechanisms of the ABC algorithm are improved and the search space is transformed into a D-dimensional binary space, from which the toughness and cut point set of the network are obtained by simulation. Finally, the proposed method is compared with other optimization methods and attack strategies to determine the weak nodes in the chemical material network and optimize its destruction resistance. The case study shows that the model is feasible and can automatically select optimal attacks to identify weak links that need to be protected. The destruction resistance indicator increased from 0.2748 to 0.5909, providing a theoretical basis for cascading failure analysis and prevention in chemical material networks.
- Research Article
- 10.1145/3767710
- Nov 10, 2025
- Proceedings of the ACM on Management of Data
- Mahmoud Abo Khamis + 4 more
The class of hierarchical queries is known to define the boundary of the dichotomy between tractability and intractability for the following two extensively studied problems about self-join free Boolean conjunctive queries (SJF-BCQ): (i) evaluating a SJF-BCQ on a tuple-independent probabilistic database; (ii) computing the Shapley value of a fact in a database on which a SJF-BCQ evaluates to true. Here, we establish that hierarchical queries also define the boundary of the dichotomy between tractability and intractability for a different natural algorithmic problem, which we call the bag-set maximization problem. The bag-set maximization problem associated with a SJF-BCQ Q asks: given a database D, find the biggest value that Q takes under bag semantics on a database D' obtained from D by adding at most θ facts from another given database D_r. For non-hierarchical queries, we show that the bag-set maximization problem is an NP-complete optimization problem. More significantly, for hierarchical queries, we show that all three aforementioned problems (probabilistic query evaluation, Shapley value computation, and bag-set maximization) admit a single unifying polynomial-time algorithm that operates on an abstract algebraic structure, called a 2-monoid. Each of the three problems requires a different instantiation of the 2-monoid tailored for the problem at hand.
- Research Article
- 10.1116/6.0004876
- Oct 30, 2025
- Journal of Vacuum Science & Technology B
- Ting-Hao Hsu + 10 more
This work presents a stochastic analog SAT solver to address the computational bottleneck in multiple patterning lithography layout decomposition. The decomposition task is modeled as a graph-coloring problem and transformed into a Boolean satisfiability (SAT) instance solvable by the analog solver that we invented. Leveraging the inherent parallelism of a programmable crossbar array and stochastic perturbations, the solver rapidly converges to valid solutions. The prototype achieves over 100-fold speedup compared to conventional digital SAT solvers and demonstrates near-linear scalability with increasing layout size. These results highlight the effectiveness of analog computing for solving NP-complete problems in very large-scale integrated design automation.
- Research Article
- 10.3390/a18100667
- Oct 21, 2025
- Algorithms
- Sílvia De Castro Pereira + 2 more
The aging of the Portuguese population is a multifaceted challenge that requires a coordinated and comprehensive response from society. In this context, social service institutions play a fundamental role in providing aid and support to the elderly, ensuring that they can enjoy a dignified and fulfilling life even in the face of the challenges of aging. This research proposes a Balanced Multiple Traveling Salesman Problem based on the Ant Colony Optimization algorithm (ACO-BmTSP) to solve a meal distribution problem in the municipality of Mogadouro, Portugal. The Multiple Traveling Salesman Problem (mTSP) is an NP-complete problem in which m salesmen jointly perform a shortest set of tours over different cities, visiting each city only once. The primary purpose is to minimize the total distance traveled by all salesmen while keeping the tours balanced. This paper presents the computational results obtained for three, four, and five agents with this new approach and compares them with other approaches such as the standard Particle Swarm Optimization and Ant Colony Optimization algorithms. The ACO-BmTSP not only obtains much more equitable paths but also achieves lower total costs. Finally, several benchmark problems were used to evaluate the efficiency of ACO-BmTSP, and the results clearly indicate that this algorithm is a strong alternative to be considered when the problem involves fewer than one hundred locations.
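The balance requirement can be made concrete with a small fitness sketch: total distance plus a max-min imbalance penalty over the agents' tours. This is a hypothetical objective for illustration; the paper's exact fitness function may differ:

```python
def tour_length(tour, dist):
    """Length of a closed tour starting and ending at the depot (index 0)."""
    path = [0] + tour + [0]
    return sum(dist[a][b] for a, b in zip(path, path[1:]))

def mtsp_cost(tours, dist, balance_weight=1.0):
    """Total distance of all tours plus a penalty for the spread between
    the longest and shortest tour, encouraging balanced workloads."""
    lengths = [tour_length(t, dist) for t in tours]
    return sum(lengths) + balance_weight * (max(lengths) - min(lengths))
```

A solver such as ACO or PSO would then minimize `mtsp_cost` over partitions of the cities into per-agent tours.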
- Research Article
- 10.61467/2007.1558.2025.v16i4.586
- Oct 12, 2025
- International Journal of Combinatorial Optimization Problems and Informatics
- Sonia Navarro Flores + 1 more
The SAT problem is important in the theory of computational complexity. It has been deeply studied because solutions for fragments of SAT can be transformed into solutions for several CSPs, including problems in areas such as Artificial Intelligence and Operations Research. Although SAT is an NP-complete problem, it is known that SAT is fixed-parameter tractable if we take any hypertree width as a parameter. In this work, we present several hypergraphs and countable classes of hypergraphs. For these classes of hypergraphs, we analyze their generalized hypertree width to prove that all the CSPs modeled with those hypergraphs are tractable.
- Research Article
- 10.1145/3763238
- Oct 7, 2025
- ACM Transactions on Algorithms
- Pallavi Jain + 6 more
Max-SAT with cardinality constraint ( CC-Max-Sat ) is one of the classical NP-complete problems, that generalizes Maximum Coverage , Partial Vertex Cover , Max-2-SAT with bisection constraints, and has been extensively studied across all algorithmic paradigms. In this problem, we are given a CNF formula \(\Phi\) , and a positive integer \( k \) , and the goal is to find an assignment \(\beta\) with at most \( k \) variables set to true (also called a \( k \) -weight assignment) such that the number of clauses satisfied by \(\beta\) is maximized. The problem is known to admit an approximation algorithm with factor \(1-\frac{1}{e}\) , which is probably optimal. Furthermore, assuming Gap-Exponential Time Hypothesis (Gap-ETH), for any \(\epsilon > 0\) and any function \( h \) , no \(h(k)(n+m)^{o(k)}\) time algorithm can approximate Maximum Coverage (a monotone version of CC-Max-Sat ) with \( n \) elements and \( m \) sets to within a factor \((1-\frac{1}{e}+\epsilon)\) , even with a promise that there exist \( k \) sets that fully cover the whole universe. In fact, the problem is hard to approximate within 0.929, assuming Unique Games Conjecture, even when the input formula is 2-CNF. These intractable results lead us to explore families of formula, where we can circumvent these barriers. Toward this, we consider \(K_{d,d}\) -free formulas (that is, the clause-variable incidence bipartite graph of the formula excludes \(K_{d,d}\) as an induced subgraph). We show that for every \(\epsilon > 0\) , there exists an algorithm for CC-Max-Sat on \(K_{d,d}\) -free formulas with approximation ratio \((1-\epsilon)\) and running in time \(2^{{\mathcal{O}}((\frac{dk}{\epsilon})^{d})}(n+m)^{{\mathcal{O}}(1)}\) (these algorithms are called FPT-AS). For Maximum Coverage on \(K_{d,d}\) -free set families, we obtain FPT-AS with running time \((\frac{dk}{\epsilon})^{{\mathcal{O}}(dk)}n^{{\mathcal{O}}(1)}\) . 
Our second result considers “optimizing \( k \) ,” with fixed covering constraint for the Maximum Coverage problem. To explain our result, we first recast the Maximum Coverage problem as the Max Red Blue Dominating Set with Covering Constraint problem. Here, the input is a bipartite graph \(G=(A,B,E)\) , a positive integer \( t \) , and the objective is to find a minimum sized subset \(S\subseteq A\) , such that \(|N(S)|\) (the size of the set of neighbors of \( S \) ) is at least \( t \) . We design an additive approximation algorithm for Max Red Blue Dominating Set with Covering Constraint , on \(K_{d,d}\) -free bipartite graphs, running in FPT time. In particular, if \( k \) denotes the minimum size of \(S\subseteq A\) , such that \(|N(S)|\geq t\) , then our algorithm runs in time \((kd)^{{\mathcal{O}}(kd)}n^{{\mathcal{O}}{(1)}}\) and returns a set \(S^{\prime}\) such that \(|N(S^{\prime})|\geq t\) and \(|S^{\prime}|\leq k+1\) . This is in sharp contrast to the fact that, even a special case of our problem, namely, the Partial Vertex Cover problem (or Max \( k \) -VC ) is W[1]-hard, parameterized by \( k \) . Thus, we get the best possible parameterized approximation algorithm for the Maximum Coverage problem on \(K_{d,d}\) -free bipartite graphs.
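The (1 - 1/e) factor mentioned above is achieved by the classic greedy algorithm for Maximum Coverage, which repeatedly picks the set with the largest marginal gain. A minimal sketch (the textbook algorithm, not the paper's FPT-AS):

```python
def greedy_max_coverage(sets, k):
    """Greedy Maximum Coverage: pick k sets, each time choosing the one
    covering the most not-yet-covered elements. Classical analysis gives
    a (1 - 1/e) approximation of the optimal coverage."""
    covered, chosen = set(), []
    for _ in range(k):
        gains = [len(s - covered) for s in sets]
        best = max(range(len(sets)), key=gains.__getitem__)
        if gains[best] == 0:
            break  # nothing left to gain
        covered |= sets[best]
        chosen.append(best)
    return chosen, covered
```

Note that greedy is myopic: it can miss the optimum, but each round covers at least a 1/k fraction of what the optimum still could, which is where the 1 - 1/e bound comes from.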
- Research Article
- 10.1080/00051144.2025.2569114
- Oct 2, 2025
- Automatika
- Jing Zhang + 2 more
Since the launch of the post-quantum cryptography standardization project by NIST, post-quantum cryptography has become a prominent research area. Non-commutative cryptography constructed from NP-complete problems is widely regarded as resistant to quantum computing attacks, so it has become an important branch of post-quantum cryptography. Nonetheless, existing non-commutative cryptographic protocols still exhibit certain shortcomings. In this paper, a non-commutative combinatorial semigroup with a matrix power function is constructed from a modified medial semigroup by the semidirect product method, and a key exchange protocol is then developed on it, providing a novel kind of non-commutative cryptographic protocol. Owing to the non-commutativity of the cryptographic platform, the proposed protocol performs well against quantum computing attacks, which makes it superior to traditional cryptographic protocols. In addition, the security analysis shows that the protocol has significant advantages in resisting algebraic and brute force attacks, as well as quantum cryptanalysis; the complexity analysis demonstrates that computation and storage complexities are of polynomial order, ensuring efficient operation even for large matrix sizes.
- Research Article
- 10.35234/fumbd.1646674
- Sep 30, 2025
- Fırat Üniversitesi Mühendislik Bilimleri Dergisi
- Cezayir Karaca + 1 more
The map coloring problem is a classical NP-complete problem that requires adjacent regions to be colored differently and is encountered in many real-world applications. Numerous algorithms have been developed to solve this problem. In this study, the Malatya Vertex Coloring (MVC) Algorithm, which presents a novel and original approach to solving the problem, is applied. This algorithm aims to identify influential vertices to reduce the number of colors used in graphs and to complete the coloring process more efficiently. Additionally, the applicability of the algorithm to real-world problems is also evaluated. The MVC Algorithm calculates the Malatya Centrality value for each vertex in the graph; it selects the vertex with the highest value, colors it with a color different from its neighbors, and then removes it from the graph. This process continues until all vertices are colored. The algorithm has been successfully applied to maps of Asia, Europe, districts of Istanbul, Turkey, U.S. states, and the world, and the results demonstrate the effectiveness of the algorithm. The advantages of the MVC Algorithm include its predictability, as well as its ability to operate in polynomial time and space. In this respect, the MVC Algorithm offers an alternative solution approach to the classical Four Color Theorem in the context of the map coloring problem.
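The color-highest-ranked-vertex-first loop described above is a score-ordered greedy coloring. The sketch below uses plain vertex degree as a stand-in for the Malatya Centrality value, which the abstract does not define, so this is an illustration of the scheme rather than the MVC Algorithm itself:

```python
def greedy_color_by_score(adj, score):
    """Color vertices in descending score order; each vertex takes the
    smallest color not already used by a colored neighbor."""
    color = {}
    for v in sorted(adj, key=score, reverse=True):
        used = {color[u] for u in adj[v] if u in color}
        c = 0
        while c in used:
            c += 1
        color[v] = c
    return color
```

Choosing a good vertex order is the whole game: greedy coloring always yields a proper coloring, and a well-chosen centrality score tends to reduce the number of colors it needs.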
- Research Article
- 10.1007/s10586-025-05586-5
- Sep 19, 2025
- Cluster Computing
- Ali Gunduz + 2 more
One of the biggest problems in the rapidly developing cloud computing field in recent years is efficient task scheduling. Task scheduling in cloud computing is recognized as an NP-complete problem, presenting significant challenges due to the large task sizes and the complexity of efficiently managing diverse computational resources. Task scheduling in cloud computing aims to ensure that tasks are assigned to virtual machines so as to minimize completion time and maximize resource utilization. To address these challenges, this study introduces a novel hybrid optimization algorithm named Differential Evolution Cat Swarm Optimization (DECSO). Unlike traditional hybrid approaches, DECSO dynamically balances exploration and exploitation, ensuring a more adaptive and efficient task scheduling strategy. DECSO combines the global exploration ability and adaptive capabilities of Differential Evolution (DE) with the local search efficiency and explorative and exploitative strengths of Cat Swarm Optimization (CSO). The proposed DECSO algorithm is compared with PSO (Particle Swarm Optimization) and CSO on the CloudSim simulation platform. DECSO's performance is evaluated using makespan, resource utilization, and migration time, which are critical metrics for efficient cloud task scheduling. The experimental results demonstrate that DECSO achieves up to a 22.6% reduction in makespan compared to CSO and 9.6% compared to PSO, an 11.9% improvement in resource utilization compared to CSO and 14.7% compared to PSO, and a 20.6% reduction in migration time compared to CSO and 11.2% compared to PSO. These simulation results confirm that the proposed optimization model provides significant improvements in makespan, resource utilization, and migration time.
- Research Article
- 10.3390/math13183005
- Sep 17, 2025
- Mathematics
- Jehn-Ruey Jiang
This paper proposes a quantum algorithm, named Dicke state quantum search (DSQS), to set qubits in the Dicke state |D_k^n⟩ of D states in superposition to locate the target inputs or solutions of specific patterns among 2^n unstructured input instances, where n is the number of input qubits and D = C(n,k) = O(n^k) for min(k,n−k)≪n/2. Compared to Grover’s algorithm, a famous quantum search algorithm that calls an oracle and a diffuser O(√(2^n)) times, DSQS requires no diffuser and calls an oracle only once. Furthermore, DSQS does not need to know the number of solutions in advance. We prove the correctness of DSQS with unitary transformations, and show that each solution can be found by DSQS with an error probability less than 1/3 through O(n^k) repetitions, as long as min(k,n−k)≪n/2. Additionally, this paper proposes a classical algorithm, named DSQS-VCP, to generate quantum circuits based on DSQS for solving the k-vertex cover problem (k-VCP), a well-known NP-complete (NPC) problem. Complexity analysis demonstrates that DSQS-VCP operates in polynomial time and that the quantum circuit generated by DSQS-VCP has a polynomial qubit count, gate count, and circuit depth as long as min(k,n−k)≪n/2. We thus conclude that the k-VCP can be solved by the DSQS-VCP quantum circuit in polynomial time with an error probability less than 1/3 under the condition min(k,n−k)≪n/2. Since the k-VCP is NP-complete, NP and NPC problems can be polynomially reduced to the k-VCP. If the reduced k-VCP instance satisfies min(k,n−k)≪n/2, then both the instance and the original NP/NPC problem instance to which it corresponds can be solved by the DSQS-VCP quantum circuit in polynomial time with an error probability less than 1/3. All statements of polynomial algorithm execution time in this paper apply only to VCP instances and similar instances of other problems where min(k,n−k)≪n/2. Thus, they imply neither NP ⊆ BQP nor P = NP.
In this restricted regime of min(k,n−k)≪n/2, the Dicke state subspace has a polynomial size of D = C(n,k) = O(n^k), and our DSQS algorithm samples from it without asymptotic superiority over exhaustive enumeration. Nevertheless, DSQS may be combined with other quantum algorithms to better exploit the strengths of quantum computation in practice. Experimental results using IBM Qiskit packages show that the DSQS-VCP quantum circuit can solve the k-VCP successfully.
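The exhaustive enumeration baseline that DSQS is compared against is easy to make concrete: check all C(n, k) size-k vertex subsets for the k-VCP. This classical sketch is for context only and is not part of the paper's quantum circuit:

```python
from itertools import combinations

def k_vertex_cover(edges, nodes, k):
    """Exhaustive search over the C(n, k) candidate size-k covers -- the
    classical counterpart of sampling the polynomially sized Dicke
    subspace when min(k, n - k) is small."""
    for subset in combinations(nodes, k):
        s = set(subset)
        if all(u in s or v in s for u, v in edges):
            return s
    return None  # no vertex cover of size k exists
```

For fixed small k this runs in polynomial time O(n^k · |E|), matching the regime in which the abstract's polynomial-time claims hold.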
- Research Article
- 10.3390/sym17091523
- Sep 12, 2025
- Symmetry
- Bahman Arasteh + 3 more
Software maintenance is one of the most expensive phases in software development, especially when complex source code is the only available artifact. Clustering software modules and generating a structured architectural model can significantly reduce the effort and cost of maintenance. This study aims to achieve high-quality modularization by maximizing intra-cluster cohesion, minimizing inter-cluster coupling, and optimizing overall modular quality. Since finding optimal clustering is an NP-complete problem, many existing methods suffer from poor modular structures, instability, and inconsistent results. To overcome these limitations, this paper proposes a module clustering method using a discrete bedbug optimizer. In software architecture, symmetry refers to the balanced and structured arrangement of modules. In the proposed method, module clustering aims to identify and group related modules based on structural and behavioral similarities, reflecting symmetrical properties in the source code. Conversely, asymmetries, such as modules with irregular dependencies, can indicate architectural flaws. The method was evaluated on ten widely used real-world software datasets. The experimental results show that the proposed algorithm consistently delivers superior modularization quality, with an average score of 2.806 and a well-balanced trade-off between cohesion and coupling. Overall, this research presents an effective solution for software module clustering and provides better architecture recovery and more maintainable systems.