Correction: Directed capacity-preserving subgraphs: hardness and exact polynomial algorithms
- Research Article
- 10.1103/physrevlett.109.157205
- Oct 10, 2012
- Physical Review Letters
We study the disorder-induced glass phase of the two-dimensional XY model with quenched random symmetry-breaking fields and without vortices, both analytically, using the renormalization group (RG) to two-loop order, and numerically, using an exact polynomial algorithm. In the super-rough glassy phase, i.e., below the critical temperature T_c, the disorder- and thermally averaged correlation function B(r) of the phase field θ(x), B(r) = ⟨[θ(x) − θ(x + r)]²⟩, behaves for r ≫ a as B(r) ≈ A(τ) ln²(r/a), where r = |r| and a is a microscopic length scale. We derive the RG equations up to cubic order in τ = (T_c − T)/T_c and predict the universal amplitude A(τ) = 2τ² − 2τ³ + O(τ⁴). The universality of A(τ) results from nontrivial cancellations between nonuniversal constants of the RG equations. Using an exact polynomial algorithm on an equivalent dimer version of the model, we compute A(τ) numerically and obtain remarkable agreement with our analytical prediction up to τ ≈ 0.5.
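As a quick numerical companion to this prediction, here is a minimal Python sketch that evaluates the two-loop amplitude A(τ) = 2τ² − 2τ³ (dropping the O(τ⁴) remainder) over the range where the abstract reports agreement with the numerics:

```python
# Minimal sketch: evaluate the predicted universal amplitude
# A(tau) = 2*tau**2 - 2*tau**3 (O(tau^4) remainder dropped)
# for reduced temperatures up to tau ~ 0.5.

def amplitude(tau: float) -> float:
    """Two-loop RG prediction for the ln^2(r/a) prefactor."""
    return 2 * tau**2 - 2 * tau**3

for tau in (0.1, 0.2, 0.3, 0.4, 0.5):
    print(f"tau = {tau:.1f}  ->  A(tau) ≈ {amplitude(tau):.4f}")
```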
- Research Article
- 10.18255/1818-1015-2014-4-54-63
- Jan 1, 2014
- Modeling and Analysis of Information Systems
A combinatorial optimization problem is called stable if its solution is preserved under perturbations of the input parameters that do not exceed a certain threshold, the stability radius. In [1–3], exact polynomial algorithms were built for some NP-hard problems on cuts under the assumption of input stability. In this paper we show how to accelerate some algorithms for sufficiently stable polynomial problems. The approach is illustrated by the well-known minimum cut problem (MINCUT). We build an O(n²) exact algorithm for solving n-stable instances of the MINCUT problem. Moreover, we present a polynomial algorithm for calculating the stability radius and a simple criterion for checking n-stability of the MINCUT problem.
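For readers who want to experiment with the stability notion, the following hedged Python sketch (using networkx, not the authors' O(n²) algorithm) computes a global minimum cut and crudely probes whether its partition survives a uniform weight perturbation; the instance is made up:

```python
# Hedged sketch: illustrating the MINCUT stability notion with networkx.
# An instance is stable if its optimal cut stays optimal after any
# edge-weight perturbation up to the stability radius. Weights are made up.
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("a", "b", 3.0), ("b", "c", 1.0), ("c", "d", 3.0),
    ("a", "c", 1.0), ("b", "d", 1.0),
])

cut_value, (side1, side2) = nx.stoer_wagner(G)   # global minimum cut
print("min cut value:", cut_value)               # -> 3.0
print("partition:", sorted(side1), "|", sorted(side2))

# Crude perturbation check: bump every weight by +eps and re-solve;
# for a sufficiently stable instance the optimal partition is unchanged.
eps = 0.1
H = G.copy()
for u, v in H.edges:
    H[u][v]["weight"] += eps
_, (p1, p2) = nx.stoer_wagner(H)
print("same partition after +eps:",
      {frozenset(side1), frozenset(side2)} == {frozenset(p1), frozenset(p2)})
```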
- Research Article
- 10.18725/oparu-1041
- Mar 20, 2009
- World Academy of Science, Engineering and Technology, International Journal of Computer, Electrical, Automation, Control and Information Engineering
In recent years, the genomes of more and more species have been sequenced, providing data for phylogenetic reconstruction based on genome rearrangement measures. A main task in all phylogenetic reconstruction algorithms is to solve the median-of-three problem. Although this problem is NP-hard even for the simplest distance measures, there are exact algorithms for the breakpoint median and the reversal median that are fast enough for practical use. In this paper, this approach is extended to the transposition median as well as to the weighted reversal and transposition median. Although no exact polynomial algorithm is known even for the pairwise distances, we show that it is in most cases possible to solve these problems exactly within reasonable time by using a branch-and-bound algorithm. Keywords: comparative genomics, genome rearrangements, median, reversals, transpositions.
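To make the underlying distance concrete, here is a small Python sketch of the pairwise breakpoint distance on which the breakpoint median builds; the example permutations are made up and gene orientations are ignored for simplicity:

```python
# Hedged sketch: the pairwise breakpoint distance underlying the breakpoint
# median of three. Genomes are modeled as permutations of the same gene set;
# a breakpoint is an adjacency of the first genome that does not occur
# (in either orientation) in the second.

def breakpoints(p, q):
    """Number of adjacencies of p that are absent from q."""
    adj_q = set()
    for x, y in zip(q, q[1:]):
        adj_q.add((x, y))
        adj_q.add((y, x))  # unsigned: orientation does not matter
    return sum(1 for x, y in zip(p, p[1:]) if (x, y) not in adj_q)

g1 = [1, 2, 3, 4, 5]
g2 = [1, 3, 2, 4, 5]
print(breakpoints(g1, g2))  # -> 2 breakpoints: (1,2) and (3,4)
```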
- Research Article
- 10.1134/s0081543818090158
- Dec 1, 2018
- Proceedings of the Steklov Institute of Mathematics
Computational complexity and exact polynomial algorithms are reported for the problem of stabbing a set of straight line segments with a least-cardinality set of disks of fixed radii r > 0, where the set of segments forms a straight-line drawing G = (V, E) of a plane graph without edge crossings. Similar geometric problems arise in network security applications (Agarwal et al., 2013). We establish the strong NP-hardness of the problem for edge sets of Delaunay triangulations, Gabriel graphs, and other subgraphs (which are often used in network design) for r ∈ [d_min, η·d_max] and some constant η, where d_max and d_min are the Euclidean lengths of the longest and shortest graph edges, respectively.
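The basic geometric predicate behind the problem can be sketched in a few lines of Python: a disk of radius r centered at c stabs a segment exactly when the point-to-segment distance is at most r. Coordinates below are illustrative:

```python
# Hedged sketch of the stabbing predicate: a disk of radius r centered at c
# stabs a segment iff the distance from c to the segment is at most r.
import math

def dist_point_segment(c, a, b):
    (cx, cy), (ax, ay), (bx, by) = c, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:                       # degenerate segment: a point
        return math.hypot(cx - ax, cy - ay)
    # Clamp the projection parameter to [0, 1] to stay on the segment.
    t = max(0.0, min(1.0, ((cx - ax) * dx + (cy - ay) * dy) / (dx*dx + dy*dy)))
    return math.hypot(cx - (ax + t * dx), cy - (ay + t * dy))

def stabs(center, r, seg):
    return dist_point_segment(center, *seg) <= r

print(stabs((0, 1), 1.0, ((-2, 0), (2, 0))))   # True: distance 1 <= r
print(stabs((0, 2), 1.0, ((-2, 0), (2, 0))))   # False: distance 2 > r
```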
- Research Article
- 10.1007/s003579900019
- Jan 1, 1998
- Journal of Classification
Clustering with a criterion that minimizes the sum of squared distances to cluster centroids is usually done in a heuristic way. An exact polynomial algorithm, with complexity O(N^(p+1) log N), is proposed for minimum-sum-of-squares hierarchical divisive clustering of points in a p-dimensional space with small p. The empirical complexity is one order of magnitude lower. Data sets with N = 20000 for p = 2, N = 1000 for p = 3, and N = 200 for p = 4 are clustered in reasonable computing time.
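The criterion itself is easy to state in code. The following Python sketch (a brute-force 2-partition of a tiny made-up point set, not the paper's O(N^(p+1) log N) algorithm) shows one exact divisive step under the minimum-sum-of-squares objective:

```python
# Hedged sketch: the minimum-sum-of-squares criterion and one exact divisive
# step done by brute force over all 2-partitions of a tiny 2-d point set.
from itertools import combinations
import numpy as np

def ssq(points):
    """Sum of squared distances to the centroid."""
    pts = np.asarray(points, dtype=float)
    return ((pts - pts.mean(axis=0)) ** 2).sum()

pts = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
best = None
idx = range(len(pts))
for r in range(1, len(pts) // 2 + 1):
    for left in combinations(idx, r):
        right = tuple(i for i in idx if i not in left)
        cost = ssq([pts[i] for i in left]) + ssq([pts[i] for i in right])
        if best is None or cost < best[0]:
            best = (cost, left, right)

print("best split cost:", best[0])
print("clusters:", best[1], best[2])   # the two obvious point clouds
```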
- Research Article
- 10.1007/s00236-024-00475-7
- Jan 28, 2025
- Acta Informatica
We introduce and discuss the Minimum Capacity-Preserving Subgraph (MCPS) problem: given a directed graph with edge capacities cap and a retention ratio α∈(0,1), find the smallest subgraph that, for each pair of vertices (u, v), preserves at least a fraction α of a maximum u-v-flow’s value. This problem originates from the practical setting of reducing the power consumption in a computer network: it models turning off as many links as possible, while retaining the ability to transmit at least α times the traffic compared to the original network. First we prove that MCPS is NP-hard already on a restricted set of directed acyclic graphs (DAGs) with unit edge capacities. Our reduction also shows that a closely related problem (which only considers the arguably most complicated core of the problem in the objective function) is NP-hard to approximate within a sublogarithmic factor already on DAGs. In terms of positive results, we present two algorithms that solve MCPS optimally on directed series-parallel graphs (DSPs): a simple linear-time algorithm for the special case of unit edge capacities and a cubic-time dynamic programming algorithm for the general case of non-uniform edge capacities. Further, we introduce the family of laminar series-parallel graphs (LSPs), a generalization of DSPs that also includes cyclic and very dense graphs. Their properties allow us to solve MCPS on LSPs by employing our DSP-algorithms as subroutines. In addition, we give a separate quadratic-time algorithm for MCPS on LSPs with unit edge capacities that also yields straightforward quadratic time algorithms for several related problems such as Minimum Equivalent Digraph and Directed Hamiltonian Cycle on LSPs.
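The feasibility condition is straightforward to check by brute force with off-the-shelf max-flow routines. The Python sketch below (using networkx; the digraph and α are made up) verifies that a candidate subgraph is α-capacity-preserving, which is the condition the paper's algorithms optimize over:

```python
# Hedged sketch: a brute-force checker for the MCPS feasibility condition,
# not the paper's algorithms. A subgraph H of G is alpha-capacity-preserving
# if every pairwise max flow in H is at least alpha times its value in G.
import networkx as nx

def preserves_capacity(G, H, alpha):
    for u in G.nodes:
        for v in G.nodes:
            if u == v:
                continue
            full = nx.maximum_flow_value(G, u, v, capacity="capacity")
            kept = (nx.maximum_flow_value(H, u, v, capacity="capacity")
                    if u in H and v in H else 0.0)
            if kept < alpha * full:
                return False
    return True

G = nx.DiGraph()
G.add_edge("s", "a", capacity=2)
G.add_edge("a", "t", capacity=2)
G.add_edge("s", "t", capacity=1)   # redundant direct link

# Dropping the direct s->t edge keeps 2 of the 3 units of s-t flow.
H = G.edge_subgraph([("s", "a"), ("a", "t")]).copy()
print(preserves_capacity(G, H, alpha=0.5))   # True
```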
- Conference Article
- 10.5753/ctd.2023.229529
- Aug 6, 2023
Matching problems in graphs have been studied for a long time, achieving important results in both theoretical and practical aspects. Over the decades, many variations of matching problems and results were studied. Some of them can be solved in polynomial time, while others apparently cannot, unless P = NP. In this thesis, we briefly present the history of matching problems and their complexities, along with a survey of some of their variations and their state of the art. We also give new results on one of these variations: P-matchings. A matching M is a P-matching if the subgraph induced by the endpoints of the edges of M satisfies property P. As examples, for appropriate choices of P, the problems INDUCED MATCHING, UNIQUELY RESTRICTED MATCHING, ACYCLIC MATCHING, CONNECTED MATCHING and DISCONNECTED MATCHING arise. In this thesis, we focus our study on three of them: DISCONNECTED MATCHING, CONNECTED MATCHING and its weighted version, WEIGHTED CONNECTED MATCHING. To this end, we develop NP-completeness proofs, classical and parameterized complexity analyses, as well as exact polynomial algorithms, considering these problems both in general and subject to some constraints.
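The P-matching definition is easy to operationalize. A hedged Python sketch for the case P = connectivity, using networkx on a made-up graph:

```python
# Hedged sketch: checking the P-matching definition for P = connectivity.
# M is a connected matching if the subgraph induced by the endpoints of
# the edges of M is connected.
import networkx as nx

def is_connected_matching(G, M):
    if not nx.is_matching(G, M):          # edges of M must be disjoint
        return False
    endpoints = {v for e in M for v in e}
    return nx.is_connected(G.subgraph(endpoints))

G = nx.path_graph(6)                                  # 0-1-2-3-4-5
print(is_connected_matching(G, {(0, 1), (2, 3)}))     # True: 0..3 induce a path
print(is_connected_matching(G, {(0, 1), (4, 5)}))     # False: two components
```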
- Dissertation
- 10.1184/r1/6721211.v1
- May 1, 2017
Collaborative filtering approaches have produced some of the most accurate and personalized recommender systems to date by mining for similarities in large-scale datasets. However, despite their stellar performance on accuracy-based metrics, researchers have demonstrated a propensity of such algorithms to exaggerate the biases inherent in the data, such as popularity or the affinity of users to certain kinds of content. Meanwhile, recommender systems have only grown in importance and have become an integral part of the internet ecosystem, with many users interacting with recommender systems daily on e-commerce sites, social networks and apps. Therefore, the biases in recommender systems have come to critically impact a company's bottom line, user satisfaction levels and public image, making it imperative to develop recommendation diversification methods that explicitly counteract them. In this thesis we make three key contributions to the growing field of sales diversity, which aims to reduce the popularity biases inherent in many collaborative filtering based recommender systems. First, we consider the problem of making item-item recommendations, with the goal of redundantly linking from popular items to less popular items in order to bring them more exposure on the web. Next, we consider the setting of user-item recommendations, and develop a metric we call "discrepancy" to measure the distance between the recommendation distribution desired by a business and the distribution obtained by the recommender system, along with algorithms to reduce discrepancy while maintaining high recommendation quality. Lastly, we turn our attention to item catalogs and user bases where items and users are clustered into disjoint or overlapping subgroups, and develop metrics to quantify the recommendation diversity experienced by both the users and the items. Our approaches to all three of these problems are unified by a framework of subgraph selection, the use of network flow problems for modeling, and a focus on providing either exact polynomial algorithms or efficient approximation algorithms with concrete performance guarantees. This stands in contrast with existing approaches, most of which are reranking-based heuristics for which no performance guarantees can be given. In each of these settings, we augment our theoretical findings with an empirical evaluation on real-life datasets from online retailers or standard recommender system datasets provided by Netflix and the MovieLens group, and show that our methods provide superior sales diversity when compared with competing approaches.
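The discrepancy idea can be illustrated in a few lines. In the hedged Python sketch below, the specific L1-style distance and the toy data are assumptions for illustration, not the thesis' exact flow-based definition:

```python
# Hedged sketch of the "discrepancy" idea: compare the item-exposure
# distribution a business wants against the one the recommender produces.
# The L1-style distance and toy numbers are illustrative assumptions.
from collections import Counter

recommendations = {            # user -> recommended items
    "u1": ["i1", "i2"], "u2": ["i1", "i3"], "u3": ["i1", "i2"],
}
actual = Counter(item for recs in recommendations.values() for item in recs)

target = {"i1": 2, "i2": 2, "i3": 2}        # e.g., equal exposure desired

discrepancy = sum(abs(target.get(i, 0) - actual.get(i, 0))
                  for i in set(target) | set(actual))
print(actual)        # Counter({'i1': 3, 'i2': 2, 'i3': 1})
print(discrepancy)   # 2: item i1 is over-exposed, i3 under-exposed
```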
- Book Chapter
- 10.1016/s1062-7901(01)80009-4
- Jan 1, 2001
- Phase Transitions and Critical Phenomena
Exact combinatorial algorithms: Ground states of disordered systems
- Research Article
- 10.1109/69.979982
- Jan 1, 2002
- IEEE Transactions on Knowledge and Data Engineering
Microaggregation is a statistical disclosure control technique for microdata disseminated in statistical databases. Raw microdata (i.e., individual records or data vectors) are grouped into small aggregates prior to publication. Each aggregate should contain at least k data vectors to prevent disclosure of individual information, where k is a constant value preset by the data protector. No exact polynomial algorithms are known to date to microaggregate optimally, i.e., with minimal variability loss. Methods in the literature rank data and partition them into groups of fixed size; in the multivariate case, ranking is performed by projecting data vectors onto a single axis. In this paper, candidate optimal solutions to the multivariate and univariate microaggregation problems are characterized. In the univariate case, two heuristics based on hierarchical clustering and genetic algorithms are introduced which are data-oriented in that they try to preserve natural data aggregates. In the multivariate case, fixed-size and hierarchical clustering microaggregation algorithms are presented which do not require data to be projected onto a single dimension; such methods clearly reduce variability loss as compared to conventional multivariate microaggregation on projected data.
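The fixed-size method the abstract contrasts with is simple to sketch in Python; the data and k are made up, and the paper's data-oriented heuristics instead form variable-size groups:

```python
# Hedged sketch of classical fixed-size univariate microaggregation: sort,
# cut into groups of k, replace each value by its group mean, and measure
# variability loss as the within-group sum of squared errors.
def microaggregate(values, k):
    xs = sorted(values)
    groups = [xs[i:i + k] for i in range(0, len(xs), k)]
    if len(groups) > 1 and len(groups[-1]) < k:      # merge short tail group
        groups[-2] += groups.pop()
    out, sse = [], 0.0
    for g in groups:
        m = sum(g) / len(g)
        out += [m] * len(g)                          # release group means only
        sse += sum((x - m) ** 2 for x in g)
    return out, sse

data = [3, 8, 1, 9, 4, 7, 2, 10, 6]
published, loss = microaggregate(data, k=3)
print(published)   # each released value is a mean over >= 3 records
print(loss)        # within-group SSE (variability loss)
```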
- Research Article
- 10.1016/j.cor.2014.07.017
- Aug 4, 2014
- Computers & Operations Research
A continuous network location problem for a single refueling station on a tree
- Conference Article
- 10.1145/2063576.2063718
- Oct 24, 2011
We study the problem of discovering a team of experts from a social network. Given a project whose completion requires a set of skills, our goal is to find a set of experts that together have all of the required skills and also have the minimal communication cost among them. We propose two communication cost functions designed for two types of communication structures. We show that the problem of finding the team of experts that minimizes one of the proposed cost functions is NP-hard. Thus, an approximation algorithm with an approximation ratio of two is designed. We introduce the problem of finding a team of experts with a leader. The leader is responsible for monitoring and coordinating the project, and thus a different communication cost function is used in this problem. To solve this problem, an exact polynomial algorithm is proposed. We show that the total number of teams may be exponential with respect to the number of required skills. Thus, two procedures that produce top-k teams of experts with or without a leader in polynomial delay are proposed. Extensive experiments on real datasets demonstrate the effectiveness and scalability of the proposed methods.
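One plausible reading of the leader-based cost (an assumption, not the paper's code) is that each required skill is covered by the expert closest to the leader, so trying every node as leader is polynomial. A Python sketch on a made-up network:

```python
# Hedged sketch: for a fixed leader, cover each required skill with the
# expert nearest to the leader and sum those distances; try every leader.
# Network, skills, and the exact cost function are illustrative assumptions.
import networkx as nx

G = nx.Graph([("ann", "bob"), ("bob", "cat"), ("cat", "dan"), ("ann", "eve")])
skills = {"ann": {"db"}, "bob": {"ml"}, "cat": {"db", "ui"}, "dan": {"ml"},
          "eve": {"ui"}}
required = {"db", "ml", "ui"}

best = None
for leader in G:
    dist = nx.single_source_shortest_path_length(G, leader)
    cost, team = 0, {leader}
    for s in required:
        holders = [v for v in dist if s in skills.get(v, ())]
        if not holders:                  # skill unreachable from this leader
            break
        expert = min(holders, key=dist.get)
        cost += dist[expert]
        team.add(expert)
    else:
        if best is None or cost < best[0]:
            best = (cost, leader, team)

print(best)   # -> (1, 'cat', ...): 'cat' covers db and ui herself
```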
- Research Article
- 10.3390/a17110504
- Nov 4, 2024
- Algorithms
This paper studies security issues for cyber–physical systems, aimed at countering potential malicious cyber-attacks. The main focus is on solving the problem of extracting the most vulnerable attack path in a known attack graph, where an attack path is a sequence of steps that an attacker can take to compromise the underlying network. Determining an attacker's possible attack path is critical to cyber defenders, as it helps identify threats, harden the network, and thwart the attacker's intentions. We formulate this problem as a path-finding optimization problem with logical constraints represented by AND and OR nodes. We propose a new Dijkstra-type algorithm that combines elements from Dijkstra's shortest path algorithm and the critical path method. Although the path extraction problem is generally NP-hard, for the studied special case the proposed algorithm determines the optimal attack path in polynomial time, O(nm), where n is the number of nodes and m is the number of edges in the attack graph. To our knowledge, this is the first exact polynomial algorithm that can solve the path extraction problem for different attack graphs, both cycle-containing and cycle-free. Computational experiments with real and synthetic data have shown that the proposed algorithm consistently and quickly finds optimal solutions to the problem.
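A hedged Python sketch in the spirit of such a Dijkstra/critical-path hybrid (the AND/OR semantics and the example graph are assumptions, not the paper's exact algorithm): an OR node is reached through its cheapest finished predecessor, while an AND node waits for all predecessors and pays their maximum cost:

```python
# Hedged sketch: label-setting search on an AND/OR attack graph.
# OR node: min over finished predecessors; AND node: only ready once all
# predecessors are finished, at their maximum accumulated cost.
import heapq

def attack_cost(nodes, edges, source):
    """nodes: {v: 'AND' | 'OR'}; edges: {(u, v): weight}; returns costs."""
    preds = {v: [] for v in nodes}
    for (u, v), w in edges.items():
        preds[v].append((u, w))
    cost, done = {source: 0.0}, set()
    heap = [(0.0, source)]
    while heap:
        c, v = heapq.heappop(heap)
        if v in done:
            continue
        done.add(v)
        for w in nodes:                      # relax each successor of v
            if w in done or not any(u == v for u, _ in preds[w]):
                continue
            if nodes[w] == "OR":
                cand = min(cost[u] + wt for u, wt in preds[w] if u in done)
            else:                            # AND: wait for all predecessors
                if not all(u in done for u, _ in preds[w]):
                    continue
                cand = max(cost[u] + wt for u, wt in preds[w])
            if cand < cost.get(w, float("inf")):
                cost[w] = cand
                heapq.heappush(heap, (cand, w))
    return cost

nodes = {"s": "OR", "x": "OR", "y": "OR", "goal": "AND"}
edges = {("s", "x"): 2, ("s", "y"): 5, ("x", "goal"): 1, ("y", "goal"): 1}
print(attack_cost(nodes, edges, "s"))   # goal = max(2+1, 5+1) = 6
```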
- Conference Article
- 10.1109/cec.2017.7969296
- Jun 1, 2017
Computing evolutionary distances using gene order data is a complex combinatorial problem; nevertheless, for specific metrics, exact polynomial algorithms have been proposed, in many cases with nontrivial approaches. This scenario becomes harder if we want to reconstruct phylogenies based on gene order data: first, it is necessary to explore the search space of possible tree structures, which is well known to be exponential in size; second, a method is needed for evaluating the cost of these trees, i.e., for finding a labeling of the internal nodes that leads to the most parsimonious cost of a tree under a given evolutionary distance. The latter problem was shown to be NP-hard even for 3 genomes (the median problem) under many evolutionary distances. In this paper we propose a variable neighborhood search approach for solving the large phylogeny problem for data based on gene orders. Also, a greedy approach is proposed for the small phylogeny problem, aiming to reduce the running time of the dynamic programming approach of Kovac et al. Our proposed algorithms are implemented in the software HELPHY. Experiments showed that the running time is improved for finding trees with good scores (reversal distance) for the Campanulaceae dataset, and a new tree structure was found having the best known score (double cut and join distance) for the Hemiascomycetes dataset.
- Conference Article
- 10.1109/qshine.2004.11
- Oct 18, 2004
Given a communication network modeled as a directed graph with a delay parameter associated with each link, we consider the problem of determining the most probable delay-constrained path from a source node to a destination node. Assuming that the link delays are random variables with continuous and differentiable probability density functions and using the central limit theorem, this problem can be formulated as a path problem that involves simultaneously optimizing two additive path parameters. Two cases arise. When there is a path with mean delay less than the delay bound, we present an exact pseudo-polynomial algorithm, a fully polynomial-time ε-approximation algorithm and a strongly polynomial heuristic algorithm. In the unlikely case when this assumption is violated, the problem is shown to be NP-hard and no constant-factor approximation algorithm exists if P ≠ NP. We also study the path protection problem under inaccurate state information.
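A hedged Python sketch of a pseudo-polynomial dynamic program in the spirit of the first case: each edge carries an (integer mean delay, variance) pair, both additive along a path, and the destination label maximizing the normal z-score of meeting the bound identifies the most probable path. The graph, bound, and tie-breaking are illustrative assumptions:

```python
# Hedged sketch: dp[v][d] = minimum total variance of any s->v path with
# mean delay exactly d <= bound; under the CLT the most probable path
# maximizes (bound - mean) / sqrt(variance) at the destination.
import math

edges = {  # u -> list of (v, mean_delay, variance); toy instance
    "s": [("a", 2, 1.0), ("b", 3, 0.2)],
    "a": [("t", 2, 1.0)],
    "b": [("t", 3, 0.2)],
    "t": [],
}
bound = 8
INF = float("inf")

dp = {v: [INF] * (bound + 1) for v in edges}
dp["s"][0] = 0.0
for _ in range(len(edges) - 1):            # Bellman-Ford-style rounds
    for u in edges:
        for v, mu, var in edges[u]:
            for d in range(bound + 1 - mu):
                if dp[u][d] + var < dp[v][d + mu]:
                    dp[v][d + mu] = dp[u][d] + var

best = max(
    ((bound - d) / math.sqrt(var), d, var)
    for d, var in enumerate(dp["t"]) if 0 < var < INF
)
print(best)  # the longer-mean but lower-variance route via "b" wins
```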