Space-Efficient Parameterized Algorithms on Graphs of Low Shrubdepth

Dynamic programming on various graph decompositions is one of the most fundamental techniques used in parameterized complexity. Unfortunately, even if we consider concepts as simple as path or tree decompositions, such dynamic programming uses space that is exponential in the decomposition's width, and there are good reasons to believe that this is necessary. However, it has been shown that in graphs of low treedepth it is possible to design algorithms that achieve polynomial space complexity without requiring worse time complexity than their counterparts working on tree decompositions of bounded width. Here, treedepth is a graph parameter that, intuitively speaking, takes into account both the depth and the width of a tree decomposition of the graph, rather than the width alone. Motivated by the above, we consider graphs that admit clique expressions with bounded depth and label count, or equivalently, graphs of low shrubdepth. Here, shrubdepth is a bounded-depth analogue of cliquewidth, in the same way as treedepth is a bounded-depth analogue of treewidth. We show that also in this setting, bounding the depth of the decomposition is a deciding factor for improving the space complexity. More precisely, we prove that on n-vertex graphs equipped with a tree-model (a decomposition notion underlying shrubdepth) of depth d and using k labels:

• Independent Set and Dominating Set can be solved in time \(2^{\mathcal{O}(dk)} \cdot n^{\mathcal{O}(1)}\) using \(\mathcal{O}(dk \log n)\) space;
• Max Cut can be solved in time \(n^{\mathcal{O}(dk)}\) using \(\mathcal{O}(dk \log n)\) space.

We also establish a lower bound, conditional on a certain assumption about the complexity of Longest Common Subsequence, which shows that at least in the case of Independent Set the exponent of the parametric factor in the time complexity has to grow with d if one wishes to keep the space complexity polynomial.
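The opening remark about exponential space is easy to see concretely: the textbook dynamic program for Maximum Independent Set along a path decomposition keeps one table entry per (independent) subset of the current bag, so its memory can grow exponentially in the width. Below is a minimal Python sketch of that classical DP, assuming a valid path decomposition is supplied as a list of bags; it only illustrates the space bottleneck the paper works around and is not the paper's polynomial-space algorithm for tree-models.

```python
from itertools import combinations

def max_independent_set_size(edges, bags):
    """DP for Maximum Independent Set along a path decomposition.

    `edges` is a set of frozensets {u, v}; `bags` is a list of vertex sets
    forming a valid path decomposition of the graph.  The DP table, indexed
    by subsets of the current bag, is what makes the space usage exponential
    in the decomposition's width.
    """
    def independent(subset):
        return all(frozenset(pair) not in edges for pair in combinations(subset, 2))

    def subsets(bag):
        bag = list(bag)
        for r in range(len(bag) + 1):
            for combo in combinations(bag, r):
                yield frozenset(combo)

    # table[S] = max size of an independent set of the processed part of the
    #            graph whose intersection with the current bag is exactly S
    table = {S: len(S) for S in subsets(bags[0]) if independent(S)}
    for prev_bag, bag in zip(bags, bags[1:]):
        new_table = {}
        for S_new in subsets(bag):
            if not independent(S_new):
                continue
            best = max(
                (size + len(S_new - prev_bag)
                 for S_old, size in table.items()
                 if S_old & bag == S_new & prev_bag),   # agree on shared vertices
                default=None,
            )
            if best is not None:
                new_table[S_new] = best
        table = new_table
    return max(table.values())

# A 4-cycle a-b-c-d-a with a path decomposition of width 2:
edges = {frozenset(e) for e in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]}
bags = [{"a", "b", "d"}, {"b", "c", "d"}]
print(max_independent_set_size(edges, bags))  # -> 2 (e.g. {a, c})
```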

Lower Bounds for Learning Quantum States with Single-Copy Measurements

We study the problems of quantum tomography and shadow tomography using measurements performed on individual, identical copies of an unknown d-dimensional state. We first revisit known lower bounds [23] on quantum tomography with accuracy ε in trace distance, when the measurement choices are independent of previously observed outcomes, i.e., they are nonadaptive. We give a succinct proof of these results through the χ²-divergence between suitable distributions. Unlike prior work, we do not require that the measurements be given by rank-one operators. This leads to stronger lower bounds when the learner uses measurements with a constant number of outcomes (e.g., two-outcome measurements). In particular, this rigorously establishes the optimality of the folklore "Pauli tomography" algorithm in terms of its sample complexity. We also derive novel bounds of \(\Omega(r^2 d/\epsilon^2)\) and \(\Omega(r^2 d^2/\epsilon^2)\) for learning rank-r states using arbitrary and constant-outcome measurements, respectively, in the nonadaptive case. In addition to the sample complexity, a resource of practical significance for learning quantum states is the number of unique measurement settings required (i.e., the number of different measurements used by an algorithm, each possibly with an arbitrary number of outcomes). Motivated by this consideration, we employ concentration of measure of the χ²-divergence of suitable distributions to extend our lower bounds to the case where the learner performs possibly adaptive measurements from a fixed set of \(\exp(O(d))\) possible measurements. This implies in particular that adaptivity does not give us any advantage using single-copy measurements that are efficiently implementable. We also obtain a similar bound in the case where the goal is to predict the expectation values of a given sequence of observables, a task known as shadow tomography. Finally, in the case of adaptive, single-copy measurements implementable with polynomial-size circuits, we prove that a straightforward strategy based on computing sample means of the given observables is optimal.
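The "straightforward strategy based on computing sample means" in the final sentence can be pictured with a small simulation: measure each observable in its own eigenbasis on fresh single copies and average the observed eigenvalues. The NumPy sketch below is only an illustration of that sample-mean estimator, with single-copy access modeled by a known density matrix used as a sampler; it is not the paper's protocol or its lower-bound machinery.

```python
import numpy as np

def estimate_expectations(rho, observables, shots_per_observable, rng=None):
    """Estimate Tr(rho @ O) for each observable O by measuring it on fresh
    single copies of the state and averaging the observed eigenvalues.

    `rho` is a d x d density matrix standing in for access to copies of the
    unknown state; each "copy" is consumed by one projective measurement in
    the eigenbasis of the observable being estimated.
    """
    rng = np.random.default_rng() if rng is None else rng
    estimates = []
    for obs in observables:
        eigvals, eigvecs = np.linalg.eigh(obs)      # O = sum_i lambda_i |v_i><v_i|
        # Born rule: outcome i occurs with probability <v_i| rho |v_i>.
        probs = np.real(np.einsum("ij,jk,ki->i", eigvecs.conj().T, rho, eigvecs))
        probs = np.clip(probs, 0, None)
        probs /= probs.sum()
        outcomes = rng.choice(len(eigvals), size=shots_per_observable, p=probs)
        estimates.append(eigvals[outcomes].mean())  # sample mean of eigenvalues
    return estimates

# Example: estimate <Z> and <X> on the |+> state.
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)
X = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
print(estimate_expectations(plus, [Z, X], shots_per_observable=10_000))
# Expected roughly [0.0, 1.0].
```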

Tight Complexity Bounds for Counting Generalized Dominating Sets in Bounded-Treewidth Graphs Part II: Hardness Results

For a well-studied family of domination-type problems, in bounded-treewidth graphs, we investigate whether it is possible to find faster algorithms. For sets σ, ρ of non-negative integers, a (σ, ρ)-set of a graph G is a set S of vertices such that |N(u) ∩ S| ∈ σ for every u ∈ S, and |N(v) ∩ S| ∈ ρ for every v ∉ S. The problem of finding a (σ, ρ)-set (of a certain size) unifies common problems like Independent Set, Dominating Set, Independent Dominating Set, and many others. In an accompanying paper, it is proven that, for all pairs of finite or cofinite sets (σ, ρ), there is an algorithm that counts (σ, ρ)-sets in time \((c_{\sigma,\rho})^{\sf tw} \cdot n^{{\rm O}(1)}\) (if a tree decomposition of width \({\sf tw}\) is given in the input). Here, \(c_{\sigma,\rho}\) is a constant with an intricate dependency on σ and ρ. Despite this intricacy, we show that the algorithms in the accompanying paper are most likely optimal, i.e., for any pair (σ, ρ) of finite or cofinite sets where the problem is non-trivial, and any ε > 0, a \((c_{\sigma,\rho}-\varepsilon)^{\sf tw} \cdot n^{{\rm O}(1)}\)-time algorithm counting the number of (σ, ρ)-sets would violate the Counting Strong Exponential-Time Hypothesis (#SETH). For finite sets σ and ρ, our lower bounds also extend to the decision version, showing that those algorithms are optimal in this setting as well.
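The definition of a (σ, ρ)-set translates directly into a short verifier. The sketch below is only an illustration of the definition, with σ and ρ supplied as membership predicates so that cofinite sets are easy to express; it has nothing to do with the counting algorithm or the hardness reductions discussed in the paper.

```python
def is_sigma_rho_set(adj, S, in_sigma, in_rho):
    """Check whether S is a (sigma, rho)-set of the graph given by `adj`.

    `adj` maps each vertex to its set of neighbours; `in_sigma` / `in_rho`
    are membership predicates for the (possibly cofinite) sets sigma and rho.
    """
    S = set(S)
    for v, neighbours in adj.items():
        k = len(neighbours & S)          # |N(v) ∩ S|
        if v in S:
            if not in_sigma(k):
                return False
        elif not in_rho(k):
            return False
    return True

# Instantiations from the abstract:
#   Independent Set:          sigma = {0},                     rho = all non-negative integers
#   Dominating Set:           sigma = all non-negative ints,   rho = {1, 2, ...}
#   Independent Dominating:   sigma = {0},                     rho = {1, 2, ...}
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}   # path a - b - c
print(is_sigma_rho_set(adj, {"b"}, lambda k: True, lambda k: k >= 1))        # dominating set -> True
print(is_sigma_rho_set(adj, {"a", "c"}, lambda k: k == 0, lambda k: True))   # independent set -> True
```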
