Quantum Automating TC0-Frege Is LWE-Hard

Abstract

We prove the first hardness results against efficient proof search by quantum algorithms. We show that under Learning with Errors (LWE), the standard lattice-based cryptographic assumption, no quantum algorithm can weakly automate ${\rm TC}^0$-Frege. This extends the line of results of Krajíček and Pudlák (Information and Computation, 1998), Bonet, Pitassi, and Raz (SIAM Journal on Computing, 2000), and Bonet, Domingo, Gavaldà, Maciel, and Pitassi (Computational Complexity, 2004), who showed that Extended Frege, ${\rm TC}^0$-Frege, and ${\rm AC}^0$-Frege, respectively, cannot be weakly automated by classical algorithms if either the RSA cryptosystem or the Diffie-Hellman key exchange protocol is secure. To the best of our knowledge, this is the first interaction between quantum computation and propositional proof search.
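
For readers less familiar with the two notions the abstract combines, the following informal statements may help; they are the standard textbook formulations (decision-LWE, and weak automatability in the sense of Atserias and Bonet), not definitions quoted from the paper. The LWE assumption says that noisy random inner products hide the secret $s$:

$$(a_i,\ \langle a_i, s\rangle + e_i \bmod q) \;\approx_c\; (a_i,\ u_i), \qquad a_i \leftarrow \mathbb{Z}_q^n,\ u_i \leftarrow \mathbb{Z}_q \text{ uniform},\ e_i \text{ a small error},$$

i.e., no efficient algorithm can distinguish the two distributions. A proof system $P$ is weakly automatable if some polynomial-time algorithm, given a tautology $\varphi$ and a bound $1^s$ with $s$ at least the length of the shortest $P$-proof of $\varphi$, outputs a proof of $\varphi$ in some (possibly different) proof system, of size polynomial in $s$ and $|\varphi|$. The result above rules this out for ${\rm TC}^0$-Frege even when the proof-search algorithm is quantum, assuming LWE.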

Similar Papers
  • Research Article
  • 10.7916/d8gf11gx
Quantum algorithms and complexity for numerical problems
  • Jan 1, 2011
  • Joseph F Traub + 1 more

Quantum computing has attracted a lot of attention in different research fields, such as mathematics, physics and computer science. Quantum algorithms can solve certain problems significantly faster than classical algorithms. There are many numerical problems, especially those arising from quantum systems, which are notoriously difficult to solve using classical computers, since the computational time required often scales exponentially with the size of the problem. However, quantum computers have the potential to solve these problems efficiently, which is also one of the founding ideas of the field of quantum computing. In this thesis, we explore five different computational problems, designing innovative quantum algorithms and studying their computational complexity. First, we design an adiabatic quantum algorithm for the counting problem, i.e., approximating the proportion α of marked items in a given database. As the quantum system undergoes a designed cyclic adiabatic evolution, it acquires a Berry phase 2πα. By estimating the Berry phase, we can approximate α and solve the problem. For an error bound e, the algorithm can solve the problem with cost of order e^{-3/2}, which is not as good as the optimal algorithm in the quantum circuit model, but better than the classical randomized algorithm. Moreover, since the Berry phase is a purely geometric feature, the result should be robust to decoherence and resilient to certain kinds of noise. Since the counting problem is the foundation of many other numerical problems, such as high-dimensional integration and path integration, our adiabatic algorithms can be directly generalized to these kinds of problems. In addition, we study the quantum PAC learning model, offering an improved lower bound on the query complexity. For a concept class with VC dimension d, the lower bound is Ω(e^{-1}(d^{1-η} + log(1/δ))), where e is the required error bound, δ is the maximal failure probability and η can be an arbitrarily small positive number. The lower bound is close to the best lower bound on query complexity known for the classical PAC learning model, which is Ω(e^{-1}(d + log(1/δ))). We also study the algorithms and the cost of simulating a system evolving with Hamiltonian H = ∑_{j=1}^{m} H_j, where the evolution under each H_j can be implemented efficiently. We consider high-order splitting methods that are particularly applicable in quantum simulation and obtain bounds on the number of exponentials required to approximate e^{-iHt} with error e. Moreover, we derive the optimal order of convergence, given e, and the cost of the resulting algorithm. We compare our complexity estimates to previously known ones and show the resulting speedup. Furthermore, we consider randomized algorithms for simulating the evolution of the Hamiltonian H. The evolution is simulated by a product of exponentials of the H_j in a random sequence and with random evolution times, so the final state of the system is approximated by a mixed quantum state. First we provide a scheme to bound the error of the final quantum state in a randomized algorithm. Then we obtain randomized algorithms which have the same efficiency as certain deterministic algorithms but which are simpler to implement. Finally we provide a lower bound on the number of exponentials for both deterministic and randomized algorithms, when the evolution time is required to be positive.
We also apply the improved upper bound for Hamiltonian simulation to estimating the ground state energy of a multiparticle system with relative error e, which is also known as the multivariate Sturm-Liouville eigenvalue problem. Since the cost of this problem grows exponentially with the number of particles using deterministic classical algorithms, it suffers from the curse of dimensionality. Quantum computers can vanquish the curse, and we exhibit a quantum algorithm that achieves relative error e using O(d log e^{-1}) qubits with total cost (number of quantum queries and other quantum operations) O(d e^{-(3+δ)}), where δ > 0 is arbitrarily small. Thus, the number of qubits and the total cost are linear in the number of particles. The main result of Chapter 2 is based on the paper [127], published in Quantum Information Processing. The result of Chapter 3 is the same as that of the paper [126], published in Information Processing Letters. The results of Chapter 4 and Chapter 6 have been submitted and can be found in [88] and [84], respectively. Chapter 5, based on a talk at the 9th International Conference on Monte Carlo and Quasi-Monte Carlo Methods in Scientific Computing, has also been submitted and can be found in [125].
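
As a point of reference for the splitting methods discussed above, the simplest (first-order) Lie-Trotter formula approximates the evolution under H = ∑_{j=1}^{m} H_j by interleaving the individually implementable exponentials; this is a standard illustration of the idea, not a bound taken from the thesis:

$$e^{-iHt} \;=\; \Bigl(\prod_{j=1}^{m} e^{-iH_j t/N}\Bigr)^{\!N} \;+\; O\!\Bigl(\tfrac{t^2}{N}\sum_{j<k}\bigl\|[H_j,H_k]\bigr\|\Bigr),$$

so for fixed Hamiltonians the number of exponentials needed to reach accuracy e scales as t²/e for the first-order formula; the higher-order (Suzuki-type) splittings analyzed in the thesis improve this dependence.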

  • Research Article
  • 10.30837/rt.2020.3.202.07
Threat models for asymmetric cryptotransformations of the promising electronic signature
  • Sep 16, 2020
  • Radiotekhnika
  • Ю.І Горбенко + 3 more

The paper considers the concept of a threat model and presents the results of substantiation and development of proposals for building a threat model for asymmetric cryptotransformations such as a promising electronic signature (ES), which can be used in the post-quantum period. The generalized models of threats concerning promising ES are stated in detail and their estimation is given. Threat models for promising ES using classical and quantum cryptanalysis methods and tools, threat models for the synthesis and application of ES in general, as well as threat models for the synthesis and application of ES in the post-quantum period, are proposed. A list of threats is identified based on the results of the analysis of the methods of synthesis and application of known and promising ES. Proposals are formulated for a list of threats against which protection should be provided. The list of threats is determined using the IT-Grundschutz Catalogues of the German database, and based on this a threat model is formed. It is determined that the threats arising from the use of classical cryptanalysis in the synthesis and application of ES must be identified in detail unconditionally. The main threats (methods) of classical cryptanalysis that must be taken into account are identified. Possible variants of side-channel attacks are considered. The main threats (attacks) using quantum mathematical methods that could be implemented on a quantum computer (if one is built) are also identified. A comparative analysis of the complexity of factorization for classical and quantum algorithms, as well as a comparative analysis of the complexity of the discrete logarithm algorithm in a finite field based on the number field sieve and on Shor's algorithm, are given. Threats (attacks) are considered on the example of the problem of the security of cryptotransformations based on learning with errors (LWE). In general, attacks on LWE can be divided into two major classes: attacks based on exhaustive search and attacks based on lattice reduction. Preliminary analysis allows us to conclude that modern versions of LWE mechanisms are based on polynomial rings.

  • Conference Article
  • Cited by 5
  • 10.1109/qce52317.2021.00027
Simpler (Classical) and Faster (Quantum) Algorithms for Gibbs Partition Functions
  • Oct 1, 2021
  • Srinivasan Arunachalam + 4 more

We give classical and quantum algorithms for approximating partition functions of classical Hamiltonians at a given temperature. Specifically, we modify the classical algorithm of Štefankovič, Vempala and Vigoda (J. ACM, 56(3), 2009) to improve its sample complexity; and we quantize this new algorithm, improving upon the previously best quantum algorithm for computing Gibbs partition functions due to Harrow and Wei (SODA 2020). The conventional approach to estimate partition functions requires approximating the mean of Gibbs distributions at nearby inverse temperatures that satisfy certain properties; this set of temperatures is called a cooling schedule. The length of the cooling schedule directly affects the complexity of the algorithm. Combining our improved version of the algorithm of Štefankovič, Vempala and Vigoda with the paired-product estimator of Huber (Ann. Appl. Probab., 25(2), 2015), our new quantum algorithm uses a shorter cooling schedule than previously known. This length matches the optimal length conjectured by Štefankovič, Vempala and Vigoda. The quantum algorithm also achieves a quadratic advantage in the number of required quantum samples compared to the number of random samples drawn by the best classical algorithm, and its computational complexity has quadratically better dependence on the spectral gap of the Markov chains used to produce the quantum samples.
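
The cooling-schedule idea described above rests on a simple telescoping identity (standard, not specific to this paper): for inverse temperatures $\beta_0 < \beta_1 < \dots < \beta_\ell$,

$$Z(\beta_\ell) \;=\; Z(\beta_0)\prod_{i=0}^{\ell-1}\frac{Z(\beta_{i+1})}{Z(\beta_i)}, \qquad \frac{Z(\beta_{i+1})}{Z(\beta_i)} \;=\; \mathbb{E}_{x\sim \pi_{\beta_i}}\!\left[e^{-(\beta_{i+1}-\beta_i)H(x)}\right],$$

where $\pi_{\beta}(x) \propto e^{-\beta H(x)}$ is the Gibbs distribution and $Z(\beta_0)$ (typically at $\beta_0 = 0$) is known in closed form. Each ratio is the mean of a simple function over a Gibbs distribution, which is why the length of the schedule and the variance of these ratio estimators govern the overall sample complexity.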

  • Conference Article
  • Cited by 12
  • 10.1049/ic:19970792
Complexity and algorithms in quantum computation
  • Jan 1, 1997
  • R Jozsa

In 1982 Feynman noted that the simulation of a quantum process on a classical computer appears to generally involve an exponential slowdown in running time compared to the evolution of the process itself. More recently in quantum computation this effect has been turned around and exploited for computational advantage by formulating quantum processes (quantum algorithms) whose evolution corresponds to the performance of useful computational tasks. Thus the quantum algorithm will engender an exponential speedup in computing time. The most famous such quantum algorithm is Shor's algorithm for factorising whole numbers. It factorises a number having d digits in a time of order less than d³, whereas no known classical algorithm can perform factorisation in a time bounded by any polynomial in d. In theoretical computer science, computational complexity is generally characterised by classifying computational tasks into complexity classes such as P, BPP and NP. It therefore appears that quantum computers will be able to transgress some of the boundaries set by these classes. (2 pages)

  • Book Chapter
  • Cited by 3
  • 10.1007/978-3-030-32520-6_34
Quantum Computer Search Algorithms: Can We Outperform the Classical Search Algorithms?
  • Oct 13, 2019
  • Avery Leider + 3 more

Quantum computers are not limited to just two states. Qubits, the basic units of quantum computing, can exist in more than one state at a time. While classical computers only perform operations by manipulating classical bits with the two values 0 and 1, quantum bits can represent data in multiple states. This property of holding multiple states at once is called superposition, and it gives quantum computers tremendous power over classical computers. With this power, algorithms designed for quantum computers to answer search queries can yield results significantly faster than classical algorithms. There are four types of problems: Polynomial (P), Non-Deterministic Polynomial (NP), Non-Deterministic Polynomial Complete (NP-complete) and Non-Deterministic Polynomial Hard (NP-hard). P problems can be solved in a polynomial amount of time, for example searching a database for an item. However, when the size of the search space grows, it becomes difficult to compute solutions even for P problems. Quantum algorithms such as Grover's algorithm have reduced the time complexity of some classical search problems from N to \(\sqrt{N}\). Variants of Grover's algorithm, such as Quantum Partial Search, trade exact answers for approximate ones in even less time than Grover's algorithm. NP problems are those whose solutions, if known, can be verified in a polynomial amount of time. Integer factorization, which is in NP, takes a super-polynomial amount of time with the best known classical algorithms, while Shor's quantum algorithm computes it in polynomial time. Factorization also belongs to the class of bounded-error quantum polynomial time (BQP) problems, decision problems solvable by quantum computers in polynomial time. There are problems for which an efficient solution would yield a solution to every problem in NP; these are the NP-complete problems. The power of qubits could be exploited in the future to find solutions to NP-complete problems.
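
To make the N versus \(\sqrt{N}\) comparison concrete, the short Python sketch below (illustrative only, not taken from the chapter; the function names are ours) computes the standard optimal number of Grover iterations and the resulting success probability for a search space of size N containing M marked items.

```python
import math

def grover_iterations(N: int, M: int = 1) -> int:
    """Optimal number of Grover iterations for M marked items out of N."""
    theta = math.asin(math.sqrt(M / N))      # rotation angle per iteration
    return max(0, math.floor(math.pi / (4 * theta)))

def success_probability(N: int, M: int, k: int) -> float:
    """Probability of measuring a marked item after k Grover iterations."""
    theta = math.asin(math.sqrt(M / N))
    return math.sin((2 * k + 1) * theta) ** 2

if __name__ == "__main__":
    N = 1 << 20                              # about a million items
    k = grover_iterations(N)                 # roughly (pi/4) * sqrt(N)
    print(f"classical worst case: {N} queries")
    print(f"Grover iterations:    {k} (success prob {success_probability(N, 1, k):.4f})")
```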

  • Research Article
  • Cited by 1
  • 10.54097/hset.v38i.5831
Comparison of Quantum and Classical Algorithm in Searching a Number in a Database Case
  • Mar 16, 2023
  • Highlights in Science, Engineering and Technology
  • Zhiyao Wang

Quantum computing is currently one of the most active research fields, and many quantum algorithms have been proposed in order to harness the power of quantum computers; Grover's search algorithm is one of them. In this article, by comparing a classical search algorithm and Grover's algorithm on the problem of finding a number in a finite database, the advantages of the latter are discussed. The actual quantum circuit to solve the problem is built and run on both a simulator and a real quantum computer. According to the analysis, Grover's algorithm provides a speedup in the search task compared to the classical algorithm. However, noise in today's quantum devices makes the result of the quantum algorithm unreliable, and Grover's algorithm has its shortcomings when searching for multiple numbers. Noise in quantum computing therefore needs to be addressed in order to realize the potential of quantum computers on difficult problems. These results shed light on further exploration of quantum algorithms and quantum computing.

  • Research Article
  • Cited by 3
  • 10.1016/j.future.2024.107480
Quantum resource estimation for large scale quantum algorithms
  • Aug 12, 2024
  • Future Generation Computer Systems
  • Vlad Gheorghiu + 1 more

Quantum algorithms are often represented in terms of quantum circuits operating on ideal (logical) qubits. However, the practical implementation of these algorithms poses significant challenges. Many quantum algorithms require a substantial number of logical qubits, and the inherent susceptibility of quantum computers to errors requires quantum error correction. The integration of error correction introduces overhead in terms of both space (physical qubits required) and runtime (how long the algorithm needs to run). This paper addresses the complexity of comparing classical and quantum algorithms, primarily stemming from the additional quantum error correction overhead. We propose a comprehensive framework that facilitates a direct and meaningful comparison between classical and quantum algorithms. By acknowledging and addressing the challenges introduced by quantum error correction, our framework aims to provide a clearer understanding of the comparative performance of classical and quantum computing approaches. This work contributes to understanding the practical viability and potential advantages of quantum algorithms in real-world applications. We apply our framework to quantum cryptanalysis, since it is well known that quantum algorithms can break factoring- and discrete-logarithm-based cryptography and weaken symmetric cryptography and hash functions. In order to estimate the real-world impact of these attacks, apart from tracking the development of fault-tolerant quantum computers, it is important to have an estimate of the resources needed to implement these quantum attacks. This analysis provides state-of-the-art snapshot estimates of the realistic costs of implementing quantum attacks on these important cryptographic algorithms, assuming quantum fault-tolerance is achieved using surface code methods, and spanning a range of potential error rates. These estimates serve as a guide for gauging the realistic impact of these algorithms and for benchmarking the impact of future advances in quantum algorithms, circuit synthesis and optimization, fault-tolerance methods and physical error rates.
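
The following is a deliberately rough, self-contained Python sketch of the kind of back-of-the-envelope estimate such a framework formalizes. The scaling model and constants below are the usual textbook approximations for the surface code, assumed here for illustration; they are not numbers taken from the paper, and the workload in the example is hypothetical.

```python
import math

def surface_code_distance(p_phys: float, p_target: float,
                          p_th: float = 1e-2, A: float = 0.1) -> int:
    """Smallest odd code distance d with A * (p_phys/p_th)**((d+1)/2) <= p_target.

    Uses the standard rough scaling model for surface-code logical error rates;
    the constants A and p_th are illustrative assumptions, not measured values.
    """
    d = 3
    while A * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

def physical_qubits(n_logical: int, d: int) -> int:
    """Very rough count: ~2*d^2 physical qubits per logical qubit (data + ancilla)."""
    return n_logical * 2 * d * d

if __name__ == "__main__":
    # Hypothetical workload: 4000 logical qubits and 1e10 logical operations.
    n_logical, logical_ops = 4000, 1e10
    p_target_per_op = 0.01 / logical_ops      # keep total failure probability near 1%
    d = surface_code_distance(p_phys=1e-3, p_target=p_target_per_op)
    print(f"code distance d       = {d}")
    print(f"physical qubits (est) ~ {physical_qubits(n_logical, d):,}")
```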

  • Research Article
  • Cited by 6
  • 10.1016/j.sasc.2024.200118
Quantum maximum power point tracking (QMPPT) for optimal solar energy extraction
  • Jul 3, 2024
  • Systems and Soft Computing
  • Habib Feraoun + 5 more

  • Book Chapter
  • 10.1093/oso/9780198570004.003.0012
Quantum Computational Complexity Theory and Lower Bounds
  • Nov 16, 2006
  • Phillip Kaye + 2 more

We have seen in the previous chapters that quantum computers seem to be more powerful than classical computers for certain problems. There are limits on the power of quantum computers, however. Since a classical computer can simulate a quantum computer, a quantum computer can only compute the same set of functions that a classical computer can. The advantage of using a quantum computer is that the amount of resources needed by a quantum algorithm might be much less than what is needed by the best classical algorithm. In Section 9.1 we briefly define some classical and quantum complexity classes and give some relationships between them. Most of the interesting questions relating classical and quantum complexity classes remain open. For example, we do not yet know if a quantum computer is capable of efficiently solving an NP-complete problem (defined later). One can prove upper bounds on the difficulty of a problem by providing an algorithm that solves that problem, and proving that it will work within a given running time. But how does one prove a lower bound on the computational complexity of a problem? For example, if we wish to find the product of two n-bit numbers, computing the answer requires outputting roughly 2n bits and that requires Ω(n) steps (in any computing model with finite-sized gates). The best-known upper bound for integer multiplication is O(n log n log log n) steps. It has proved extremely difficult to derive non-trivial lower bounds on the computational complexity of a problem. Most of the known non-trivial lower bounds are in the ‘black-box’ model (for both classical and quantum computing), where we only query the input via a ‘black-box’ of a specific form. We discuss the black-box model in more detail in Section 9.2. We then sketch several approaches for proving black-box lower bounds. The first technique has been called the ‘hybrid method’ and was used to prove that quantum searching requires Ω(√n) queries to succeed with constant probability. The second technique is called the ‘polynomial method’. We then describe a technique based on ‘block sensitivity’, and conclude with a technique known as the ‘adversary method’.
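
As a one-line illustration of the 'polynomial method' mentioned above (a standard statement, not quoted from the book): the acceptance probability of a quantum algorithm making $T$ black-box queries to an input $x \in \{0,1\}^n$ is a real multilinear polynomial $p(x_1,\dots,x_n)$ of degree at most $2T$, so any algorithm computing a Boolean function $f$ with bounded error satisfies

$$T \;\ge\; \frac{\widetilde{\deg}(f)}{2},$$

where $\widetilde{\deg}(f)$ is the approximate degree of $f$. For the $n$-bit OR function (unstructured search), $\widetilde{\deg}(\mathrm{OR}_n) = \Theta(\sqrt{n})$, which recovers the $\Omega(\sqrt{n})$ search lower bound stated above.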

  • Research Article
  • Cited by 25
  • 10.1002/1099-0526(200009/10)6:1<35::aid-cplx1005>3.0.co;2-t
Reflections on quantum computing
  • Sep 1, 2000
  • Complexity
  • Christian S Calude + 2 more

  • Conference Article
  • Cited by 3
  • 10.4230/lipics.stacs.2021.4
Improved (Provable) Algorithms for the Shortest Vector Problem via Bounded Distance Decoding
  • Mar 2, 2021
  • Divesh Aggarwal + 3 more

The most important computational problem on lattices is the Shortest Vector Problem (SVP). In this paper, we present new algorithms that improve the state-of-the-art for provable classical/quantum algorithms for SVP. We present the following results. 1) A new algorithm for SVP that provides a smooth tradeoff between time complexity and memory requirement. For any positive integer 4 ≤ q ≤ √n, our algorithm takes q^{13n+o(n)} time and requires poly(n)⋅ q^{16n/q²} memory. This tradeoff, which ranges from enumeration (q = √n) to sieving (q constant), is a consequence of a new time-memory tradeoff for Discrete Gaussian sampling above the smoothing parameter. 2) A quantum algorithm that runs in time 2^{0.9533n+o(n)} and requires 2^{0.5n+o(n)} classical memory and poly(n) qubits. This improves over the previously fastest classical (which is also the fastest quantum) algorithm due to [Divesh Aggarwal et al., 2015] that has a time and space complexity 2^{n+o(n)}. 3) A classical algorithm for SVP that runs in 2^{1.741n+o(n)} time and 2^{0.5n+o(n)} space. This improves over an algorithm of [Yanlin Chen et al., 2018] that has the same space complexity. The time complexities of our classical and quantum algorithms are expressed using a quantity related to the kissing number of a lattice. A known upper bound on this quantity is 2^{0.402n}, but in practice for most lattices it can be much smaller, even 2^{o(n)}. In that case, our classical algorithm runs in time 2^{1.292n} and our quantum algorithm runs in time 2^{0.750n}.

  • Conference Article
  • Cited by 2
  • 10.1109/isec52395.2021.9764017
Comparing Grover’s Quantum Search Algorithm with Classical Algorithm on Solving Satisfiability Problem
  • Mar 13, 2021
  • Runqian Wang

The emergence of quantum computing provides us the possibility of solving tasks that might take years classically in just a few minutes. For certain problems, quantum computing exhibits quantum supremacy, meaning that the quantum solution runs dramatically faster than classical algorithms and can outperform classical computers. This high efficiency of quantum computing comes not only from the hardware but also from the software, quantum algorithms. The algorithms utilize the qubits to make calculations in order to fulfill specific tasks with the lowest time complexity possible. One such algorithm is Grover's algorithm, which is able to perform database search in $\mathcal{O}(\sqrt{N})$, and it runs much faster than the traditional algorithm that takes $\mathcal{O}(N)$ time to solve the same task. For example, when the task is to find the even integers among N integers, traditional computation will need to run through all of the N integers one by one, making at least N steps of calculation, while by using Grover's algorithm only around $\sqrt{N}$ calculations are needed. This quadratic speed-up makes Grover's algorithm one of the most important quantum algorithms. Grover's algorithm has wide application in many fields and is able to improve the time complexity quadratically. One task that can be solved using Grover's algorithm is the satisfiability problem. This type of problem asks the computer to find a set of values (commonly true or false) for several variables such that they satisfy certain constraints. We use k-SAT problems to refer to satisfiability problems with k boolean variables to be determined. Grover's algorithm can effectively solve the k-SAT problem by performing the database search on the $2^{N}$ possible states of the variables. The algorithm's square-root optimization on searching helps to improve the efficiency of this solution significantly. Furthermore, this optimization of Grover's algorithm may play a more important role when k grows larger, and consequently the efficiency of the quantum solution could improve faster relative to the traditional solution. Yet this hypothesis has never been tested due to the lack of a general k-SAT quantum algorithm. No quantum algorithms solving k-SAT problems where k is greater than 3 have been proposed, thus no test has been performed to compare the quantum solution and the classical solution on more general k-SAT problems. In this research, we formulate a general quantum solution for the k-SAT problem and compare this solution with the best classical algorithm to determine whether and when the quantum algorithm performs better on satisfiability problems. The comparison will be done through both theoretical deduction as well as real-world implementation. At the end of this research, we will determine whether the proposed quantum algorithm outperforms the classical algorithm on solving k-satisfiability problems.
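
To make the query-count comparison in the abstract concrete, here is a small, self-contained Python sketch (illustrative only, not from the paper; the variable names and the random 3-SAT instance are hypothetical). It evaluates a CNF formula as the search oracle, counts classical brute-force evaluations over all assignments, and compares that with the roughly (π/4)·√(2^n) Grover iterations a quantum search would need.

```python
import math
import random
from itertools import product

def eval_cnf(clauses, assignment):
    """Each clause is a list of nonzero ints: +i means x_i, -i means NOT x_i."""
    return all(any((lit > 0) == assignment[abs(lit) - 1] for lit in clause)
               for clause in clauses)

def brute_force_queries(clauses, n):
    """Classical exhaustive search: oracle evaluations until a model is found."""
    for q, bits in enumerate(product([False, True], repeat=n), start=1):
        if eval_cnf(clauses, bits):
            return q, bits
    return 2 ** n, None

if __name__ == "__main__":
    random.seed(0)
    n, m = 16, 60                                 # hypothetical random 3-SAT instance
    clauses = [[random.choice([-1, 1]) * v
                for v in random.sample(range(1, n + 1), 3)] for _ in range(m)]
    q, model = brute_force_queries(clauses, n)
    grover_q = math.ceil(math.pi / 4 * math.sqrt(2 ** n))   # assumes one marked item
    print(f"satisfiable: {model is not None}")
    print(f"classical evaluations used: {q} (worst case {2 ** n})")
    print(f"Grover iterations needed:   ~{grover_q}")
```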

  • Research Article
  • Cited by 4
  • 10.1016/j.amc.2011.06.057
De-quantisation of the quantum Fourier transform
  • Jul 23, 2011
  • Applied Mathematics and Computation
  • Alastair A Abbott

  • Research Article
  • Cited by 12
  • 10.4204/eptcs.26.1
Understanding the Quantum Computational Speed-up via De-quantisation
  • Jun 9, 2010
  • Electronic Proceedings in Theoretical Computer Science
  • Alastair A Abbott + 1 more

While it seems possible that quantum computers may allow for algorithms offering a computational speed-up over classical algorithms for some problems, the issue is poorly understood. We explore this computational speed-up by investigating the ability to de-quantise quantum algorithms into classical simulations of the algorithms which are as efficient in both time and space as the original quantum algorithms. The process of de-quantisation helps formulate conditions to determine if a quantum algorithm provides a real speed-up over classical algorithms. These conditions can be used to develop new quantum algorithms more effectively (by avoiding features that could allow the algorithm to be efficiently classically simulated), as well as providing the potential to create new classical algorithms (by using features which have proved valuable for quantum algorithms). Results on many different methods of de-quantisation are presented, as well as a general formal definition of de-quantisation. De-quantisations employing higher-dimensional classical bits, as well as those using matrix simulations, put emphasis on entanglement in quantum algorithms; a key result is that any algorithm in which the entanglement is bounded is de-quantisable. These methods are contrasted with the stabiliser formalism de-quantisations due to the Gottesman-Knill Theorem, as well as those which take advantage of the topology of the circuit for a quantum algorithm. The benefits of the different methods are contrasted, and the importance of a range of techniques is emphasised. We further discuss some features of quantum algorithms which current de-quantisation methods do not cover.

  • Research Article
  • Cited by 3
  • 10.1017/s0960129507006366
Complexity of chaos and quantum computation
  • Dec 1, 2007
  • Mathematical Structures in Computer Science
  • Bertrand Georgeot

This paper reviews recent work related to the interplay between quantum information and computation on the one hand and classical and quantum chaos on the other. First, we present several models of quantum chaos that can be simulated efficiently on a quantum computer. Then a discussion of information extraction shows that such models can give rise to complete algorithms including measurements that can achieve an increase in speed compared with classical computation. It is also shown that models of classical chaos can be simulated efficiently on a quantum computer, and again information can be extracted efficiently from the final wave function. The total gain can be exponential or polynomial, depending on the model chosen and the observable measured. The simulation of such systems is also economical in the number of qubits, allowing implementation on present-day quantum computers, some of these algorithms having already been experimentally implemented. The second topic considered concerns the analysis of errors on quantum computers. It is shown that quantum chaos algorithms can be used to explore the effect of errors on quantum algorithms, such as random unitary errors or dissipative errors. Furthermore, the tools of quantum chaos allow a direct analysis of the effects of static errors on quantum computers. Finally, we consider the different resources used by quantum information, and show that quantum chaos has some precise consequences for entanglement generation, which becomes close to maximal. For another resource, interference, a proposal is presented for quantifying it, enabling a discussion on entanglement and interference generation in quantum algorithms.
