Abstract

Tensor networks represent the state-of-the-art in computational methods across many disciplines, including the classical simulation of quantum many-body systems and quantum circuits. Several applications of current interest give rise to tensor networks with irregular geometries. Finding the best possible contraction path for such networks is a central problem, with an exponential effect on computation time and memory footprint. In this work, we implement new randomized protocols that find very high quality contraction paths for arbitrary and large tensor networks. We test our methods on a variety of benchmarks, including the random quantum circuit instances recently implemented on Google quantum chips. We find that the paths obtained can be very close to optimal, and often many orders of magnitude better than the most established approaches. As different underlying geometries suit different methods, we also introduce a hyper-optimization approach, where both the method applied and its algorithmic parameters are tuned during the path finding. The increase in quality of contraction schemes found has significant practical implications for the simulation of quantum many-body systems and particularly for the benchmarking of new quantum chips. Concretely, we estimate a speed-up of over 10,000× compared to the original expectation for the classical simulation of the Sycamore 'supremacy' circuits.
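To make the role of the contraction path concrete, below is a minimal sketch (my own illustration, not the authors' hyper-optimizer) that uses the general-purpose opt_einsum package, assumed to be installed, to compare a greedy path against the exhaustively optimal one for a small irregular network, reporting the floating-point cost and the size of the largest intermediate tensor.

```python
# Minimal illustration (assumes the opt_einsum package; this is NOT the
# paper's hyper-optimized contractor) of how strongly the contraction
# path affects cost and the size of the largest intermediate tensor.
import numpy as np
import opt_einsum as oe

# A small tensor network with an irregular geometry, written as an
# einsum expression: five rank-3 tensors sharing bonds of dimension 8.
eq = 'abc,bde,cef,dgh,fgh->a'
arrays = [np.random.rand(8, 8, 8) for _ in range(5)]

for method in ('greedy', 'optimal'):
    path, info = oe.contract_path(eq, *arrays, optimize=method)
    print(method, 'flops:', info.opt_cost,
          'largest intermediate:', info.largest_intermediate)

# The chosen path can then be reused to actually perform the contraction:
result = oe.contract(eq, *arrays, optimize='optimal')
```

For networks of hundreds or thousands of tensors the exhaustive 'optimal' search is infeasible, which is where randomized and hyper-optimized path finders of the kind described in the abstract come in.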

Highlights

  • We benchmark our contractors on six classes of tensor networks with complex geometry – random regular graphs, random planar graphs, square lattices, weighted model counting formulae, Quantum Approximate Optimization Algorithm (QAOA) energy computation, and random quantum circuits

  • For OBC, we find W is significantly reduced from the Time Evolving Block Decimation (TEBD)-Exact scaling of 2^L (Fig. 6(a)), as well as C (Fig. 6(b))

  • We have introduced heuristic algorithms for the contraction of arbitrary tensor networks that show very good performance across a range of benchmarks


Summary

Introduction

Since the advent of the density-matrix renormalization group algorithm, invented to study one-dimensional lattice systems of quantum degrees of freedom, tensor networks have permeated a plethora of scientific disciplines, finding use in fields such as quantum condensed matter [1,2,3,4], classical statistical mechanics [5,6,7], information science and big-data processing [8,9], systems engineering [10], quantum computation [11], machine learning and artificial reasoning [12,13,14], and more. The underlying idea of tensor network methods is to use sparse networks of interconnected low-rank tensors to represent data structures that would otherwise be expressed in (very) high-rank tensor form, which is hard to manipulate. Due to this ubiquity, techniques to perform (multi)linear algebraic operations on tensor networks accurately and efficiently are very useful to a highly interdisciplinary community of researchers and engineers. Efficient tensor network contraction is possible in special cases in which network topology (e.g., trees), values of tensor entries, or both are restricted [21,22,23,24,25,26]. Despite these results, contracting tensor networks with arbitrary structure remains (at least) #P-hard in the general case [27,28]. This is true, in particular, for tensor networks that model random quantum circuits.
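As a toy illustration of this underlying idea (my own example, not taken from the paper), the sketch below represents a rank-6 tensor by a chain of six small tensors, in the style of a matrix product state, and contracts the network back into dense form; each contraction step only ever touches small intermediate tensors.

```python
# Toy illustration (my own example, not from the paper): a rank-6 tensor
# with 2^6 = 64 entries represented as a chain of six three-index tensors,
# then contracted back into its dense high-rank form.
import numpy as np

n, phys, bond = 6, 2, 3  # number of sites, physical and bond dimensions

# Chain tensors: T[k] has indices (left bond, physical, right bond);
# the two boundary bonds have dimension 1.
tensors = [np.random.rand(1 if k == 0 else bond,
                          phys,
                          1 if k == n - 1 else bond)
           for k in range(n)]

# Contract the chain left to right, summing over each shared bond index.
result = tensors[0]
for T in tensors[1:]:
    result = np.tensordot(result, T, axes=([-1], [0]))

# Drop the trivial boundary bonds to recover the dense rank-6 tensor.
dense = result.reshape((phys,) * n)
print(dense.shape)  # (2, 2, 2, 2, 2, 2)
```

For a chain, contracting end to end is already a good order; for the irregular geometries discussed in this paper, choosing which pair of tensors to contract next is exactly the hard optimization problem addressed by the authors.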

