Abstract

Contracting tensor networks is often computationally demanding. Well-designed contraction sequences can dramatically reduce the contraction cost. We apply simulated annealing and genetic algorithms, two common discrete optimization techniques, to this ordering problem and explore their performance. We benchmark them, as well as the commonly used greedy search, on physically relevant tensor networks. Where computationally feasible, we also compare them with the optimal contraction sequence obtained by an exhaustive search. Furthermore, we present a systematic comparison with state-of-the-art tree decomposition and graph partitioning algorithms in the context of random regular graph tensor networks. We find that the algorithms we consider consistently outperform a greedy search given equal computational resources, with an advantage that scales with tensor network size. We compare the obtained contraction sequences and identify signs of highly non-local optimization, with the more sophisticated algorithms sacrificing run time early in the contraction for better overall performance.
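The ordering problem can be made concrete with a small sketch. The toy network below (a 2x2 periodic grid of four 4-index tensors with uniform bond dimension) and the linear-order cost model are illustrative assumptions of this sketch, not the paper's benchmark networks or its equation (6); the annealing loop shows the general idea of searching over contraction sequences, where contracting tensors with index sets A and B costs chi**|A ∪ B| multiplications.

```python
import math
import random

# Toy tensor network (an assumption for illustration, not the paper's
# benchmark): four 4-index tensors on a 2x2 torus, every bond shared by
# exactly two tensors, uniform bond dimension CHI.
CHI = 4
TENSORS = [
    frozenset("abef"),  # site (0, 0)
    frozenset("abgh"),  # site (0, 1)
    frozenset("cdef"),  # site (1, 0)
    frozenset("cdgh"),  # site (1, 1)
]

def contraction_cost(order, tensors=TENSORS, chi=CHI):
    """Multiply count for contracting the tensors pairwise, left to right
    in the given order: each step costs chi**|union of indices|, and the
    shared indices are summed out of the running intermediate."""
    current = tensors[order[0]]
    cost = 0
    for i in order[1:]:
        union = current | tensors[i]
        cost += chi ** len(union)
        current = current ^ tensors[i]  # shared bonds are contracted away
    return cost

def anneal(order, steps=400, t0=1e5, decay=0.98, rng=random.Random(0)):
    """Simulated annealing over contraction orders: propose a random
    transposition, always accept downhill moves, accept uphill moves
    with Boltzmann probability, and track the best cost seen."""
    order = list(order)
    cost = contraction_cost(order)
    best = cost
    temp = t0
    for _ in range(steps):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        new = contraction_cost(order)
        if new <= cost or rng.random() < math.exp((cost - new) / temp):
            cost = new
            best = min(best, cost)
        else:
            order[i], order[j] = order[j], order[i]  # reject: undo swap
        temp *= decay
    return best

print(contraction_cost([0, 3, 1, 2]))  # poor order: 131328 multiplies
print(anneal([0, 3, 1, 2]))            # annealing finds the cheap order: 8448
```

Even on this four-tensor example the spread between orders is more than a factor of 15, because a poor order builds an intermediate carrying all eight open bonds; on larger networks the gap grows rapidly, which is what makes sequence optimization worthwhile.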

Highlights

  • Tensor networks are a convenient language for studying the statistics of discrete systems with local interactions

  • For simulated annealing and the genetic algorithm, we show the contraction cost given by equation (6) of the best contraction sequence found as a function of the number of cost function evaluations used

  • We show, for comparison, a typical hand-crafted contraction sequence similar to the corner transfer matrix method commonly used with projected entangled pair states (PEPS)

Summary

July 2020

Original content from this work may be used under the terms of the Creative Commons Attribution 4.0 licence.

Introduction
Tensor network contraction
Algorithms
Numerical experiments
Contraction sequences
Random regular networks
Findings
Conclusions
