Abstract

Tensor network states (TNS) are a powerful approach for the study of strongly correlated quantum matter. The curse of dimensionality is addressed by parametrizing the many-body state in terms of a network of partially contracted tensors. These tensors form a substantially reduced set of effective degrees of freedom. In practical algorithms, functionals such as energy expectation values or overlaps are optimized over certain sets of TNS. For algorithmic stability, it is important whether the considered sets are closed: otherwise, the algorithms may approach a boundary point that lies outside the TNS set, causing tensor elements to diverge. We discuss the closedness and geometries of TNS sets, and we propose regularizations for optimization problems on non-closed TNS sets. We show that sets of matrix product states (MPS) with open boundary conditions, tree tensor network states, and the multiscale entanglement renormalization ansatz are always closed, whereas sets of translation-invariant MPS with periodic boundary conditions (PBC), heterogeneous MPS with PBC, and projected entangled pair states are generally not closed. The non-closedness is demonstrated using explicit examples such as the W state, states that we call two-domain states, and fine-grained versions thereof.
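For orientation, the following is a minimal sketch (not part of the published abstract) of two objects named above, written with generic conventions: a translation-invariant MPS with PBC on N sites built from a single site tensor A with physical index s and bond dimension D, and the W state that appears among the counterexamples; the symbols N, A, D, and s are standard notation assumed here, not fixed by the source.

\[
|\psi(A)\rangle \;=\; \sum_{s_1,\dots,s_N} \mathrm{Tr}\!\left(A^{s_1} A^{s_2} \cdots A^{s_N}\right) |s_1 s_2 \cdots s_N\rangle,
\qquad
|W\rangle \;=\; \frac{1}{\sqrt{N}} \bigl( |10\cdots 0\rangle + |01\cdots 0\rangle + \cdots + |0\cdots 01\rangle \bigr).
\]

Here the W state is one of the explicit examples mentioned in the abstract; the specific bond dimensions and limiting constructions are detailed in the paper itself.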
