Abstract

Many computational problems can be formulated in terms of high-dimensional functions. Simple representations of such functions and resulting computations with them typically suffer from the "curse of dimensionality," an exponential cost dependence on dimension. Tensor networks provide a way to represent certain classes of high-dimensional functions with polynomial memory. This results in computations where the exponential cost is ameliorated or, in some cases, removed, if the tensor network representation can be obtained. Here, we introduce a direct mapping from the arithmetic circuit of a function to arithmetic circuit tensor networks, avoiding the need to perform any optimization or functional fit. We demonstrate the power of the circuit construction in examples of multivariable integration on the unit hypercube in up to 50 dimensions, where the complexity of integration can be understood from the circuit structure. We find very favorable cost scaling compared with quasi-Monte Carlo integration for these cases and further give an example where efficient quasi-Monte Carlo integration cannot be performed without knowledge of the underlying tensor network circuit structure.
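The core idea, that a function's arithmetic structure yields a compact tensor network whose contraction performs the integral, can be illustrated with a toy case. The following is a minimal sketch, not the paper's code: it assumes a simple rank-2 test function f(x) = ∏ cos(x_i) + ∏ sin(x_i), whose arithmetic circuit factorizes over dimensions, so the integral over [0,1]^d reduces to contracting a chain of 2×2 matrices at cost linear in d. The test function, quadrature size, and rank-2 decomposition are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch (illustrative, not the paper's construction):
# integrate f(x) = prod_i cos(x_i) + prod_i sin(x_i) over the
# unit hypercube [0,1]^d by contracting a rank-2 tensor network
# that mirrors the function's arithmetic structure.

d = 50  # number of dimensions

# Gauss-Legendre quadrature, mapped from [-1,1] to [0,1].
x, w = np.polynomial.legendre.leggauss(16)
x, w = 0.5 * (x + 1.0), 0.5 * w

# Integrate each one-dimensional factor once.
c = w @ np.cos(x)  # integral of cos(x_i) over [0,1]
s = w @ np.sin(x)  # integral of sin(x_i) over [0,1]

# Each dimension contributes a diagonal 2x2 "core": the two
# branches of the sum propagate independently along the bond.
core = np.diag([c, s])
v = np.array([1.0, 1.0])  # left boundary vector
for _ in range(d):
    v = v @ core  # contract one dimension at a time

integral = v @ np.array([1.0, 1.0])  # sum the two branches

# Exact value for comparison: sin(1)^d + (1 - cos(1))^d.
exact = np.sin(1.0) ** d + (1.0 - np.cos(1.0)) ** d
print(integral, exact)
```

In this toy case the contraction is exact up to one-dimensional quadrature error and costs O(d) small matrix-vector products, whereas a sampling estimate of the same 50-dimensional integral converges only polynomially in the number of samples.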
