Abstract

Computational complexity studies the intrinsic difficulty of solving mathematically posed problems. Discrete computational complexity studies discrete problems and often uses the Turing machine model of computation. Continuous computational complexity studies continuous problems and tends to use the real number model. Continuous computational complexity may be split into two branches. The first deals with problems for which the information is complete. Informally, information may be complete for problems which are specified by a finite number of inputs. Examples include matrix multiplication and solving linear systems or systems of polynomial equations. We mention two specific results. The first is for multiplication of two real n x n matrices. The trivial lower bound on the complexity is of order n^2, whereas the best known upper bound is of order n^2.376, as proven by D. Coppersmith and S. Winograd. The actual complexity of matrix multiplication is still unknown. The second result is for the problem of deciding whether a system of n real polynomials of degree 4 has a real root. This problem is NP-complete over the reals, as proven by L. Blum, M. Shub and S. Smale.

The other branch of continuous computational complexity is IBC, information-based complexity. Typically, IBC studies infinite-dimensional problems for which the input is an element of an infinite-dimensional space. Examples of such inputs include multivariate functions on the reals. Information is often given as function values at finitely many points. Therefore information is partial and the original problem can be solved only approximately. The goal of IBC is to compute such an approximation as inexpensively as possible. The error and the cost of approximation can be defined in different settings, including the worst case, average case, probabilistic, randomized and mixed settings. In the second part of the talk we concentrate on multivariate problems.
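To make the matrix multiplication bounds concrete, the following is a minimal sketch (not from the talk) of the classical algorithm, which uses on the order of n^3 arithmetic operations; the n^2.376 upper bound cited above comes from far more sophisticated algorithms, while the trivial n^2 lower bound holds simply because every entry of the inputs must be read.

```python
def matmul(A, B):
    """Classical matrix multiplication of two n x n matrices.

    Uses three nested loops, i.e. on the order of n^3 arithmetic
    operations -- well above the best known upper bound of order
    n^2.376 and the trivial lower bound of order n^2.
    """
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for k in range(n):
            a_ik = A[i][k]
            for j in range(n):
                C[i][j] += a_ik * B[k][j]
    return C
```

The loop order (i, k, j) is chosen so that the innermost loop scans rows of B contiguously; the operation count is the same for any ordering.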
By a multivariate problem we mean an approximation of a linear or nonlinear operator defined on functions of d variables. We wish to compute an e-approximation with minimal cost. We are particularly interested in large d and/or large 1/e. Typical examples of such problems are multivariate integration and approximation, as well as multivariate integral equations and global optimization. Many multivariate problems are intractable in the worst case deterministic setting, i.e., their complexity grows exponentially with the number d of variables. This is sometimes called the curse of dimension. This holds for multivariate integration for the Korobov class of functions, as proven in our recent paper with Ian Sloan. The exponential dependence on the dimension d is a complexity result, and one cannot get around it by designing clever algorithms. To break the curse of dimension of the worst case deterministic setting we have to settle for a weaker assurance. One way is to settle for a randomized or average case setting. In the randomized setting, it is well known that the classical Monte Carlo algorithm breaks the curse of dimension for multivariate integration. However, there are problems which suffer the curse of dimension also in the randomized setting. An example is provided by multivariate approximation. In the average case setting, the curse of dimension is broken for multivariate integration regardless of the probability measure placed on the class of functions.
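The claim that Monte Carlo breaks the curse of dimension can be illustrated with a minimal sketch (an assumption-laden illustration, not an algorithm from the talk): the expected error of the estimate below behaves like sigma(f)/sqrt(n), where sigma(f) is the standard deviation of f under the uniform measure, so the number of samples n needed for a given accuracy does not grow with the dimension d.

```python
import random

def monte_carlo_integrate(f, d, n, seed=0):
    """Estimate the integral of f over the unit cube [0,1]^d
    with n i.i.d. uniform samples.

    The expected error is of order sigma(f)/sqrt(n), independent
    of the dimension d -- Monte Carlo breaks the curse of dimension
    for multivariate integration in the randomized setting.
    """
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = [rng.random() for _ in range(d)]
        total += f(x)
    return total / n
```

For example, for f(x) = x_1 + ... + x_d over [0,1]^d the exact integral is d/2, and the estimate above approaches it at the same sqrt(n) rate whether d is 2 or 200.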
