Abstract

We study ε-approximation of the solution of the d-variate Volterra problem of the second kind, the Volterra operator having a convolution kernel. The Volterra operator is defined on an arbitrary normed function space F_d that is continuously embedded in the space of square integrable functions defined on the unit d-cube.

Admissible information is given by continuous linear functionals on F_d. These functionals might be arbitrarily chosen; alternatively, we may allow only function values. (In the latter case, we can only consider spaces F_d for which function values are well-defined continuous linear functionals.) The error and cost are measured in the worst case or randomized setting in the L_2-norm.

Our first result is that lower and upper bounds on the information complexity of the Volterra problem and of d-variate L_2-approximation differ by at most a constant factor. Using this result, we then show that necessary and sufficient conditions characterizing a given kind of tractability for multivariate approximation also characterize that same kind of tractability for the Volterra problem. We consider different kinds of algebraic tractability (in which we compare the information complexity to d and ε^{-1}) and exponential tractability (in which we compare the information complexity to d and 1 + ln ε^{-1}).

However, our comparison of the Volterra problem to approximation falls short in one respect. Multivariate approximation is linear; for many spaces (e.g., Hilbert spaces), the combinatory cost of multivariate approximation is roughly the same as its information complexity. But since the Volterra problem is nonlinear, it is unclear what the combinatory cost will be for the Volterra problem. This means that we do not know the extent to which the combinatory cost will exceed the information complexity. We partially address this issue by seeing whether the Picard iteration can give us an approximation without too great a penalty when we include the combinatory cost; this penalty is measured by the normalized combinatory cost, defined as the ratio of the combinatory cost to the cost of one admissible operation.

In particular, suppose that we agree to compute an ε-approximation to the Volterra problem by (randomized) Monte Carlo. Suppose further that the convolution kernels are uniformly bounded in the L_∞-norm. We then obtain an upper bound on the normalized combinatory cost of the Monte Carlo algorithm in the randomized setting. This upper bound is larger than the information complexity by roughly a factor of ε^{-2}. This factor does not change positive results for algebraic tractability, but it does affect some of the positive results for exponential tractability.

We also describe a deterministic algorithm that implements the Picard iteration. Assume that the approximation problem can be solved by linear algorithms that lie in a space of dimension proportional to the information complexity. We then find that the combinatory cost of the Picard iteration in the worst case setting is at most a power of the information complexity of d-variate approximation. This power is a constant if d^{-1} ln ε^{-1} is uniformly bounded. However, if d^{-1} ln ε^{-1} goes to infinity, then this power is of order (d^{-1} ln ε^{-1}) / ln(d^{-1} ln ε^{-1}). Hence if d is large relative to ln ε^{-1}, this result is quite positive. On the other hand, for general d and ε, only some kinds of algebraic and exponential tractability hold when the normalized combinatory cost is included.
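As a rough, informal illustration of the approach sketched above (a Picard iteration whose d-variate integrals are estimated by Monte Carlo), the following Python snippet approximates the solution of a Volterra equation of the second kind with convolution kernel at a single point x in the unit d-cube. This is not the algorithm analyzed in the paper; the functions f and k, the truncation level n_terms, and the sample size n_paths are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def volterra_picard_mc(f, k, x, n_terms=6, n_paths=20000):
    """Estimate u(x), where u solves the d-variate Volterra equation of the
    second kind with convolution kernel,

        u(x) = f(x) + integral over the box [0, x] of k(x - t) u(t) dt,

    with [0, x] = [0, x_1] x ... x [0, x_d].  The n-th Picard iterate started
    from u_0 = f equals the truncated Neumann series u_n = sum_{m <= n} V^m f;
    each m-fold integral (V^m f)(x) is estimated by sampling nested points
    t_1 in [0, x], t_2 in [0, t_1], and so on (plain Monte Carlo)."""
    estimates = np.empty(n_paths)
    for p in range(n_paths):
        total = f(x)          # m = 0 term of the Neumann series
        weight = 1.0
        t_prev = x
        for _ in range(n_terms):
            # Draw t uniformly in the box [0, t_prev]; the factor prod(t_prev)
            # is the box volume (reciprocal of the uniform sampling density).
            t = rng.uniform(0.0, t_prev)
            weight *= np.prod(t_prev) * k(t_prev - t)
            total += weight * f(t)
            t_prev = t
        estimates[p] = total
    return estimates.mean(), estimates.std() / np.sqrt(n_paths)

# Toy example: d = 2, f = 1, and a kernel bounded in the sup norm.
f = lambda y: 1.0
k = lambda s: np.exp(-np.sum(s))
value, std_err = volterra_picard_mc(f, k, x=np.array([0.5, 0.5]))
print(f"u(0.5, 0.5) ~ {value:.4f} (MC standard error {std_err:.4f})")
```

Because plain Monte Carlo error decays like n_paths^{-1/2}, reaching accuracy ε at a point requires on the order of ε^{-2} sample paths, which is consistent with the ε^{-2} factor in the normalized combinatory cost bound mentioned above.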
