Abstract

We study the L1-approximation of d-variate monotone functions based on information from n function evaluations. It is known that this problem suffers from the curse of dimensionality in the deterministic setting, that is, the number n(ε,d) of function evaluations needed in order to approximate an unknown monotone function within a given error threshold ε grows at least exponentially in d. This is not the case in the randomized setting (Monte Carlo setting), where the complexity n(ε,d) grows exponentially in √d (modulo logarithmic terms) only. An algorithm exhibiting this complexity is presented. Still, the problem remains difficult: the best known methods are deterministic if ε is comparably small, namely ε ⪯ 1/√d. This inherent difficulty is confirmed by lower complexity bounds, which reveal a joint (ε,d)-dependence and from which we deduce that the problem is not weakly tractable. The lower bound proof also has implications for the complexity of learning monotone Boolean functions.
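For orientation, the tractability notions invoked above are the standard ones from information-based complexity (in the sense of Novak and Woźniakowski); the following displays record those standard definitions and are not quoted from the paper itself. The problem suffers from the curse of dimensionality if there are constants c > 0, γ > 0, and ε₀ > 0 such that

\[
  n(\varepsilon, d) \;\ge\; c\,(1+\gamma)^{d}
  \qquad \text{for all } \varepsilon \le \varepsilon_0 \text{ and infinitely many } d,
\]

and it is weakly tractable if the information complexity grows subexponentially in both 1/ε and d, that is,

\[
  \lim_{\varepsilon^{-1} + d \,\to\, \infty}
  \frac{\ln n(\varepsilon, d)}{\varepsilon^{-1} + d} \;=\; 0 .
\]

The abstract's claim that the problem is not weakly tractable says that even this mild, subexponential growth condition fails, despite the Monte Carlo upper bound of order exponential in √d.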
