Abstract
What kinds of functions are learnable from their satisfying assignments? Motivated by this simple question, we extend the framework of [DDS15], which studied the learnability of probability distributions over {0,1}^n defined by the set of satisfying assignments to "low-complexity" Boolean functions, to Boolean-valued functions defined over continuous domains. In our learning scenario there is a known "background distribution" 𝒟 over ℝ^n (such as a known normal distribution or a known log-concave distribution) and the learner is given i.i.d. samples drawn from a target distribution 𝒟_f, which is 𝒟 restricted to the satisfying assignments of an unknown low-complexity Boolean-valued function f. The problem is to learn an approximation 𝒟′ of the target distribution 𝒟_f which has small error as measured in total variation distance. We give a range of efficient algorithms and hardness results for this problem, focusing on the case when f is a low-degree polynomial threshold function (PTF). When the background distribution 𝒟 is log-concave, we show that this learning problem is efficiently solvable for degree-1 PTFs (i.e., linear threshold functions) but not for degree-2 PTFs. In contrast, when 𝒟 is a normal distribution, we show that this learning problem is efficiently solvable for degree-2 PTFs but not for degree-4 PTFs. Our hardness results rely on standard assumptions about secure signature schemes.
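To make the learning scenario concrete, here is a minimal sketch (not from the paper) of how the learner's samples arise in the simplest efficiently learnable case: the background distribution 𝒟 is a standard normal over ℝ^n, f is a degree-1 PTF (a linear threshold function), and 𝒟_f is 𝒟 conditioned on f(x) = 1. The weight vector, threshold, and the rejection-sampling procedure below are all illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

# Hypothetical target: an unknown LTF f(x) = 1 iff w . x - theta >= 0.
rng = np.random.default_rng(0)
n = 5
w = rng.standard_normal(n)   # illustrative weight vector (unknown to the learner)
theta = 0.0                  # illustrative threshold

def sample_from_Df(num_samples):
    """Draw i.i.d. samples from D_f by rejection sampling:
    draw x ~ D (standard Gaussian) and keep x only if f(x) = 1."""
    samples = []
    while len(samples) < num_samples:
        x = rng.standard_normal(n)
        if np.dot(w, x) - theta >= 0:   # keep only satisfying assignments of f
            samples.append(x)
    return np.array(samples)

xs = sample_from_Df(1000)
# By construction, every sample satisfies the unknown LTF.
assert np.all(xs @ w - theta >= 0)
```

The learner sees only the samples `xs`, never `w` or `theta`, and must output a hypothesis distribution 𝒟′ close to 𝒟_f in total variation distance.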