Abstract
We investigate learning of classes of distributions over a discrete domain in a PAC context. We introduce two paradigms of PAC learning, namely absolute PAC learning, which is independent of the representation of the class of hypotheses, and PAC learning with respect to the indexes, which heavily depends on such representations. We characterize non-computable learnability in both contexts. We then investigate efficient learning strategies, i.e., strategies that can be simulated by a polynomial-time Turing machine. One such strategy is the frequentist one: the learner conjectures a hypothesis that is as close as possible to the empirical distribution given by the relative frequencies of the examples. We characterize the classes of distributions which are absolutely PAC learnable by means of this strategy, and we relate frequentist learning with respect to the indexes to the NP = RP problem. Finally, we present another strategy for learning with respect to the indexes, namely learning by tests.
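As an illustration of the frequentist strategy described above, the sketch below shows a learner that computes the empirical distribution of a sample and then outputs the candidate hypothesis nearest to it. This is only a minimal, informal sketch: the function names (`empirical_distribution`, `frequentist_learner`), the use of total variation distance, and the representation of hypotheses as explicit finite dictionaries are assumptions made here for illustration and are not taken from the paper's formal framework.

```python
from collections import Counter
from typing import Dict, List, Sequence

def empirical_distribution(sample: Sequence[int], domain: List[int]) -> Dict[int, float]:
    """Relative frequency of each domain element in the sample."""
    counts = Counter(sample)
    n = len(sample)
    return {x: counts.get(x, 0) / n for x in domain}

def total_variation(p: Dict[int, float], q: Dict[int, float]) -> float:
    """Total variation distance between two distributions on the same finite domain."""
    return 0.5 * sum(abs(p[x] - q[x]) for x in p)

def frequentist_learner(sample: Sequence[int],
                        hypotheses: List[Dict[int, float]],
                        domain: List[int]) -> Dict[int, float]:
    """Conjecture the hypothesis closest to the empirical distribution of the sample."""
    emp = empirical_distribution(sample, domain)
    return min(hypotheses, key=lambda h: total_variation(h, emp))

if __name__ == "__main__":
    # Toy example: two candidate distributions over the domain {0, 1}.
    domain = [0, 1]
    hypotheses = [{0: 0.5, 1: 0.5}, {0: 0.9, 1: 0.1}]
    sample = [0, 1, 0, 1, 1, 0, 1, 0]  # examples drawn from the unknown target
    print(frequentist_learner(sample, hypotheses, domain))  # -> {0: 0.5, 1: 0.5}
```

In this toy run the empirical distribution is uniform on {0, 1}, so the learner conjectures the first hypothesis; the paper's efficiency results concern when such a nearest-hypothesis choice can be carried out in polynomial time.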