Abstract
Introduced is a new inductive inference paradigm, Dynamic Modeling. Within this learning paradigm, for example, function h learns function g iff, in the i-th iteration, h and g both produce output, h gets the sequence of all outputs from g in prior iterations as input, g gets all the outputs from h in prior iterations as input, and, from some iteration on, the sequence of h's outputs will be programs for the output sequence of g. Dynamic Modeling provides an idealization of, for example, a social interaction in which h seeks to discover program models of g's behavior it sees in interacting with g, and h openly discloses to g its sequence of candidate program models to see what g says back. Sample results: every g can be so learned by some h; there are g that can only be learned by an h if g can also learn that h back; there are extremely secretive h which cannot be learned back by any g they learn, but which, nonetheless, succeed in learning infinitely many g; quadratic-time learnability is strictly more powerful than linear-time learnability. This latter result, as well as others, follows immediately from general correspondence theorems obtained from a unified approach to the paradigms within inductive inference. Many proofs, some sophisticated, employ machine self-reference, a.k.a. recursion theorems.
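The two-way protocol described above can be sketched as a simple simulation. This is a minimal illustration, not the paper's formalism: the learner h and target g are hypothetical stand-ins, each mapping the tuple of the other's prior outputs to its next output, and "program" is modeled here as a Python source string.

```python
def interact(h, g, rounds):
    """Run the Dynamic Modeling interaction: in each iteration, h sees the
    sequence of all of g's prior outputs, g sees all of h's prior outputs,
    and both then emit their next output."""
    h_outputs, g_outputs = [], []
    for _ in range(rounds):
        h_next = h(tuple(g_outputs))  # h's input: g's prior outputs
        g_next = g(tuple(h_outputs))  # g's input: h's prior outputs
        h_outputs.append(h_next)
        g_outputs.append(g_next)
    return h_outputs, g_outputs

# Hypothetical target: g ignores h and simply outputs 0, 1, 2, ...
def g(h_prior):
    return len(h_prior)

# Hypothetical learner: h conjectures a program (a Python source string)
# for g's output sequence. Success in the paradigm means h's conjectures
# are, from some iteration on, correct programs for that sequence.
def h(g_prior):
    return "lambda i: i"

hs, gs = interact(h, g, 5)
print(gs)                                  # g's outputs: [0, 1, 2, 3, 4]
model = eval(hs[-1])                       # h's latest conjectured program
print([model(i) for i in range(5)])        # reproduces g's output sequence
```

Note that in this toy example h converges immediately; the interesting cases in the paper involve g's outputs depending on h's conjectures, which is what makes the "what g says back" feedback loop nontrivial.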