Abstract

Monotonic and dual monotonic language learning from positive examples, as well as from positive and negative examples, is investigated. Three different notions of monotonicity are considered. Each of them reflects an alternative formalization of the requirement that the learner produce better and better generalizations when fed more and more data on the concept to be learned. Strong-monotonicity requires that only better and better generalizations be produced. Monotonic learning reflects the demand that, of any two guesses, the one output later has to be at least as good as the earlier one with respect to the target language. Weak-monotonicity is the learning-theoretic analogue of cumulativity. The corresponding three versions of dual monotonicity describe the requirement that the inference device produce only specializations that fit the target language better and better. Dual strong-monotonic learning generates a chain of shrinking specializations converging to the target language. Dual monotonicity imposes the same demand with respect to the target language, and dual weak-monotonic learning is the analogue of the dual of cumulativity. The power of each of these types of monotonic and dual monotonic inference from positive, as well as from positive and negative, data is completely characterized within algorithmic language learning theory, and strong hierarchies are obtained.
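To make the six constraints concrete, the LaTeX sketch below states one standard formalization from the inductive-inference literature. The notation is an assumption, not quoted from the paper: $L$ denotes the target language, $h_i$ the learner's $i$-th hypothesis, and $L(h)$ the language generated by hypothesis $h$; the authors' exact definitions may differ in detail.

```latex
% Sketch of the six monotonicity constraints (illustrative notation):
% L is the target language, h_i the learner's i-th hypothesis,
% L(h) the language generated by h, and i <= j two points in time.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

\noindent For all $i \le j$:
\begin{align*}
\textbf{strong-monotonic:} \quad & L(h_i) \subseteq L(h_j)\\
\textbf{monotonic:} \quad & L(h_i) \cap L \subseteq L(h_j) \cap L\\
\textbf{weak-monotonic:} \quad & \text{if the data seen up to step } j
  \text{ are consistent with } L(h_i),\\
  & \text{then } L(h_i) \subseteq L(h_j)
\end{align*}

\noindent and, dually, with specializations shrinking towards the target:
\begin{align*}
\textbf{dual strong-monotonic:} \quad & L(h_i) \supseteq L(h_j)\\
\textbf{dual monotonic:} \quad & \overline{L(h_i)} \cap \overline{L}
  \subseteq \overline{L(h_j)} \cap \overline{L}\\
\textbf{dual weak-monotonic:} \quad & \text{if the data seen up to step } j
  \text{ are consistent with } L(h_i),\\
  & \text{then } L(h_i) \supseteq L(h_j)
\end{align*}

\end{document}
```

Read this way, the monotonic condition preserves what a hypothesis already gets right inside $L$, its dual preserves what is rightly excluded outside $L$, and the weak variants only bind as long as the incoming data have not contradicted the current hypothesis, which is why they play the role of cumulativity and its dual.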
