Abstract
A team of learning machines is a multiset of learning machines. A team is said to successfully learn a concept just in case each member of some nonempty subset, of predetermined size, of the team learns the concept. Team learning of languages turns out to be a suitable theoretical model for studying computational limits on multi-agent machine learning. Team learning of recursively enumerable languages has been extensively studied. However, it may be argued that, from a practical point of view, all languages of interest are recursive. This paper gives theoretical results about team learnability of recursive languages. These results are mainly about two issues: redundancy and aggregation. The issue of redundancy deals with the impact of increasing the size of a team and increasing the number of machines required to be successful. The issue of aggregation deals with conditions under which a team may be replaced by a single machine without any loss in learning ability. The learning scenarios considered are: (a) identification in the limit of accepting grammars for recursive languages; (b) identification in the limit of decision procedures for recursive languages; (c) identification in the limit of accepting grammars for indexed families of recursive languages; (d) identification in the limit of accepting grammars for indexed families, with an enumerable class of grammars for the family as the hypothesis space. Scenarios that can be modeled by team learning are also presented.
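For readers less familiar with the terminology, the success criterion sketched in the abstract is usually stated formally as [m, n]-team identification in the limit from text. The following LaTeX sketch gives the standard definition from the inductive inference literature for scenario (a); the symbols M_i, T, S, T[k], and W_g are conventional notation assumed here, not taken from the abstract itself.

\[
\{M_1,\ldots,M_n\}\ [m,n]\text{-identifies } L
\;\iff\;
(\forall \text{ texts } T \text{ for } L)\,
(\exists S \subseteq \{1,\ldots,n\},\ |S| \ge m)\,
(\forall i \in S)\;
\lim_{k\to\infty} M_i(T[k]) = g_i \ \text{with}\ W_{g_i} = L.
\]

Here T[k] denotes the initial segment of the text T of length k, and W_g denotes the language accepted by grammar g. For scenario (b), the requirement W_{g_i} = L would instead be that g_i is a decision procedure for L.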