Abstract

One of the most important paradigms in the inductive inference literature is that of robust learning. This paper adapts the paradigm of robust learning to the setting of learning languages from positive data and investigates the resulting notion. Broadening the scope of that paradigm is important: robustness captures a form of invariance of learnability under admissible transformations on the object of study; hence, it is a very desirable property. The key to defining robust learning of languages is to require that the languages be automatic, that is, recognisable by a finite automaton. The invariance property used to capture robustness can then naturally be defined in terms of first-order definable operators, called translators. For several learning criteria, drawn from those investigated either in the literature on explanatory learning from positive data or in the literature on query learning, we characterise the classes of languages all of whose translations are learnable under that criterion.
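As a purely illustrative aside (not a construction from the paper): a language is called automatic when membership in it can be decided by a finite automaton. The following toy sketch, with a hypothetical DFA accepting the strings over {a, b} that contain an even number of a's, shows the kind of object meant.

```python
# Illustrative toy example (an assumption for exposition, not the paper's
# construction): a deterministic finite automaton (DFA) over the alphabet
# {a, b} accepting exactly the strings with an even number of a's.

EVEN_A_DFA = {
    "start": "even",          # initial state
    "accept": {"even"},       # set of accepting states
    "delta": {                # transition function: (state, symbol) -> state
        ("even", "a"): "odd",
        ("even", "b"): "even",
        ("odd", "a"): "even",
        ("odd", "b"): "odd",
    },
}

def accepts(dfa, word):
    """Run the DFA on `word`; return True iff it halts in an accepting state."""
    state = dfa["start"]
    for symbol in word:
        state = dfa["delta"][(state, symbol)]
    return state in dfa["accept"]
```

Membership in an automatic language is thus decidable by a single left-to-right pass over the input, which is what makes the class tractable enough for the first-order definable translators mentioned above.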
