Abstract

It is widely recognized that computational theories of learning must posit the existence of a priori constraints on hypothesis selection. The present article surveys the theoretical options available for modelling the dynamic process whereby these constraints have their effect. According to the ‘simplicity’ theory (exemplified by Fodor's treatment), hypotheses are preference‐ordered in terms of their syntactic or semantic properties. It is argued that the same explanatory power can be obtained with a weaker (hence better) theory, the ‘minimalist’ theory, which dispenses with the preference ordering. According to the ‘finitistic’ theory, the learner is capable of generating only finitely many hypotheses for evaluation. Chomsky maintains that the occurrence of errorless learning in language acquisition necessitates a finitistic explanation. Once again, there is a weaker theory that explains the same data. Finally, Goodman's argument to the effect that there cannot be a computational theory of learning ...
