Abstract

A long-standing problem in linguistics is how to define “word”. Recent research has focused on the incompatibility of diverse definitions, and the challenge of finding a definition that is crosslinguistically applicable. In this study I take a different approach, asking whether one structure is more word-like than another based on the concepts of predictability and information. I hypothesize that word constructions tend to be more “internally predictable” than phrase constructions, where internal predictability is the degree to which the entropy of one constructional element is reduced by mutual information with another element. I illustrate the method with case studies of complex verbs in German and Murrinhpatha, comparing verbs with selectionally restricted elements against those built from free elements. I propose that this method identifies an important mathematical property of many word-like structures, though I do not expect that it will solve all the problems of wordhood.
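The notion of internal predictability described above can be illustrated with a small information-theoretic sketch: from a joint frequency table of (element 1, element 2) construction pairs, compute the entropy of one slot and the mutual information between the slots, then take their ratio as the share of that slot's entropy that the other slot explains. The toy data below (loosely styled after German particle verbs) is hypothetical, and the ratio formulation is one plausible reading of "entropy reduced by mutual information", not necessarily the paper's exact measure.

```python
from collections import Counter
from math import log2

# Hypothetical toy corpus of (particle, verb-base) pairs;
# real data would come from an annotated corpus.
pairs = [("an", "fangen"), ("an", "fangen"), ("an", "kommen"),
         ("auf", "hoeren"), ("auf", "hoeren"), ("auf", "machen")]

n = len(pairs)
joint = Counter(pairs)                 # joint counts of (x, y)
px = Counter(x for x, _ in pairs)      # marginal counts of slot 1
py = Counter(y for _, y in pairs)      # marginal counts of slot 2

# Entropy of the first slot: H(X) = -sum_x p(x) log2 p(x)
h_x = -sum((c / n) * log2(c / n) for c in px.values())

# Mutual information: I(X;Y) = sum_{x,y} p(x,y) log2( p(x,y) / (p(x) p(y)) )
mi = sum((c / n) * log2((c / n) / ((px[x] / n) * (py[y] / n)))
         for (x, y), c in joint.items())

# Internal predictability as the share of H(X) explained by I(X;Y);
# here each verb base fully determines its particle, so the ratio is 1.
internal_predictability = mi / h_x
print(h_x, mi, internal_predictability)
```

On this toy table the particle slot has one bit of entropy, all of it shared with the verb-base slot, so the construction is maximally internally predictable; freer combinations would yield a lower ratio.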
