Abstract

Computational lexicology is evolving around a particular model of lexical acquisition, based on a transition that involves structuring, or at least restructuring, existing on-line lexical resources (dictionaries and corpora) so that they can be used in the creation of a central repository of lexical data (a lexical knowledge base). We discuss some methodological issues related to this process, with respect to currently held assumptions about the nature of lexical information. We argue that current models of lexical knowledge bases are impoverished. Specifically, they are unable to handle certain types of linguistic generalizations which are an essential component of lexical knowledge. We then sketch, in light of a set of functional requirements for a lexical knowledge base, an improved representational model for this kind of knowledge; review some assumptions underlying the extraction of information from machine-readable dictionaries; and draw conclusions concerning the proper place of such dictionaries in the process of lexicon acquisition.
