Abstract

The representation of word meaning in texts is a central problem in Computational Linguistics. Geometrical models represent lexical semantic information in terms of the basic co-occurrences that words establish with each other in large-scale text collections. As recent work has shown, defining methods able to express the meaning of phrases or sentences as operations on lexical representations is a complex and still largely open problem. In this paper, a perspective centered on Convolution Kernels is discussed, and the formulation of a Partial Tree Kernel that integrates syntactic information and lexical generalization is studied. The interaction of these information sources and the role of different geometrical models are investigated on the question classification task, where state-of-the-art results are achieved.
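To make the idea concrete, the following is a minimal sketch (not the paper's implementation) of how a convolution-style tree kernel can integrate syntactic structure with lexical generalization: internal nodes match on syntactic labels and recurse over children, while leaf comparison is "smoothed" by a cosine similarity drawn from a geometrical (vector-space) model. The toy vectors, the decay parameter `lam`, and the tree encoding are all assumptions made for illustration.

```python
from math import sqrt

# Toy distributional vectors; in a real system these would come from
# co-occurrence statistics over a large corpus (assumed here).
VECTORS = {
    "city": (1.0, 0.2), "town": (0.9, 0.3), "color": (0.1, 1.0),
}

def lex_sim(a, b):
    """Cosine similarity between word vectors; exact match fallback
    for out-of-vocabulary words."""
    va, vb = VECTORS.get(a), VECTORS.get(b)
    if va is None or vb is None:
        return 1.0 if a == b else 0.0
    dot = sum(x * y for x, y in zip(va, vb))
    na = sqrt(sum(x * x for x in va))
    nb = sqrt(sum(x * x for x in vb))
    return dot / (na * nb)

def tree_kernel(t1, t2, lam=0.4):
    """Recursive similarity over tree fragments: leaves (bare strings)
    contribute their lexical similarity; internal nodes, encoded as
    (label, child, ...) tuples, match on label and recurse over aligned
    children, decayed by lam (a simplification of a Partial Tree Kernel)."""
    if isinstance(t1, str) and isinstance(t2, str):
        return lex_sim(t1, t2)
    if isinstance(t1, str) or isinstance(t2, str):
        return 0.0
    if t1[0] != t2[0] or len(t1) != len(t2):
        return 0.0
    score = lam
    for c1, c2 in zip(t1[1:], t2[1:]):
        score *= (1.0 + tree_kernel(c1, c2, lam))
    return score

# Two questions with related head nouns score higher than two with
# unrelated ones, even though the syntactic structure is identical.
q1 = ("SBARQ", ("WHNP", "what"), ("NP", "city"))
q2 = ("SBARQ", ("WHNP", "what"), ("NP", "town"))
q3 = ("SBARQ", ("WHNP", "what"), ("NP", "color"))
print(tree_kernel(q1, q2) > tree_kernel(q1, q3))  # → True
```

In a question-classification setting, such a kernel would be plugged into a kernel machine (e.g. an SVM) as the similarity function between parsed questions, so that lexically related but structurally identical questions receive high similarity.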
