Abstract

In the Parsing-as-Constraint-Solving model of language processing, grammar syntax is described modularly through independent constraints among the direct constituents of a phrase - constraints such as: “in verb phrases, a verb must precede its complements”, or “in noun phrases, a noun requires a determiner”. Parsing reduces to verifying the constraints relevant to an input phrase, but instead of the typical hierarchical (i.e., parse tree) representation of a successful parse (and the typical complete silence upon unsuccessful parses), the main result is a list of satisfied constraints and, if the input does not fully conform, also a list of unsatisfied constraints. The latter can serve various purposes beyond plain parsing, such as guiding the correction of any imperfections found in the input, and we can still construct a parse tree if needed, as a side effect. While almost purely syntax-based, the Parsing-as-Constraint-Solving model lends itself well to accommodating interactions with other levels of analysis. These, however, have been little explored. In this position paper we discuss how to extend this model to incorporate semantic information, in particular from ontologies, and with particular guidance from unsatisfied constraints. This departs from more typical constraint-solving schemes, where failed constraints are simply listed and do not actively contribute to the parse. By giving failed constraints a more active role, we can arrive at more precise analyses and at more appropriate corrections of flawed input. Because even sentences that do not fully conform can be parsed more precisely, we gain in expressivity with respect to both the classical, strictly stratified approach to NLP and the less precise and less reliable statistically based methods.
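
A minimal sketch of the core idea described above, in Python. The names used here (Constraint, precedence, requirement, check_phrase) and the toy representation of a phrase as a category plus an ordered list of constituent categories are illustrative assumptions, not the paper's actual formalism or implementation; the point is only to show how verification yields both a satisfied and an unsatisfied constraint list instead of an all-or-nothing parse.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Constraint:
    description: str
    applies_to: str                      # phrase category, e.g. "NP" or "VP"
    test: Callable[[List[str]], bool]    # True if the constituents satisfy it


def precedence(before: str, after: str) -> Callable[[List[str]], bool]:
    """'before' must appear earlier than 'after' when both are present."""
    def test(cs: List[str]) -> bool:
        if before in cs and after in cs:
            return cs.index(before) < cs.index(after)
        return True                      # vacuously satisfied otherwise
    return test


def requirement(head: str, required: str) -> Callable[[List[str]], bool]:
    """If 'head' is present among the constituents, 'required' must be too."""
    def test(cs: List[str]) -> bool:
        return required in cs if head in cs else True
    return test


# The two example constraints mentioned in the abstract.
GRAMMAR = [
    Constraint("in VP, the verb precedes its complements", "VP",
               precedence("V", "NP")),
    Constraint("in NP, a noun requires a determiner", "NP",
               requirement("N", "Det")),
]


def check_phrase(category: str, constituents: List[str]
                 ) -> Tuple[List[str], List[str]]:
    """Return (satisfied, unsatisfied) constraint descriptions for a phrase."""
    satisfied, unsatisfied = [], []
    for c in GRAMMAR:
        if c.applies_to == category:
            (satisfied if c.test(constituents) else unsatisfied).append(
                c.description)
    return satisfied, unsatisfied


if __name__ == "__main__":
    # A flawed noun phrase lacking a determiner: instead of failing silently,
    # the checker reports exactly which constraint went unsatisfied.
    print(check_phrase("NP", ["N", "Adj"]))
    # -> ([], ['in NP, a noun requires a determiner'])
```

Under this reading, the unsatisfied list is what a correction step (or, as the paper proposes, ontology-guided semantic analysis) would subsequently act upon.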
