Abstract
Natural Language Generation systems usually require substantial knowledge about the structure of the target language in order to perform the final task in the generation process: the mapping from semantic representation to text known as surface realisation. Designing knowledge bases of this kind, typically represented as sets of grammar rules, can however become a costly, labour-intensive enterprise. In this work we take a statistical approach to surface realisation in which no linguistic knowledge is hard-coded; instead, it is learned automatically from large corpora. Results of a small experiment on the generation of referring expressions show significant levels of similarity between our computer-generated texts and those produced by humans, in addition to the benefits commonly associated with statistical NLP, such as low development cost and domain and language independence.

Keywords: Semantic Property, Surface Realisation, Definite Description, Statistical Machine Translation, Grammar Rule
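To make the idea concrete, here is a minimal sketch (not the authors' actual system) of one common way to realise text statistically without hard-coded grammar rules: overgenerate candidate word orders for the input content words and rank them with an n-gram model estimated from a corpus. The toy corpus, the `realise` function, and all other names below are invented for illustration.

```python
from collections import Counter
from itertools import permutations
import math

# Toy corpus standing in for the "large corpora" of the abstract;
# purely illustrative, not the authors' training data.
CORPUS = [
    "the red ball",
    "the large red ball",
    "the ball on the table",
    "the large table",
]

def bigram_counts(sentences):
    """Count unigrams and bigrams, with sentence-boundary markers."""
    bi, uni = Counter(), Counter()
    for s in sentences:
        toks = ["<s>"] + s.split() + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return bi, uni

def score(words, bi, uni, vocab_size):
    """Log-probability of a candidate under an add-one-smoothed bigram model."""
    toks = ["<s>"] + list(words) + ["</s>"]
    return sum(
        math.log((bi[(a, b)] + 1) / (uni[a] + vocab_size))
        for a, b in zip(toks, toks[1:])
    )

def realise(content_words, corpus=CORPUS):
    """Return the ordering of the content words the corpus model finds most fluent."""
    bi, uni = bigram_counts(corpus)
    v = len(uni)
    best = max(permutations(content_words),
               key=lambda w: score(w, bi, uni, v))
    return " ".join(best)

print(realise(["ball", "red", "the"]))  # prints "the red ball"
```

No rule says that determiners precede adjectives or that adjectives precede nouns; the preference for "the red ball" emerges entirely from corpus statistics, which is the trade-off the abstract describes: low development cost and language independence, at the price of needing training data.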