Abstract
Statistical language models capture both the grammatical and the semantic information present in a language. This paper investigates techniques for overcoming the difficulties of modelling highly inflected languages, where the main difficulty is the very large number of distinct word forms. We propose to model the grammatical and semantic information of words separately by splitting them into stems and endings. All the information is handled within a data-driven formalism. Grammatical information is well modelled by short-term dependencies; this article is primarily concerned with modelling the semantic information diffused through the entire text. It is presumed that the language being modelled is homogeneous in topic. The training corpus, which is very topically heterogeneous, is divided into three semantic levels according to its topic similarity with target environment text, and the text on each semantic level is used to train one component of a mixture model. A document is defined as the basic, semantically homogeneous unit of the training corpus. The topic similarity between a document and a collection of target environment texts is measured with the cosine vector similarity function and the TF-IDF weighting heuristic. The crucial question for highly inflected languages is how to define terms; we define terms as clusters of words, with clustering based on approximate string matching. We experimented with the Levenshtein distance and the Ratcliff/Obershelp similarity measure, both in combination with ending-stripping. Experiments on Slovenian were performed on a corpus of VEČER newswire text. The results show a significant reduction in the out-of-vocabulary (OOV) rate and in perplexity.
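The abstract names two standard building blocks: approximate string matching (Levenshtein distance, Ratcliff/Obershelp similarity) for grouping inflected word forms into terms, and TF-IDF-weighted cosine similarity for ranking training documents against target environment text. A minimal Python sketch of both follows, for illustration only; the word forms, toy documents, ending list, and function names are assumptions of this sketch, not taken from the paper.

```python
import math
from collections import Counter
from difflib import SequenceMatcher


def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                  # deletion
                            curr[j - 1] + 1,              # insertion
                            prev[j - 1] + (ca != cb)))    # substitution
        prev = curr
    return prev[-1]


def ratcliff_obershelp(a: str, b: str) -> float:
    """Similarity in [0, 1]; SequenceMatcher.ratio() is Python's
    variant of the Ratcliff/Obershelp algorithm."""
    return SequenceMatcher(None, a, b).ratio()


# Hypothetical ending-stripping: the paper's actual ending inventory for
# Slovenian is not given in the abstract, so this list is illustrative only.
ENDINGS = ("ov", "ih", "em", "ec", "ca", "a", "e", "i", "o", "u")

def strip_ending(word: str) -> str:
    """Remove the longest matching ending, keeping a stem of >= 3 chars."""
    for end in sorted(ENDINGS, key=len, reverse=True):
        if word.endswith(end) and len(word) - len(end) >= 3:
            return word[: -len(end)]
    return word


def tfidf_vectors(docs):
    """TF-IDF weight vector (term -> weight) for each bag-of-terms document."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
            for doc in docs]


def cosine(u, v):
    """Cosine similarity between two sparse term-weight vectors."""
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0


if __name__ == "__main__":
    # Two inflected forms of one Slovenian lemma ("delavec" / "delavca"):
    # a low edit distance and a high similarity ratio suggest one cluster.
    print(levenshtein("delavec", "delavca"))          # -> 2
    print(ratcliff_obershelp("delavec", "delavca"))   # -> ~0.86
    print(strip_ending("delavca"))                    # -> "delav"

    # Rank toy training documents by topic similarity to a target text;
    # the two language-technology documents score above the sports one.
    docs = [["jezik", "model", "korpus"],
            ["model", "besedilo", "tema"],
            ["šport", "tekma", "gol"]]
    target = ["jezik", "model", "besedilo"]
    vecs = tfidf_vectors(docs + [target])
    for doc, vec in zip(docs, vecs):
        print(doc, round(cosine(vecs[-1], vec), 3))
```

In a full system, similarities of this kind would drive the clustering of word forms into terms and the three-way split of the training corpus into semantic levels that the abstract describes; both of those steps are beyond the scope of this sketch.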