Abstract
So-called “distributional” language models have become dominant in research on the computational modelling of lexical semantics. This paper investigates how well such models perform on Ancient Greek, a highly inflected historical language. It compares several ways of computing distributional models on the basis of various context features, including both bag-of-words features and syntactic dependencies. Performance is assessed by evaluating how well these models retrieve words semantically similar to a given target word, both on a benchmark we designed ourselves and on several independent benchmarks. The paper finds that dependency features are particularly useful for calculating distributional vectors for Ancient Greek (although the appropriate level of granularity for these dependency features remains open to discussion) and discusses possible avenues for further improvement, including addressing problems related to polysemy and genre differences.
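To make the setup the abstract describes concrete, the following is a minimal count-based sketch of distributional vectors and similar-word retrieval by cosine similarity. The lemmas and the feature names (window co-occurrences vs. dependency relations such as `obj_of:…`) are invented for illustration and are not drawn from the paper's data or benchmarks.

```python
from collections import Counter
from math import sqrt

# Toy corpus of (target word, context feature) pairs. In the paper's setting
# these features would be extracted either from a bag-of-words window or from
# syntactic dependency parses; the pairs below are purely illustrative.
pairs = [
    ("logos", "obj_of:lego"), ("logos", "mod:theios"),
    ("mythos", "obj_of:lego"), ("mythos", "obj_of:lego"),
    ("mythos", "mod:palaios"),
    ("nous", "subj_of:noeo"), ("nous", "mod:theios"),
]

def build_vectors(pairs):
    """Build raw co-occurrence count vectors: word -> {feature: count}."""
    vecs = {}
    for word, feat in pairs:
        vecs.setdefault(word, Counter())[feat] += 1
    return vecs

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[f] * v[f] for f in u if f in v)
    norm = sqrt(sum(c * c for c in u.values())) * sqrt(sum(c * c for c in v.values()))
    return dot / norm if norm else 0.0

def nearest(target, vecs):
    """Rank all other words by cosine similarity to the target."""
    return sorted(
        (w for w in vecs if w != target),
        key=lambda w: cosine(vecs[target], vecs[w]),
        reverse=True,
    )

vecs = build_vectors(pairs)
print(nearest("logos", vecs))
```

A real system would add association weighting (e.g. PPMI) and dimensionality reduction on top of the raw counts, and evaluation would compare the ranked neighbours against a gold-standard benchmark, but the retrieval-by-similarity logic is the same.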