Abstract
Some have argued that word orders which are more difficult to process should be rarer cross-linguistically. Our current study fails to replicate the results of Maurits, Navarro, and Perfors (2010), who used an entropy-based Uniform Information Density (UID) measure to moderately predict the Greenbergian typology of transitive word orders. We additionally report the failure of three measures of processing difficulty — entropy-based UID, surprisal-based UID, and pointwise mutual information — to predict the correct typological distribution, using transitive constructions from 20 languages in the Universal Dependencies project (version 2.5). However, our conclusions are limited by data sparsity.
Proceedings of the 24th Conference on Computational Natural Language Learning, pages 245–255 Online, November 19-20, 2020.