Abstract

Some have argued that word orders which are more difficult to process should be rarer cross-linguistically. Our study fails to replicate the results of Maurits, Navarro, and Perfors (2010), who found that an entropy-based Uniform Information Density (UID) measure moderately predicts the Greenbergian typology of transitive word orders. We additionally report that three measures of processing difficulty — entropy-based UID, surprisal-based UID, and pointwise mutual information — fail to predict the typological distribution of transitive word orders, using transitive constructions from 20 languages in the Universal Dependencies project (version 2.5). However, our conclusions are limited by data sparsity.
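The three measures named above are operationalized in detail in the paper itself; as a rough illustration only, the sketch below computes a surprisal-based UID score as the variance of a clause's per-constituent surprisal profile and a pointwise mutual information value. All probabilities, word orders, and function names here are hypothetical assumptions for exposition, not the authors' implementation; in the study the underlying estimates would come from counts over Universal Dependencies treebanks.

```python
import math

# Hypothetical conditional probabilities for the constituents of one
# transitive clause under two word orders. These numbers are invented
# for illustration; they are not estimates from the paper.
svo_probs = [0.10, 0.05, 0.20]   # p(S), p(V | S), p(O | S, V)
sov_probs = [0.10, 0.02, 0.50]   # p(S), p(O | S), p(V | S, O)

def surprisals(cond_probs):
    """Per-constituent surprisal: -log2 of each conditional probability."""
    return [-math.log2(p) for p in cond_probs]

def uid_variance(cond_probs):
    """One common surprisal-based UID score: the variance of the surprisal
    profile. A perfectly uniform profile has variance 0; larger values mean
    information is spread less evenly across the clause."""
    s = surprisals(cond_probs)
    mean = sum(s) / len(s)
    return sum((x - mean) ** 2 for x in s) / len(s)

def pmi(p_xy, p_x, p_y):
    """Pointwise mutual information: log2( p(x, y) / (p(x) * p(y)) )."""
    return math.log2(p_xy / (p_x * p_y))

if __name__ == "__main__":
    for name, probs in [("SVO", svo_probs), ("SOV", sov_probs)]:
        print(name,
              "surprisal profile:", [round(x, 2) for x in surprisals(probs)],
              "UID variance:", round(uid_variance(probs), 3))
    # PMI between a verb and its object, again with invented probabilities.
    print("PMI(verb, object):", round(pmi(p_xy=0.01, p_x=0.05, p_y=0.08), 3))
```

Under a UID account, a word order whose surprisal profile is more uniform (lower variance) would be predicted to be easier to process and hence more frequent typologically; the abstract reports that predictions of this kind did not match the attested distribution.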


