Abstract

Cross-lingual transfer is an important technique for low-resource language processing. Currently, most research on syntactic parsing focuses on dependency structures. This work investigates cross-lingual parsing on another important type of syntactic structure, the constituency structure. We propose a delexicalized approach, in which part-of-speech sequences of rich-resource languages are used to train cross-lingual models for parsing low-resource languages. We also investigate measures for selecting suitable rich-resource languages for specific low-resource languages. Experiments show that the delexicalized approach outperforms state-of-the-art unsupervised models on six languages by margins of 4.2 to 37.0 points in sentence-level F1 score. Based on the experimental results, the limitations and future work of the delexicalized approach are discussed.
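
To make the core idea concrete, the sketch below illustrates delexicalization under the assumptions implied by the abstract: each word is replaced by its part-of-speech tag from a shared tag set (e.g., Universal POS), so a parser trained on POS sequences of a rich-resource language can be applied to a low-resource language. The function name `delexicalize` and the example sentences are hypothetical illustrations, not the authors' code or data.

```python
def delexicalize(tagged_sentence):
    """Map a list of (word, POS) pairs to the bare POS-tag sequence."""
    return [pos for _, pos in tagged_sentence]

# Hypothetical rich-resource (English) sentence with Universal POS tags.
english = [("The", "DET"), ("cat", "NOUN"), ("sleeps", "VERB")]

# Hypothetical low-resource (German) sentence using the same tag set.
german = [("Die", "DET"), ("Katze", "NOUN"), ("schläft", "VERB")]

# Both sentences reduce to the same delexicalized input, which is what
# allows a constituency parser trained on the source language's POS
# sequences to be transferred to the target language.
print(delexicalize(english))  # ['DET', 'NOUN', 'VERB']
print(delexicalize(german))   # ['DET', 'NOUN', 'VERB']
```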
