Abstract

Reliably annotated corpora are a valuable resource for Natural Language Processing, which justifies the search for methods capable of assisting linguistic revision. In this context, we present a study on methods for revising dependency treebanks, investigating the contribution of three different strategies to corpus review: (i) linguistic rules; (ii) an adaptation of the n-grams method proposed by Boyd et al. (2008) applied to Portuguese; and (iii) Inter-Annotator Disagreement, a linguistically motivated approach that draws inspiration from the human annotation process. The results are promising: taken together, the three methods can lead to the revision of up to 58% of the errors in a specific corpus at the cost of revising only 20% of the corpus. We also present a tool that integrates treebank editing, evaluation and search capabilities with the review methods, as well as a gold-standard Portuguese corpus from the oil and gas domain.
