Abstract

Objective. To review the application and current state of several domestic crowdsourcing models, to investigate experimentally the factors that affect multilingual manual annotation, and to offer recommendations. Methodology. Government news texts in Mandarin, Cantonese, English, and Portuguese were crawled from Guangdong, Hong Kong, and Macao and entered into a database. Combined with corpus tagging, an established web platform was used to run the crowdsourcing tasks and to collect a large volume of annotation results and behavioral data. Results. Hypotheses were formulated about the factors that may affect the quality of manual annotation; SPSS and other data analysis software were used to evaluate how well these hypotheses explain the observed results; a regression formula for estimating annotation accuracy was derived; and constructive suggestions were offered for corpus annotation quality assurance projects. Limitations. Corpus data in more languages and a larger pool of professional annotators are needed. Conclusions. The study found that annotation accuracy is strongly related to attributes of the corpus itself, such as total vocabulary size, the number of rare words, and part-of-speech complexity, and that these conditions differ across languages.
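
The abstract describes a regression relating annotation accuracy to corpus attributes. The following is a minimal illustrative sketch of that kind of model only; the feature names and toy data are hypothetical, and the study's own regression was fit in SPSS, so its actual variables and coefficients are not reproduced here.

```python
# Illustrative sketch: ordinary least squares regression of annotation
# accuracy on corpus-level features (hypothetical data, not the study's).
import numpy as np
import statsmodels.api as sm

# Hypothetical per-text features: total vocabulary size, rare-word count,
# and a part-of-speech complexity score.
X = np.array([
    [1200, 35, 4.2],
    [800,  12, 3.1],
    [1500, 60, 5.0],
    [950,  20, 3.6],
    [1100, 40, 4.5],
])
# Hypothetical observed annotation accuracy for each text.
y = np.array([0.82, 0.91, 0.74, 0.88, 0.80])

X = sm.add_constant(X)          # add intercept term
model = sm.OLS(y, X).fit()      # fit ordinary least squares
print(model.params)             # intercept plus one coefficient per feature
print(model.rsquared)           # proportion of variance explained
```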
