Abstract

The availability of data is the driving force behind most state-of-the-art techniques for machine translation tasks. Understandably, this availability motivates researchers to propose new techniques and to claim the superiority of their techniques over existing ones using suitable evaluation measures. However, the performance of the underlying learning algorithms can be greatly influenced by the correctness and consistency of the corpus. We investigate the relevance of a publicly available Python-to-pseudo-code parallel corpus for the automated documentation task, and the studies performed using this corpus. We found that the corpus has many visible issues, such as overlapping instances, inconsistent translation styles, incompleteness, and misspelled words. We show that these discrepancies can significantly influence the performance of the learning algorithms, to the extent that they could have caused previous studies to draw incorrect conclusions. We performed our experimental study using statistical machine translation and neural machine translation models, and recorded a significant difference (~10% in BLEU score) in the models' performance after removing these issues from the corpus.
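Two of the corpus issues named above, overlapping instances and exact-duplicate pairs, are mechanically detectable. A minimal sketch of such checks is below; the tab-separated `source<TAB>target` line format and the function names are illustrative assumptions, not the authors' actual tooling.

```python
# Hypothetical sketch of corpus sanity checks for a parallel corpus:
# (1) dropping exact-duplicate (source, target) pairs, and
# (2) flagging test pairs whose source side also occurs in the training set.
# The tab-separated line format is an assumption for illustration.

def load_pairs(lines):
    """Parse 'source<TAB>target' lines into (source, target) tuples."""
    pairs = []
    for line in lines:
        src, _, tgt = line.rstrip("\n").partition("\t")
        pairs.append((src, tgt))
    return pairs

def deduplicate(pairs):
    """Drop exact duplicate (source, target) pairs, keeping first occurrence."""
    seen, unique = set(), []
    for pair in pairs:
        if pair not in seen:
            seen.add(pair)
            unique.append(pair)
    return unique

def find_overlaps(train_pairs, test_pairs):
    """Return test pairs whose source code also appears in the training data."""
    train_sources = {src for src, _ in train_pairs}
    return [(src, tgt) for src, tgt in test_pairs if src in train_sources]
```

Such overlap between training and evaluation splits inflates reported scores, which is consistent with the score shift the paper reports after cleaning.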

