Abstract

Providing multilingual metadata records for digital objects is one way of expanding access to digital cultural collections. Recent advances in deep learning have made machine translation (MT) substantially more accurate. We therefore evaluate the performance of three well-known MT systems (Google Translate, Microsoft Translator, and DeepL Translator) in translating metadata records of ukiyo-e images from Japanese to English, measuring translation quality with the automatic evaluation metric BLEU. The results show that DeepL Translator translates ukiyo-e metadata records better than Google Translate or Microsoft Translator, with Microsoft Translator performing the worst.
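For illustration, the following is a minimal sketch (not taken from the paper) of how corpus-level BLEU can be computed for MT output against human reference translations, using the sacrebleu library; the metadata strings shown are hypothetical examples, not data from the study.

```python
# Minimal sketch: scoring machine-translated metadata against human references
# with corpus BLEU via sacrebleu. All example strings below are hypothetical.
import sacrebleu

# Hypothetical human (reference) English translations of ukiyo-e metadata fields
references = [[
    "Actor Ichikawa Danjuro in the role of a samurai",
    "Night view of the Sumida River with fireworks",
]]

# Hypothetical candidate translations produced by one MT system
candidates = [
    "The actor Ichikawa Danjuro as a samurai",
    "Evening scene of fireworks over the Sumida River",
]

# corpus_bleu takes the candidate sentences and a list of reference streams
# (one inner list per reference set) and returns a corpus-level BLEU score.
score = sacrebleu.corpus_bleu(candidates, references)
print(f"BLEU = {score.score:.2f}")
```

Running the same scoring over the outputs of each MT system allows a side-by-side comparison of their corpus-level BLEU scores.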
