Abstract

Deaf people communicate naturally through visual-gestural languages, known as sign languages (SL). As a result, they have great difficulty absorbing oral content, whether written or spoken, even in the oral language (OL) of their own country. Moreover, if accessing information in the oral language of their own country is already difficult for deaf people, the obstacles to accessing information in foreign languages become almost insurmountable, further reducing their access to information. Among the approaches to this problem, one of the most promising involves automatic translators that render written or spoken content into sign language through an avatar. However, the vast majority of these machine translation platforms translate a single oral language into a single associated sign language. To expand the range of oral languages accessible to Brazilian deaf people, this article investigates the use of text-to-text machine translation before the text-to-gloss machine translation step. The idea is to evaluate the offer of a service for the automatic translation of digital content (text, audio, or video, for example) from any oral language into Brazilian Sign Language (LIBRAS). To validate the proposal, a prototype based on the VLibras Suite was built. A series of computational and user evaluations was carried out to verify whether the proposed flow of chained translations allows adequate understanding of content in foreign languages.
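The chained-translation flow the abstract describes can be sketched as two composed stages: a text-to-text machine translation from any oral language into Portuguese, followed by the text-to-gloss translation used by the avatar pipeline. The snippet below is a minimal illustration with hypothetical stub functions (it does not reproduce the actual VLibras API):

```python
# Sketch of the proposed chained-translation flow (hypothetical stubs,
# NOT the real VLibras interface): foreign oral language -> Portuguese
# text -> LIBRAS gloss rendered by an avatar.

def translate_to_portuguese(text: str, source_lang: str) -> str:
    """Stage 1: text-to-text MT into Portuguese (stubbed lookup)."""
    # A real system would call an MT service here.
    demo = {("Hello world", "en"): "Olá mundo"}
    return demo.get((text, source_lang), text)

def portuguese_to_gloss(text: str) -> str:
    """Stage 2: Portuguese text-to-gloss MT (stubbed; real gloss
    generation involves lexical and syntactic rules)."""
    gloss_map = {"Olá mundo": "OLÁ MUNDO"}
    return gloss_map.get(text, text.upper())

def chained_translation(text: str, source_lang: str) -> str:
    """Chain both stages so content in any oral language can reach
    a LIBRAS gloss without a dedicated direct translator."""
    portuguese = translate_to_portuguese(text, source_lang)
    return portuguese_to_gloss(portuguese)

print(chained_translation("Hello world", "en"))  # -> OLÁ MUNDO
```

The design choice here mirrors the article's proposal: rather than building one translator per oral-language/sign-language pair, a single text-to-gloss translator for LIBRAS is reused behind a generic text-to-text front end.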
