Abstract

In a modern society where connections among nations are increasingly frequent, it remains important to provide the public with automatic language conversion methods. Currently, most existing research has been conducted on the basis of semantic analysis. From a linguistic perspective, however, visual characteristics are also a concomitant property of language. To address this challenge, this paper proposes a hybrid intelligence method for automatic pairwise language conversion that is jointly driven by vision and semantics. The technical framework comprises two components: a vision-sensing part and a semantics-sensing part. For the former, virtual reality is introduced to capture visual feature representations of language contents. For the latter, a recurrent neural network model is utilized to capture semantic feature representations of language texts. The two are then integrated into a jointly driven framework to improve conversion efficiency. Taking two Chinese dialects (the Sichuan dialect and the Chongqing dialect) as an example, simulation experiments are conducted on a massive real-world training corpus to evaluate the proposal. The results reflect the feasibility of the method.
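To make the two-component framework concrete, the following is a minimal sketch of how semantic features from a recurrent encoder might be fused with a visual feature vector into one joint representation. Everything here is illustrative and not taken from the paper: the toy Elman RNN stands in for the semantic-sensing part, a fixed vector stands in for the output of the vision-sensing part, and all names and dimensions are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (not from the paper).
EMB, HID, VIS, JOINT = 8, 16, 10, 12

# Hypothetical parameters of the semantic (RNN) encoder.
W_xh = rng.normal(scale=0.1, size=(EMB, HID))
W_hh = rng.normal(scale=0.1, size=(HID, HID))
b_h = np.zeros(HID)

def semantic_features(token_embeddings):
    """Run a toy Elman RNN over token embeddings; return the final hidden state."""
    h = np.zeros(HID)
    for x in token_embeddings:
        h = np.tanh(x @ W_xh + h @ W_hh + b_h)
    return h

# Hypothetical projection fusing vision and semantics into one joint vector.
W_joint = rng.normal(scale=0.1, size=(HID + VIS, JOINT))

def joint_representation(token_embeddings, visual_features):
    """Concatenate the two feature streams, then project to a joint space."""
    fused = np.concatenate([semantic_features(token_embeddings), visual_features])
    return fused @ W_joint

# Toy inputs: a 5-token "sentence" and one visual feature vector.
sentence = rng.normal(size=(5, EMB))
visual = rng.normal(size=VIS)
z = joint_representation(sentence, visual)
print(z.shape)  # (12,)
```

In a real system the joint vector would feed a decoder for the target dialect; this sketch only shows the fusion step the abstract describes.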
