Abstract
End-to-end speech translation (ST), despite its numerous potential applications and great impact, has long been treated as an independent task, failing to fully draw strength from the rapid advances of its sibling, text machine translation (MT). Because text and audio inputs are represented differently, the modality gap has rendered MT data and end-to-end MT models incompatible with their ST counterparts. To overcome this obstacle, we propose Chimera, which bridges the representation gap by projecting audio and text features into a common semantic space. Chimera thereby unifies the MT and ST tasks and boosts performance on the ST benchmarks MuST-C and Augmented LibriSpeech to a new state of the art. Specifically, Chimera obtains 27.1 BLEU on MuST-C EN-DE, improving the SOTA by a +1.9 BLEU margin. Further experimental analyses demonstrate that the shared semantic space indeed conveys common knowledge between the two tasks, paving a new way for augmenting training resources across modalities. Code, data, and resources are available at this https URL.
Highlights
Speech-to-text translation (ST) takes speech input in a source language and outputs a text utterance in a target language.
Our results show that Chimera achieves new state-of-the-art results on all 8 translation directions of the benchmark datasets MuST-C and Augmented LibriSpeech.
We propose Chimera, a model that learns a text-speech shared semantic memory network to bridge the gap between speech and text representations.
Summary
Speech-to-text translation (ST) takes speech input in a source language and outputs a text utterance in a target language. It has many real-world applications, including automatic video captioning and simultaneous translation for international conferences. Traditional ST approaches cascade automatic speech recognition (ASR) and machine translation (MT) (Sperber et al., 2017, 2019; Zhang et al., 2019; Beck et al., 2019; Cheng et al., 2019). Cascaded models often suffer from error propagation and translation latency. End-to-end approaches instead learn a single unified model, which is easier to deploy, has lower latency, and can potentially reduce errors.
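The core idea behind bridging the modality gap is that a fixed set of learned memory queries can attend over input features of either modality, mapping a variable-length speech or text sequence to a fixed-size semantic representation of the same shape. The following is a minimal numpy sketch of that projection step only; the function name, the random features, and the untrained query vectors are all hypothetical illustrations, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def shared_semantic_projection(features, memory_queries):
    """Attend fixed memory queries over variable-length features.

    features:       (T, d) array, T varies per input (frames or tokens)
    memory_queries: (n_mem, d) array, shared across modalities
    returns:        (n_mem, d) array, length-independent representation
    """
    d = memory_queries.shape[1]
    scores = memory_queries @ features.T / np.sqrt(d)  # (n_mem, T)
    return softmax(scores) @ features                  # (n_mem, d)

rng = np.random.default_rng(0)
n_mem, d = 4, 8
queries = rng.normal(size=(n_mem, d))

# Hypothetical stand-ins for encoder outputs of the two modalities:
speech_feats = rng.normal(size=(50, d))  # e.g. 50 audio frames
text_feats = rng.normal(size=(7, d))     # e.g. 7 subword tokens

s_repr = shared_semantic_projection(speech_feats, queries)
t_repr = shared_semantic_projection(text_feats, queries)
assert s_repr.shape == t_repr.shape == (n_mem, d)
```

Because both modalities are squeezed into the same `(n_mem, d)` shape, a downstream translation decoder can consume either one interchangeably, which is what lets MT data and models be shared with the ST task.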