Abstract

Information in speech signals is not evenly distributed, making it an additional challenge for end-to-end (E2E) speech translation (ST) to learn to focus on informative features. In this paper, we propose adaptive feature selection (AFS) for encoder-decoder based E2E ST. We first pre-train an ASR encoder and apply AFS to dynamically estimate the importance of each encoded speech feature to ASR. An ST encoder, stacked on top of the ASR encoder, then receives the filtered features from the (frozen) ASR encoder. We take L0DROP (Zhang et al., 2020) as the backbone for AFS and adapt it to sparsify speech features with respect to both temporal and feature dimensions. Results on the LibriSpeech En-Fr and MuST-C benchmarks show that AFS facilitates the learning of ST by pruning out ~84% of temporal features, yielding an average translation gain of ~1.3-1.6 BLEU and a decoding speedup of ~1.4x. In particular, AFS reduces the performance gap compared to the cascade baseline, and outperforms it on LibriSpeech En-Fr with a BLEU score of 18.56 (without data augmentation).
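The kind of gating the abstract describes can be illustrated with a small sketch. This is not the authors' implementation: the stretch parameters (`gamma`, `zeta`) and the hand-picked per-step logits below are assumptions, and only the test-time behaviour of an L0DROP-style hard-concrete gate is shown, i.e. how gates that saturate at exactly 0 drop entire time steps so that a downstream (ST) encoder receives a shorter sequence.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def hard_concrete_gate(logits, gamma=-0.1, zeta=1.1):
    """Deterministic (test-time) hard-concrete gate used by L0-style
    sparsification: the sigmoid output is stretched to [gamma, zeta]
    and clipped to [0, 1], so many gates become exactly 0 or 1."""
    s = sigmoid(logits) * (zeta - gamma) + gamma
    return np.clip(s, 0.0, 1.0)

def select_features(encoded, logits):
    """Scale each time step by its gate and drop steps whose gate is 0,
    shrinking the sequence passed on to the next encoder."""
    g = hard_concrete_gate(logits)   # shape (T,)
    keep = g > 0.0                   # boolean mask over time steps
    return encoded[keep] * g[keep, None], keep

# Toy example: 6 time steps of 4-dimensional encoder features.
rng = np.random.default_rng(0)
encoded = rng.standard_normal((6, 4))
logits = np.array([3.0, -4.0, 2.5, -5.0, 4.0, -3.5])  # stand-in for learned scores
filtered, keep = select_features(encoded, logits)
print(keep.tolist())   # [True, False, True, False, True, False]
print(filtered.shape)  # (3, 4)
```

In the paper the gate scores are learned jointly with the ASR objective plus an L0-style sparsity penalty; here they are fixed constants purely to show the pruning effect on the sequence length.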

Highlights

  • End-to-end (E2E) speech translation (ST), a paradigm that directly maps audio to a foreign text, has been gaining popularity recently (Duong et al., 2016; Berard et al., 2016; Bansal et al., 2018; Di Gangi et al., 2019; Wang et al., 2019).

  • Our model narrows the gap against the cascade model to -0.8 average BLEU, and adaptive feature selection (AFS) surpasses the cascade on LibriSpeech En-Fr, without using knowledge distillation (KD).

  • Prior work used a cascade of separately trained automatic speech recognition (ASR) and machine translation (MT) systems (Ney, 1999); our work focuses on E2E ST, where we investigate feature selection.


Summary

Introduction

End-to-end (E2E) speech translation (ST), a paradigm that directly maps audio to a foreign text, has been gaining popularity recently (Duong et al., 2016; Berard et al., 2016; Bansal et al., 2018; Di Gangi et al., 2019; Wang et al., 2019). Based on the attentional encoder-decoder framework (Bahdanau et al., 2015), it optimizes model parameters under direct translation supervision. This end-to-end paradigm avoids the problem of error propagation that is inherent in cascade models, where an automatic speech recognition (ASR) model and a machine translation (MT) model are pipelined.

[Figure: speech waveform (amplitude over time) for the utterance "play is not just child's games", illustrating uninformative segments such as pauses.]

Features corresponding to uninformative signals, such as pauses or noise, increase the input length and bring in unmanageable noise for ST. This increases the difficulty of learning (Zhang et al., 2019b; Na et al., 2019) and reduces translation performance.
