Abstract

End-to-end attention-based modeling is increasingly popular for tackling sequence-to-sequence mapping tasks. Traditional attention mechanisms use prior input information to derive attention, which then conditions the output. However, we believe that knowledge of posterior output information may convey an advantage when modeling attention. A recent technique proposed for machine translation, the posterior attention model (PAM), demonstrates that posterior output information can be used in this way. This paper explores the use of posterior information for attention modeling in an automatic speech recognition (ASR) task. We demonstrate that direct application of PAM to ASR is unsatisfactory, due to two deficiencies. First, PAM adopts attention-weighted single-frame output prediction by assuming a single focused attention variable, whereas wider contextual information from acoustic frames is important for output prediction in ASR. Second, in addition to the well-known exposure bias problem, PAM introduces additional mismatches between the attention calculations used in training and inference. We present extensive experiments combining a number of alternative approaches to solving these problems, leading to a high-performance technique which we call extended PAM (EPAM). To counter the first deficiency, EPAM modifies the encoder to introduce additional context information for output prediction. The second deficiency is overcome in EPAM through a two-part solution comprising a mismatch penalty term and an alternate learning strategy. The former applies a divergence-based loss to correct the mismatch between attention distributions, while the latter employs a novel update strategy that introduces iterative inference steps alongside each training step. In experiments with both the WSJ-80hrs and Switchboard-300hrs datasets we found significant performance gains. For example, the full EPAM system achieved a word error rate (WER) of 10.6% on the WSJ eval92 test set, compared to 11.6% for traditional prior-attention modeling. On the Switchboard eval2000 test set, we achieved 16.3% WER compared to 17.3% for the traditional method.
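To make the mismatch-penalty idea more concrete, the sketch below shows one plausible form of a divergence-based penalty: a KL divergence between a prior attention distribution (computed without access to the current output token) and a posterior attention distribution (computed with access to it). This is a minimal illustration only; the function name, tensor shapes, weighting scheme and the use of PyTorch are our assumptions, not the paper's actual implementation.

```python
# Illustrative sketch (assumption, not the authors' code): a KL-divergence
# penalty between prior and posterior attention distributions, as one way to
# realise the "mismatch penalty term" described in the abstract.
import torch
import torch.nn.functional as F

def attention_mismatch_penalty(prior_logits: torch.Tensor,
                               posterior_logits: torch.Tensor) -> torch.Tensor:
    """KL(posterior || prior), averaged with 'batchmean' reduction.

    prior_logits:     (batch, dec_steps, enc_frames) attention scores computed
                      without knowledge of the current output token.
    posterior_logits: (batch, dec_steps, enc_frames) attention scores computed
                      with knowledge of the current output token.
    """
    prior_log_probs = F.log_softmax(prior_logits, dim=-1)
    posterior_probs = F.softmax(posterior_logits, dim=-1)
    # F.kl_div expects log-probabilities as input and probabilities as target.
    return F.kl_div(prior_log_probs, posterior_probs, reduction="batchmean")

# In training, such a penalty would be added to the usual cross-entropy
# objective with a weighting hyperparameter, e.g.
#   loss = ce_loss + lambda_ * attention_mismatch_penalty(prior, posterior)
```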

Highlights

  • Automatic Speech Recognition (ASR) has achieved tremendous progress over recent years, gradually evolving from the original hybrid architecture [1]–[3] to end-to-end systems and models [4]–[7]. Among the latter, attention-based sequence-to-sequence models have achieved significant improvement over conventional ASR systems [8], [9] and are a popular research direction

  • Attention mechanisms have proven their worth in improving ASR performance over the past several years by endowing the classifier with the ability to focus on specific regions of the input features

  • This paper explores the hypothesis that posterior information, from the encoder output itself, could be valuable when forming an attention mechanism in encoder-decoder ASR networks – in other words that attention may be improved through knowledge of the output as well as the input


Summary

Introduction

Automatic Speech Recognition (ASR) has achieved tremendous progress over recent years, gradually evolving from the original hybrid architecture [1]–[3] to end-to-end systems and models [4]–[7]. Among the latter, attention-based sequence-to-sequence models have achieved significant improvement over conventional ASR systems [8], [9] and are a popular research direction. In attention-based sequence-to-sequence (seq2seq) models, the chain rule is usually used to decompose the sequence mapping problem into a product of recursive terms. The advantage of attention models is that they can directly learn a mapping from speech to text, which enables joint optimization of acoustic and language models [10], [11].
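As a brief illustration of the chain-rule decomposition mentioned above, one common factorisation (written in our own notation, with acoustic input x, output tokens y_t and a latent attention/alignment variable a_t, which may differ from the paper's) is:

```latex
% Illustrative chain-rule factorisation for attention-based seq2seq ASR
% (our notation: x = acoustic input, y_t = output token, a_t = latent attention variable)
P(y \mid x) \;=\; \prod_{t=1}^{T} P(y_t \mid y_{<t}, x)
            \;=\; \prod_{t=1}^{T} \sum_{a_t} P(y_t \mid y_{<t}, a_t, x)\, P(a_t \mid y_{<t}, x)
```

Under such a factorisation, a conventional prior-attention model computes the attention distribution P(a_t | y_{<t}, x) from past outputs and the input alone, whereas the posterior attention approach discussed in this paper additionally conditions the attention distribution on the emitted output.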

