Abstract

The transfer of acoustic data across languages has been shown to improve keyword search (KWS) performance in data-scarce settings. In this paper, we propose a way of performing this transfer that reduces the impact of the prevalence of out-of-vocabulary (OOV) terms on KWS in such a setting. We investigate a novel usage of multilingual features for KWS with very little training data in the target languages. The crux of our approach is the use of synthetic phone exemplars to convert the search into a query-by-example task, which we solve with the dynamic time warping algorithm. Using bottleneck features obtained from a network trained multilingually on a set of source languages, we train an extended distance metric learner (EDML) for four target languages from the IARPA Babel program that are distinct from the source languages. Compared with a baseline system based on automatic speech recognition (ASR) with a multilingual acoustic model, we observe an average term-weighted value improvement of 0.0603 absolute (74% relative) in a setting with only 1 h of training data in the target language. When the data scarcity is relaxed to 10 h, we find that phone posteriors obtained by fine-tuning the multilingual network yield better EDML systems. In this relaxed setting, the EDML systems still outperform the baseline on OOV terms. Given their complementary natures, combining the EDML and the ASR-based baseline systems yields further performance improvements in all settings.
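As a minimal sketch of the query-by-example matching step mentioned above, the snippet below computes a dynamic time warping distance between a (synthetic) query exemplar and a segment of search speech, each represented as a sequence of frame-level feature vectors such as bottleneck features or phone posteriors. The function name, the frame-level distance, and the length normalisation are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def dtw_distance(query, search, dist=lambda a, b: np.linalg.norm(a - b)):
    """DTW distance between two sequences of frame-level feature vectors.

    query, search: arrays of shape (num_frames, feature_dim).
    dist: frame-level distance; Euclidean here, but a learned metric
          (e.g. from an EDML-style system) could be plugged in instead.
    """
    n, m = len(query), len(search)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = dist(query[i - 1], search[j - 1])
            # Standard step pattern: diagonal match, insertion, deletion.
            cost[i, j] = d + min(cost[i - 1, j - 1],
                                 cost[i - 1, j],
                                 cost[i, j - 1])
    # Rough length normalisation so scores are comparable across query lengths
    # (an assumption for this sketch, not necessarily the paper's choice).
    return cost[n, m] / (n + m)

# Example usage with random features standing in for real acoustic features.
rng = np.random.default_rng(0)
query = rng.normal(size=(40, 30))    # synthetic phone exemplar
search = rng.normal(size=(200, 30))  # segment of search speech
print(dtw_distance(query, search))
```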
