Abstract

In small-footprint end-to-end keyword spotting, it is often expensive and time-consuming to acquire sufficient labels across diverse speech scenarios. To overcome this problem, transfer learning leverages the rich knowledge of an auxiliary domain to annotate the unlabeled target data. However, most existing transfer learning methods learn a domain-invariant feature representation while ignoring the negative transfer problem. In this paper, we propose a new and general cross-domain keyword spotting framework called selective transfer subspace learning (STSL) that avoids negative transfer and dramatically improves accuracy for cross-domain keyword spotting by actively selecting appropriate source samples. Specifically, STSL first aligns geometrical relationships and a weighted distribution discrepancy to learn a domain-invariant projection subspace. Then, it actively selects source samples that are similar to the target domain for transfer learning, so as to avoid negative transfer. Finally, we formulate a minimization problem that alternately optimizes the projection subspace and the active source selection, yielding an effective optimization procedure. Experimental results on 10 groups of cross-domain keyword spotting tasks show that our STSL outperforms several state-of-the-art transfer learning methods as well as methods without transfer learning.
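The alternating scheme described above can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's actual algorithm: plain PCA stands in for the geometry- and distribution-aligned subspace, and active selection is approximated by keeping the source samples nearest to the target centroid in the learned subspace. The function name `stsl_sketch` and all parameters are illustrative assumptions.

```python
import numpy as np

def stsl_sketch(Xs, Xt, dim=2, keep=0.5, iters=5):
    """Toy alternating loop loosely inspired by STSL (hypothetical):
    (1) learn a shared projection subspace from the currently selected
    source samples plus the target data, then (2) actively keep only the
    source samples closest to the target centroid in that subspace."""
    sel = np.ones(len(Xs), dtype=bool)            # start with all source samples
    for _ in range(iters):
        # Step 1: joint subspace via PCA (a stand-in for the paper's
        # geometrical + weighted-distribution alignment).
        X = np.vstack([Xs[sel], Xt])
        Xc = X - X.mean(axis=0)
        _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
        P = Vt[:dim].T                            # projection matrix (d x dim)
        # Step 2: active selection -- keep the source samples nearest to
        # the target centroid in the subspace, to avoid negative transfer.
        d = np.linalg.norm(Xs @ P - (Xt @ P).mean(axis=0), axis=1)
        k = max(1, int(keep * len(Xs)))
        sel = np.zeros(len(Xs), dtype=bool)
        sel[np.argsort(d)[:k]] = True
    return P, sel
```

On toy data where half the source samples come from a distribution far from the target, the selection step discards exactly those dissimilar samples, which is the behavior the framework relies on to prevent negative transfer.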
