Abstract

In this paper, we explore the use of adversarial examples to improve a neural-network-based keyword spotting (KWS) system. Specifically, our system uses an effective, small-footprint attention-based neural network model. An adversarial example is an input that a model misclassifies even though it deviates only slightly from an original, correctly classified example. In the KWS task, it is natural to regard falsely alarmed or falsely rejected queries as a kind of adversarial example. In our work, given a well-trained attention-based KWS model, we first generate adversarial examples using the fast gradient sign method (FGSM) and find that these examples can dramatically degrade KWS performance. Using these adversarial examples as augmented data to retrain the KWS model, we finally achieve a 45.6% relative false reject rate (FRR) reduction at a false alarm rate (FAR) of 1.0 per hour on a dataset collected from a smart speaker.
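The abstract's FGSM step can be illustrated with a minimal sketch. The code below is not the paper's implementation: it substitutes a toy binary logistic classifier for the attention-based KWS model, and the function and variable names (`fgsm_example`, `w`, `b`, `epsilon`) are hypothetical. FGSM itself is standard: the adversarial input is x + epsilon * sign(dL/dx), where L is the model's loss on the correctly labeled input.

```python
import numpy as np

def fgsm_example(w, b, x, y, epsilon=0.01):
    """Fast gradient sign method (FGSM) on a binary logistic model.

    A hypothetical stand-in for the paper's attention-based KWS model:
    perturb x in the direction that increases the cross-entropy loss,
    with per-feature step size epsilon.
    """
    z = np.dot(w, x) + b             # model logit
    p = 1.0 / (1.0 + np.exp(-z))     # sigmoid probability of class 1
    grad_x = (p - y) * w             # dL/dx of the cross-entropy loss
    return x + epsilon * np.sign(grad_x)

# Toy usage: the perturbed input moves the model away from the true label.
w = np.array([1.0, -2.0, 0.5])
b = 0.0
x = np.zeros(3)
y = 1
x_adv = fgsm_example(w, b, x, y, epsilon=0.1)
```

In the paper's setup, examples generated this way are then added back as augmented training data, so the retrained model learns to classify them correctly.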
