Abstract

Few-shot learning (FSL) improves the generalization of neural network classifiers to unseen classes and tasks using only a small number of annotated examples. Recently, FSL has been applied in the audio domain for various applications, but the focus has been mainly on accuracy. Here, we take a holistic view and investigate system aspects such as latency, storage, and memory requirements of few-shot learning methods for audio classification, in addition to improving accuracy with very deep models. To this end, we not only compare the performance of different few-shot learning methods but also, for the first time, design an end-to-end framework for smartphones and wearables that runs such methods entirely on-device. Our results indicate the need to collect large datasets with more classes, as we show that much higher gains can be obtained with very deep models on large datasets. Surprisingly, metric-based methods such as Prototypical Networks can be realized practically on-device, and quantization further reduces resource requirements by 50% while having no impact on accuracy for audio classification tasks.
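The metric-based approach mentioned above can be illustrated with a minimal sketch of Prototypical Network inference: each class prototype is the mean of its support embeddings, and a query is assigned to the nearest prototype. This is a generic illustration with toy embeddings, not the paper's on-device implementation; the embedding dimensions and example values are hypothetical.

```python
import numpy as np

def prototypes(support_emb, support_lbl, n_classes):
    # Class prototype = mean of the support embeddings for that class.
    return np.stack([support_emb[support_lbl == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Assign each query to its nearest prototype (Euclidean distance).
    d = np.linalg.norm(query_emb[:, None, :] - protos[None, :, :], axis=-1)
    return d.argmin(axis=1)

# Toy 2-way, 2-shot episode with 3-dimensional embeddings (illustrative values).
sup = np.array([[0.0, 0.0, 0.0], [0.2, 0.0, 0.0],
                [1.0, 1.0, 1.0], [0.8, 1.0, 1.0]])
lbl = np.array([0, 0, 1, 1])
protos = prototypes(sup, lbl, 2)
queries = np.array([[0.1, 0.1, 0.0], [0.9, 0.9, 1.0]])
print(classify(queries, protos))  # -> [0 1]
```

Because inference reduces to a mean over support embeddings and a nearest-neighbour search, no gradient computation is needed at adaptation time, which is what makes such metric-based methods attractive for resource-constrained devices.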
