Abstract

Environmental audio tagging aims to predict only the presence or absence of certain acoustic events in an acoustic scene of interest. In this paper we make contributions to audio tagging in two areas: acoustic modeling and feature learning. We propose a shrinking deep neural network (DNN) framework incorporating unsupervised feature learning to handle the multi-label classification task. For the acoustic modeling, a large set of contextual frames of the chunk are fed into the DNN to perform a multi-label classification for the expected tags, since only chunk-level (or utterance-level) rather than frame-level labels are available. Dropout and background noise aware training are also adopted to improve the generalization capability of the DNNs. For the unsupervised feature learning, we propose a symmetric or asymmetric deep de-noising auto-encoder (sDAE or aDAE) to generate new data-driven features from the Mel-Filter Bank (MFB) features. The new features, which are smoothed against background noise and made more compact with contextual information, further improve the performance of the DNN baseline. Compared with the standard Gaussian Mixture Model (GMM) baseline of the DCASE 2016 audio tagging challenge, our proposed method obtains a significant equal error rate (EER) reduction from 0.21 to 0.13 on the development set. The proposed aDAE system achieves a relative 6.7% EER reduction over the strong DNN baseline on the development set. Finally, the results also show that our approach achieves state-of-the-art performance, with an EER of 0.15 on the evaluation set of the DCASE 2016 audio tagging task, compared with 0.17 for the first-prize system of the challenge.
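The chunk-level, multi-label setup described above can be sketched as follows. This is a minimal illustrative toy, not the paper's actual network: the layer sizes, the 11-frame context window, the 7-tag output (the CHiME-home label set has 7 classes), and the mean-pooling of frame posteriors into a chunk score are all assumptions made here for the sketch. The key ideas it shows are stacking contextual MFB frames into one long input vector and using independent sigmoid outputs (rather than a softmax) so that several tags can be active at once.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy dimensions (not from the paper):
# 40 Mel-filter-bank coefficients per frame, an 11-frame context window,
# and 7 tags as in the CHiME-home label set.
n_mfb, context, n_tags = 40, 11, 7

def stack_context(frames, context):
    """Concatenate `context` neighbouring frames into one long input
    vector per centre frame (edges padded by repetition)."""
    half = context // 2
    padded = np.pad(frames, ((half, half), (0, 0)), mode="edge")
    return np.stack([padded[i:i + context].ravel()
                     for i in range(len(frames))])

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random toy weights standing in for a trained DNN.
W1 = rng.normal(scale=0.01, size=(n_mfb * context, 64))
W2 = rng.normal(scale=0.01, size=(64, n_tags))

frames = rng.normal(size=(100, n_mfb))   # one chunk of MFB features
x = stack_context(frames, context)       # shape (100, 440)
h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
frame_probs = sigmoid(h @ W2)            # per-frame sigmoid tag posteriors
chunk_probs = frame_probs.mean(axis=0)   # pooled chunk-level tag scores
print(chunk_probs.shape)                 # prints (7,)
```

Because each output unit is an independent sigmoid, thresholding `chunk_probs` per tag yields the presence/absence decisions that the EER metric evaluates.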

Highlights

  • As smart mobile devices have become widely used in recent years, huge amounts of multimedia recordings are generated. (Manuscript received July 12, 2016; revised November 17, 2016; accepted February 24, 2017.)

  • It is clear that the proposed deep neural network (DNN)-based approaches outperform the Support Vector Machine (SVM) and Gaussian mixture model (GMM) baselines across the five-fold evaluations

  • The GMM baseline outperforms the SVM methods

Summary

INTRODUCTION

As smart mobile devices have become widely used in recent years, huge amounts of multimedia recordings are generated. Deep models have been widely used for environmental audio tagging [18], [19], a task newly proposed in the DCASE 2016 challenge [11] based on the CHiME-home dataset [20]. However, it is still not clear what input features, objective functions, and model structures are appropriate for deep-learning-based audio tagging. We propose a robust deep learning framework for the audio tagging task, focusing mainly on two parts: acoustic modeling and unsupervised feature learning. For the latter, we propose a symmetric or asymmetric deep de-noising auto-encoder (sDAE or aDAE) based unsupervised method to generate new features from the basic features.
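The de-noising auto-encoder idea behind the sDAE/aDAE features can be sketched as below. This is an illustrative toy, not the paper's trained model: the layer sizes, the Gaussian corruption level, and the use of random untrained weights are all assumptions for the sketch. It shows the core mechanism: the input is corrupted with noise, the network is asked to reconstruct the clean input, and the bottleneck code, which is forced to be compact and noise-robust, is taken as the new learned feature.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed toy sizes (not from the paper): a stacked-context MFB input of
# 440 dimensions compressed to a 50-dimensional bottleneck code.
n_in, n_code = 440, 50

# Random untrained weights standing in for a trained encoder/decoder.
W_enc = rng.normal(scale=0.01, size=(n_in, n_code))
W_dec = rng.normal(scale=0.01, size=(n_code, n_in))

def dae_forward(x, noise_std=0.1):
    """Corrupt the input with Gaussian noise, encode it to a compact
    bottleneck code, then decode a reconstruction of the clean input.
    The code is what gets used as the new data-driven feature."""
    corrupted = x + rng.normal(scale=noise_std, size=x.shape)
    code = np.tanh(corrupted @ W_enc)   # compact, noise-smoothed feature
    recon = code @ W_dec                # trained to match the clean input
    return code, recon

x = rng.normal(size=(32, n_in))         # a mini-batch of basic features
code, recon = dae_forward(x)
mse = np.mean((recon - x) ** 2)         # reconstruction loss to minimise
print(code.shape, recon.shape)          # prints (32, 50) (32, 440)
```

Training minimises `mse` over clean/corrupted pairs; in the asymmetric (aDAE) variant the encoder and decoder do not mirror each other's layer structure, whereas the symmetric (sDAE) variant uses mirrored layers.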

ROBUST DNN-BASED AUDIO TAGGING
DNN-Based Multi-Label Classification
Dropout for the Over-Fitting Problem
Background Noise Aware Training
Alternative Input Features for Audio Tagging
PROPOSED DEEP ASYMMETRIC DAE
DCASE2016 Data Set for Audio Tagging
Experimental Setup
Compared Methods
Overall Evaluations
Evaluation Set
Evaluations for the Size of the Training Dataset
Audio Tagging Using Gaussian Mixture Models
Audio Tagging Using Multiple Instance SVM