Abstract

Acoustic Event Detection (AED) is concerned with the recognition of sounds produced by humans, by objects handled by humans, or by nature. Detecting acoustic events is an important task for intelligent systems that are expected to recognize not only speech but also the sounds of indoor and outdoor environments, with applications in information retrieval, audio-based surveillance, and monitoring. Current systems for detecting and classifying events in everyday monophonic audio are mature enough to extract features and detect isolated events fairly accurately, but accuracy remains low on large datasets and on noisy or overlapping audio events. Most real-life sounds are polyphonic, and events partially overlap, which makes them harder to detect. In this work we discuss these issues in acoustic event detection and feature extraction. We use the DCASE dataset, published for the international IEEE AASP challenge on acoustic event detection, which includes live recordings made in an office environment. MFCC is a technique commonly used for feature extraction in speech and acoustic event analysis. We propose to use a Gabor filterbank in addition to MFCC coefficients for feature analysis, and a decision tree algorithm for classification, which gives better classification and detection results. Finally, we compare our proposed system with the systems evaluated on the DCASE dataset and conclude that our technique gives the best F-score for event detection.
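The pipeline described above (MFCC features, supplemented with Gabor-filterbank information, classified by a decision tree) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the library choices (librosa, scikit-image, scikit-learn), the feature summarization by mean/std, and the file names and labels are all assumptions for demonstration.

```python
# Minimal sketch of an MFCC + Gabor-filter + decision-tree pipeline (illustrative only).
import numpy as np
import librosa
from skimage.filters import gabor
from sklearn.tree import DecisionTreeClassifier

def extract_features(path, sr=16000, n_mfcc=13):
    """Return a fixed-length vector: mean/std of MFCCs plus mean/std of a
    Gabor-filtered log-mel spectrogram (an illustrative way to combine the two)."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)      # (n_mfcc, frames)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
    log_mel = librosa.power_to_db(mel)
    gabor_real, _ = gabor(log_mel, frequency=0.25)              # 2-D Gabor filter over the spectrogram
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1),
                           [gabor_real.mean(), gabor_real.std()]])

# Hypothetical clips and labels; DCASE office-environment recordings would go here.
train_files = ["clip_knock.wav", "clip_speech.wav"]
train_labels = ["door_knock", "speech"]
X = np.vstack([extract_features(f) for f in train_files])
clf = DecisionTreeClassifier(max_depth=10).fit(X, train_labels)
print(clf.predict([extract_features("clip_knock.wav")]))
```

In an actual evaluation, the classifier would be trained on frame- or segment-level features from the DCASE development recordings and scored with the challenge's F-score metric.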
