Abstract
When we think of audio data, we usually think of music and speech; however, audio encompasses a far wider range of sounds. The human brain can identify sounds such as two vehicles colliding, a person crying, or an explosion: on hearing them, we recognize both the source of the sound and the event that caused it. Artificial systems can be built to detect acoustic events much as humans do. Acoustic event detection (AED) is the technology for doing so: it can not only detect that an acoustic event occurred, but also determine its time of occurrence and duration. This paper applies convolutional neural networks (CNNs) to the classification of environmental sounds associated with particular acoustic events. Classification and detection of acoustic events has numerous real-world applications, including anomaly detection in industrial instruments and machinery, smart home systems, security applications, audio tagging, and assistive systems for hearing-impaired individuals. Although environmental sound covers a large variety of audio, this study focuses on a set of urban sounds and applies CNNs, which have traditionally been used to classify image data, to our analysis of audio data. Given a sample audio file, the model must assign a classification label together with a corresponding confidence score.
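The abstract does not include an implementation, but the pipeline it describes (audio represented as a spectrogram, passed through convolutional layers, and reduced to a class label with a score) can be sketched minimally in NumPy. Everything below is illustrative and assumed rather than taken from the paper: the weights are random and untrained, the input is a dummy log-mel-style array standing in for a real audio file, and the class names follow the UrbanSound8K taxonomy commonly used for urban sound classification.

```python
import numpy as np

rng = np.random.default_rng(0)

# Dummy "log-mel spectrogram": 64 mel bands x 128 time frames.
# In a real system this would be computed from an audio file.
spectrogram = rng.standard_normal((64, 128))

def conv2d(x, kernels):
    """Valid 2-D convolution of x (H, W) with kernels (K, kh, kw) -> (K, H', W')."""
    K, kh, kw = kernels.shape
    H, W = x.shape
    out = np.empty((K, H - kh + 1, W - kw + 1))
    for k in range(K):
        for i in range(H - kh + 1):
            for j in range(W - kw + 1):
                out[k, i, j] = np.sum(x[i:i + kh, j:j + kw] * kernels[k])
    return out

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Random, untrained weights -- this shows only the data flow, not a trained model.
kernels = rng.standard_normal((8, 3, 3)) * 0.1
feature_maps = np.maximum(conv2d(spectrogram, kernels), 0.0)  # conv + ReLU
pooled = feature_maps.mean(axis=(1, 2))                       # global average pooling
W = rng.standard_normal((10, 8)) * 0.1                        # dense layer: 10 classes
probs = softmax(W @ pooled)

# Class names assumed from the UrbanSound8K dataset, not from the paper.
classes = ["air_conditioner", "car_horn", "children_playing", "dog_bark",
           "drilling", "engine_idling", "gun_shot", "jackhammer",
           "siren", "street_music"]
pred = int(np.argmax(probs))
print(f"predicted: {classes[pred]} (confidence {probs[pred]:.2f})")
```

A trained CNN would learn the kernel and dense-layer weights from labeled audio; the final softmax vector is what supplies both the predicted label (its argmax) and the confidence score attached to it.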