Abstract

Audio classification is a fundamental step in many applications such as content-based audio retrieval and audio indexing. In this work, we present a novel scheme for classifying an audio signal into three categories: speech, music without voice (instrumental), and music with voice (song). A hierarchical approach is adopted. At the first stage, signals are categorized as speech or music using an audio texture derived from simple features, namely the zero crossing rate (ZCR) and short time energy (STE). The proposed audio texture captures contextual information and summarizes the frame-level features. At the second stage, music is further classified as instrumental or song based on Mel frequency cepstral coefficients (MFCC). A classifier based on Random Sample Consensus (RANSAC), capable of handling a wide variety of data, is used. Experimental results indicate the effectiveness of the proposed scheme.

Electronic supplementary material: the online version of this article (doi:10.1186/2193-1801-2-526) contains supplementary material, which is available to authorized users.
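The two frame-level features used at the first stage, ZCR and STE, can be computed as in the following sketch. The frame length, hop size, and framing scheme are illustrative assumptions, not the paper's exact settings:

```python
import numpy as np

def frame_signal(x, frame_len, hop):
    """Split a 1-D signal into overlapping frames of length frame_len."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def zero_crossing_rate(frames):
    """Fraction of consecutive-sample sign changes within each frame."""
    signs = np.sign(frames)
    return np.mean(np.abs(np.diff(signs, axis=1)) > 0, axis=1)

def short_time_energy(frames):
    """Mean squared amplitude of each frame."""
    return np.mean(frames ** 2, axis=1)

# Example: 1 s of a 440 Hz tone sampled at 8 kHz,
# 25 ms frames (200 samples) with a 10 ms hop (80 samples)
fs = 8000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 440 * t)
frames = frame_signal(x, frame_len=200, hop=80)
zcr = zero_crossing_rate(frames)   # one ZCR value per frame
ste = short_time_energy(frames)    # one STE value per frame
```

A pure tone yields a constant STE of about 0.5 and a low, steady ZCR, whereas speech typically alternates between high-ZCR unvoiced frames and high-STE voiced frames; the paper's audio texture summarizes exactly this kind of frame-to-frame behaviour.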

Highlights

  • With the rapid growth of multimedia technology, it has become quite easy to amass an audio library of huge volume

  • In this work, we have presented a hierarchical scheme for classifying audio signals into three categories: speech, music without voice, and music with voice

  • Audio texture derived from zero crossing rate (ZCR) and short time energy (STE) co-occurrence matrices can successfully discriminate speech from music


Summary

Introduction

With the rapid growth of multimedia technology, it has become quite easy to amass an audio library of huge volume, which has motivated us to adopt a hierarchical approach. Since the proposed audio texture based on ZCR and STE can clearly discriminate speech from music, speech/music classification is taken up at the first stage. Taking the ZCR- and STE-based co-occurrence matrix features together, a 10-dimensional feature vector is formed, which acts as the descriptor of an audio signal for speech/music classification. Each element of the feature vector is normalized to the range [0, 1]. At the first stage, RANSAC models the signals as speech or music, taking the audio texture as the feature, and classification is carried out. At the second stage, a 16-dimensional feature vector comprising FFT-based perceptual features and MFCC is used to separate instrumental music from song.
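As a rough illustration of how per-frame features might be summarized into a co-occurrence-based texture and then normalized to [0, 1], consider the following sketch. The number of quantization levels, the texture statistics (energy, entropy), and the toy ZCR sequence are our own assumptions; the paper's exact construction of the 10-dimensional descriptor is not reproduced here:

```python
import numpy as np

def cooccurrence(seq, n_levels=8):
    """Quantize a per-frame feature into n_levels bins and count
    transitions between consecutive frames. This is an illustrative
    texture matrix, not the paper's exact construction."""
    lo, hi = seq.min(), seq.max()
    q = np.minimum(((seq - lo) / (hi - lo + 1e-12) * n_levels).astype(int),
                   n_levels - 1)
    C = np.zeros((n_levels, n_levels))
    for a, b in zip(q[:-1], q[1:]):
        C[a, b] += 1
    return C / max(len(seq) - 1, 1)   # normalize counts to probabilities

def minmax_01(v):
    """Scale a feature vector into [0, 1], as the summary describes."""
    v = np.asarray(v, dtype=float)
    return (v - v.min()) / (v.max() - v.min() + 1e-12)

# Toy per-frame ZCR sequence: speech-like alternation of low-ZCR voiced
# frames and high-ZCR unvoiced frames, plus a little noise
rng = np.random.default_rng(0)
zcr_seq = np.where(rng.random(500) < 0.5, 0.05, 0.4) + rng.normal(0, 0.01, 500)
C = cooccurrence(zcr_seq)

# Hypothetical texture statistics that could feed such a descriptor
energy = np.sum(C ** 2)
entropy = -np.sum(C[C > 0] * np.log2(C[C > 0]))
```

Speech-like signals, which hop between distant quantization bins, spread mass off the diagonal of the co-occurrence matrix, while sustained musical frames concentrate it near the diagonal; scalar statistics of this matrix are what make a compact texture descriptor possible.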
