Abstract
Audio diarization is the process of partitioning an input audio stream into homogeneous regions according to their audio sources. These sources can include audio type (speech, music, background noise, etc.), speaker identity, and channel characteristics. With the continually increasing volume of spoken documents, including broadcasts, voice mails, meetings, and telephone conversations, diarization has received a great deal of interest in recent years, as it significantly impacts the performance of automatic speech recognition and audio indexing systems. Speaker diarization is a subtype of audio diarization in which the speech segments of the signal are partitioned by speaker. It answers the question "Who spoke when?" and is divided into two modules: speaker segmentation and speaker clustering. This chapter discusses the problem of automatically detecting the speaker change points present in a given audio stream, without prior acoustic information on the speakers. We introduce a new unsupervised speaker segmentation technique based on one-class support vector machines (1-SVMs) that is robust to different acoustic conditions. We evaluated the robustness of this method by segmenting different types of audio streams (broadcast news, meetings, and telephone conversations) and comparing the results with model-selection segmentation techniques based on the Bayesian information criterion (BIC).
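As context for the BIC baseline mentioned above, the classic model-selection test for a speaker change point compares modelling a feature window with one Gaussian against splitting it into two Gaussians at a candidate frame. The sketch below (an illustrative reconstruction of this standard test, not the chapter's 1-SVM method; the function name and the penalty weight `lam` are choices of this example) computes the delta-BIC value, where a positive value favours a change at frame `t`:

```python
import numpy as np

def delta_bic(X, t, lam=1.0):
    """Delta-BIC for a candidate change point t in a feature matrix X (N x d).

    Positive values favour the two-model hypothesis, i.e. a speaker
    change at frame t. lam is the usual penalty weight (1.0 in the
    original BIC formulation).
    """
    N, d = X.shape

    def logdet_cov(Y):
        # log-determinant of the maximum-likelihood covariance estimate
        cov = np.atleast_2d(np.cov(Y, rowvar=False, bias=True))
        _, ld = np.linalg.slogdet(cov)
        return ld

    # model-complexity penalty: d mean parameters + d(d+1)/2 covariance parameters
    penalty = 0.5 * lam * (d + 0.5 * d * (d + 1)) * np.log(N)
    return (0.5 * N * logdet_cov(X)
            - 0.5 * t * logdet_cov(X[:t])
            - 0.5 * (N - t) * logdet_cov(X[t:])
            - penalty)
```

Scanning `delta_bic` over candidate frames and keeping the positive local maxima yields the BIC segmentation that the chapter uses as its comparison baseline.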