Abstract

Continuous Speech Separation (CSS) has been proposed to address speech overlaps during the analysis of realistic meeting-like conversations by eliminating any overlaps before further processing. CSS separates a recording of arbitrarily many speakers into a small number of overlap-free output channels, where each output channel may contain speech of multiple speakers. Often, a separation model is trained with Utterance-level Permutation Invariant Training (uPIT), which maps each speaker exclusively to one output channel, and is applied in a sliding-window approach called stitching. Recently, we introduced an alternative training scheme called Graph-PIT that teaches the separator to produce a speaker-shared output channel format without stitching. It can handle an arbitrary number of speakers as long as the number of overlapping speakers is never larger than the number of output channels. Models trained in this way are able to perform segment-less CSS, i.e., without stitching, and achieve comparable, and often better, separation quality than the conventional CSS with uPIT and stitching. In this contribution, we further investigate the Graph-PIT training scheme. We show in extended experiments that Graph-PIT also works in challenging reverberant conditions. We simplify the training schedule for Graph-PIT with the recently proposed Source-Aggregated Signal-to-Distortion Ratio (SA-SDR) loss, which eliminates unfavorable properties of the previously used A-SDR loss to enable training with Graph-PIT from scratch. Furthermore, we introduce novel signal-level evaluation metrics for meeting scenarios, namely the source-aggregated scale- and convolution-invariant Signal-to-Distortion Ratio (SA-SI-SDR and SA-CI-SDR), which generalize the commonly used SDR-based metrics to the CSS case.
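To illustrate the source-aggregation idea behind SA-SDR, the sketch below pools the energies of all reference signals and all error signals before forming a single ratio, instead of averaging per-source SDR values. This is a minimal illustration based on the abstract's description only; the function name `sa_sdr`, the numpy-based formulation, and the omission of scale/convolution invariance are assumptions, not the authors' implementation.

```python
import numpy as np

def sa_sdr(references, estimates):
    """Source-Aggregated SDR sketch (assumption: plain, non-invariant form).

    references, estimates: arrays of shape (num_sources, num_samples).
    Energies are summed over ALL sources before the ratio is taken,
    so a single short or silent source cannot dominate the loss the
    way it can when per-source SDRs are averaged.
    """
    references = np.asarray(references, dtype=float)
    estimates = np.asarray(estimates, dtype=float)
    signal_energy = np.sum(references ** 2)               # aggregated over sources
    error_energy = np.sum((references - estimates) ** 2)  # aggregated over sources
    return 10.0 * np.log10(signal_energy / error_energy)
```

For example, uniformly scaling every estimate by 0.5 leaves an error with one quarter of the signal energy, giving an SA-SDR of 10·log10(4) ≈ 6.02 dB regardless of how the energy is distributed across sources.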
