Abstract

Retrieving TV talk-show speakers based solely on visual face recognition is difficult because of the significant visual variation caused by illumination, pose, size, and expression, which can exceed the variation due to identity. Fortunately, TV talk-shows often follow specific visual production styles and are accompanied by other modalities, such as the audio transcript. Hence, this paper presents a speaker retrieval framework that associates the who and when information provided by the audio transcript with a set of visual clusters. First, to obtain the visual clusters, an unsupervised speaker identity clustering strategy is proposed, which groups appearances of the same speaker together without knowing exactly who he or she is. Then, to identify the specific speaker for each group, we propose an association strategy in which the search is initially limited to the clusters corresponding to when the queried speaker is speaking, followed by a graph-based densest-subgraph refinement. Comprehensive experiments on 3 h of the French TV talk-show “Le Grand Echiquier” provided by the K-space project show satisfactory results. Moreover, evaluating the proposed association strategy on the more challenging MediaEval 2015 task, using only the provided speaker diarization and face tracking modules, yields state-of-the-art performance, demonstrating the effectiveness of the proposed association strategy.
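The densest-subgraph refinement mentioned above can be illustrated with the classic greedy peeling heuristic (Charikar's 2-approximation). This is only a sketch of the general technique, not the paper's exact implementation; the node names, edges, and the face-similarity graph itself are illustrative assumptions.

```python
def densest_subgraph(nodes, edges):
    """Greedy peeling: repeatedly remove the minimum-degree node and
    return the intermediate node set maximizing density |E| / |V|."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    remaining = set(nodes)
    m = len(edges)
    best_density, best_set = 0.0, set(remaining)
    while remaining:
        density = m / len(remaining)
        if density >= best_density:
            best_density, best_set = density, set(remaining)
        # peel the node with the fewest edges inside the remaining graph
        v = min(remaining, key=lambda x: len(adj[x]))
        for u in adj[v]:
            adj[u].discard(v)
        m -= len(adj[v])
        remaining.discard(v)
        adj[v] = set()
    return best_set

# Toy example: a tightly connected triangle {a, b, c} plus a pendant node d,
# standing in for face tracks linked by visual similarity.
faces = ["a", "b", "c", "d"]
links = [("a", "b"), ("b", "c"), ("a", "c"), ("c", "d")]
print(sorted(densest_subgraph(faces, links)))  # → ['a', 'b', 'c']
```

In the retrieval setting, nodes would be face clusters or tracks and edges would encode similarity; the densest subgraph then isolates the mutually consistent subset belonging to the queried speaker.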
