Abstract

Ever-increasing amounts of publicly available multimedia content containing speech have made spoken document retrieval (SDR) an active area of intensive research in the speech processing community. Much work has been dedicated to developing elaborate indexing and modeling techniques for representing spoken documents, but comparatively little to improving query formulations so that they better represent the information needs of users. The latter is critical to the success of an SDR system. In view of this, we present in this paper a novel use of a relevance language modeling framework for SDR. It not only inherits the merits of several existing techniques but also provides a principled way to render the lexical and topical relationships between a query and a spoken document. We further explore various ways to glean both relevance and non-relevance cues from the spoken document collection so as to enhance query modeling in an unsupervised fashion. In addition, we investigate representing the query and documents with different granularities of index features to work in conjunction with the various relevance and/or non-relevance cues. Empirical evaluations performed on the TDT (Topic Detection and Tracking) collections reveal that the methods derived from our modeling framework hold good promise for SDR and are very competitive with existing retrieval methods.
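
For orientation, relevance language modeling frameworks of this kind typically estimate a query-specific relevance model from the top-ranked documents of an initial retrieval pass. The sketch below follows the standard Lavrenko-and-Croft-style estimate and is an assumption about the general form rather than the exact formulation used in this work:

P(w \mid R_Q) \;\approx\; \sum_{D \in \mathcal{D}_{\mathrm{top}}} P(w \mid D)\, P(D \mid Q),
\qquad
P(D \mid Q) \;\propto\; P(D) \prod_{q \in Q} P(q \mid D),

where Q is the query, \mathcal{D}_{\mathrm{top}} denotes the top-ranked (pseudo-relevant) documents, and P(w \mid D) is the unigram language model of document D. Spoken documents can then be re-ranked by a divergence measure (e.g., KL divergence) between the enhanced query model and each document model.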
