Abstract

Video retrieval compares multimedia queries against items in a video collection along multiple dimensions and combines the resulting similarity scores into a final retrieval ranking. Although text is the most reliable feature for video retrieval, features from other modalities can provide complementary information. A reranking framework for video retrieval is presented that augments text-based retrieval with this additional evidence. A boosted reranking algorithm called co-retrieval is then introduced, which combines a boosting-type learning algorithm with a noisy label prediction scheme to automatically select the most useful (weak) features from multiple modalities. The proposed approach is evaluated with queries and video from the 65-hour test collection of the 2003 NIST TRECVID evaluation, where it achieves considerable improvement over several baseline retrieval algorithms.

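To make the general idea concrete, the sketch below illustrates one plausible reading of this kind of boosted reranking, not the paper's exact co-retrieval algorithm: the top-ranked items from the text baseline are treated as noisy positive labels, and an AdaBoost-style loop then selects the weak multimodal features that best agree with those labels and adds them to the combined ranking score. All names (e.g. co_retrieval_rerank, top_k pseudo-labeling) and details are illustrative assumptions.

```python
import numpy as np

def co_retrieval_rerank(text_scores, weak_features, n_rounds=10, top_k=20):
    """Illustrative boosted-reranking sketch (assumed, not the paper's method).

    text_scores:   (n_items,) baseline text-retrieval scores.
    weak_features: (n_items, n_features) similarity scores from other modalities.
    """
    n_items, n_feats = weak_features.shape

    # Noisy labels: assume the text baseline's top_k items are relevant (+1), rest -1.
    order = np.argsort(-text_scores)
    labels = -np.ones(n_items)
    labels[order[:top_k]] = 1.0

    # Start from the text baseline with uniform item weights.
    item_weights = np.ones(n_items) / n_items
    combined = text_scores.copy()

    for _ in range(n_rounds):
        # Each weak feature votes +1/-1 by thresholding at its own mean.
        votes = np.sign(weak_features - weak_features.mean(axis=0))
        # Weighted error of each weak feature against the noisy labels.
        errors = np.array([
            np.sum(item_weights * (votes[:, j] != labels)) for j in range(n_feats)
        ])
        j_best = int(np.argmin(errors))
        err = max(errors[j_best], 1e-12)
        if err >= 0.5:
            break  # no weak feature beats chance; stop boosting

        alpha = 0.5 * np.log((1.0 - err) / err)
        combined += alpha * weak_features[:, j_best]

        # Re-weight items so later rounds focus on those still mis-ordered.
        item_weights *= np.exp(-alpha * labels * votes[:, j_best])
        item_weights /= item_weights.sum()

    return np.argsort(-combined)  # reranked item indices, best first
```

In this reading, the text score anchors the ranking while each boosting round contributes one additional modality-specific feature weighted by how well it agrees with the pseudo-labels; the actual feature selection and label-noise handling used by co-retrieval are described in the full paper.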