Abstract

The increasing demand for high-resolution, high-quality video aggravates the burden on limited cluster storage and restricted bandwidth resources. Hence, video de-duplication in storage and transmission is becoming an important feature for video cloud storage and Content Delivery Network (CDN) service providers. However, current video de-duplication schemes mostly rely on URL-based solutions, which cannot handle non-cacheable content such as video: the same video content at various resolutions and qualities may carry completely different URL identifiers. In this paper, we propose a novel content-based video segment identification scheme that is invariant to the underlying codec and operational bit rates. It computes robust features from a triplet-loss deep learning network that captures the invariance of the same content under different coding tools and strategies. In addition, a scalable hashing solution is developed based on Fisher Vector aggregation of the convolutional features from the triplet-loss network. Furthermore, we apply a binary tree to select triplets, which improves the performance of the triplet-loss-based VGG network. Our simulation results show significant improvements in large-scale video repository de-duplication compared with the state-of-the-art method.
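As background for readers unfamiliar with the training objective the abstract refers to, the standard triplet loss pulls an anchor embedding toward a positive example (here, plausibly the same video content under a different codec or bit rate) and pushes it away from a negative (different content) by at least a margin. The sketch below is a minimal illustration with toy embeddings, not the paper's actual network; the function name, margin value, and vectors are illustrative assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    # Squared Euclidean distances to the positive and negative examples.
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    # Hinge form: zero once the negative is farther than the positive
    # by at least `margin`.
    return max(0.0, d_pos - d_neg + margin)

# Toy 2-D embeddings: anchor and positive are close, negative is far,
# so this triplet already satisfies the margin and incurs zero loss.
a = np.array([1.0, 0.0])
p = np.array([0.9, 0.1])
n = np.array([-1.0, 0.0])
print(triplet_loss(a, p, n))
```

During training, minimizing this loss over many such triplets drives embeddings of the same content (across resolutions and qualities) together, which is what makes the learned features usable as content identifiers for de-duplication.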
