Abstract

This paper addresses the task of assessing the visual relevance between two videos based on image content analysis. Video features extracted by pre-trained multi-level feature learning models are widely used for image and video representation. However, such features are often task-specific, and no single feature is optimal for all applications. Moreover, for reasons such as copyright and privacy, users may only have access to pre-computed video features rather than the original videos. Re-learned video features can be reused without going back to the original video content; for example, an affine transformation can map an existing feature into a new space. We propose a feature re-learning method for video recommendation. Rather than collecting more training data, we introduce a data augmentation approach that operates at both the frame level and the video level. Extensive experiments on a real-world dataset confirm the effectiveness of the proposed method and lend strong support to its performance for video recommendation.
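The core idea of re-learning via an affine transformation can be illustrated with a minimal sketch, assuming PyTorch and hypothetical feature dimensions; the paper's actual architecture, dimensions, and training loss may differ.

```python
# Minimal sketch (not the authors' code): re-learning pre-extracted video
# features with an affine transformation, then scoring video-to-video relevance.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineReLearning(nn.Module):
    """Map an off-the-shelf video feature into a new space: y = W x + b."""
    def __init__(self, in_dim=2048, out_dim=512):  # dimensions are assumptions
        super().__init__()
        self.affine = nn.Linear(in_dim, out_dim)  # the affine transformation

    def forward(self, x):
        return self.affine(x)

def video_relevance(model, feat_a, feat_b):
    """Cosine similarity between two re-learned video features."""
    return F.cosine_similarity(model(feat_a), model(feat_b), dim=-1)

# Usage with random stand-in features for two videos.
model = AffineReLearning()
feat_a = torch.randn(1, 2048)  # pre-computed feature of video A
feat_b = torch.randn(1, 2048)  # pre-computed feature of video B
print(video_relevance(model, feat_a, feat_b).item())
```

In practice the transformation would be trained on relevance labels (e.g., with a ranking loss), so that similarity in the new space reflects recommendation relevance rather than raw feature similarity.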
