Abstract
The movie domain is one of the most common scenarios used to test and evaluate recommender systems. These systems are often implemented through a collaborative filtering model, which relies exclusively on users' feedback on items and ignores content features. Content-based filtering models are nevertheless a potentially good strategy for recommendation, even though identifying a relevant semantic representation of items is not a trivial task. Several techniques have been employed to continuously improve the content representation of items in content-based recommender systems, including low-level and high-level features, text analysis, and social tags. Recent advances in deep learning, particularly in convolutional neural networks, are paving the way for better representations to be extracted from unstructured data. In this work, our main goal is to understand whether these networks can extract a sufficiently rich semantic representation of items to improve movie recommendation in content-based recommender systems. To this end, we propose DeepRecVis, a novel approach that represents items through features extracted from keyframes of movie trailers and leverages these features in a content-based recommender system. Experiments show that our proposed approach outperforms systems based on low-level feature representations.
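To make the pipeline described above concrete, the following Python sketch illustrates one plausible way to extract CNN features from trailer keyframes and use them in a content-based recommender. It is only an assumption-based illustration: the abstract does not specify the network, the pooling of keyframe features, or the similarity measure, so the choices here (a pretrained ResNet-50 backbone, mean pooling, cosine similarity) and the function names extract_item_vector and recommend are hypothetical.

    # Hedged sketch: CNN features from trailer keyframes as item representations
    # in a content-based recommender. Network, pooling, and similarity are
    # assumptions, not the paper's confirmed method.
    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Pretrained CNN used as a fixed feature extractor; replacing the final
    # classification layer with Identity yields a 2048-d vector per image.
    _backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
    _backbone.fc = torch.nn.Identity()
    _backbone.eval()

    _preprocess = T.Compose([
        T.Resize(256),
        T.CenterCrop(224),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    @torch.no_grad()
    def extract_item_vector(keyframe_paths):
        """Average the CNN features of a trailer's keyframes into one item vector."""
        feats = []
        for path in keyframe_paths:
            img = _preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
            feats.append(_backbone(img).squeeze(0))
        return torch.stack(feats).mean(dim=0)

    def recommend(user_profile, item_vectors, top_k=10):
        """Rank movies by cosine similarity between a user profile and item vectors.

        user_profile: 1-D tensor, e.g. the mean vector of items the user liked.
        item_vectors: dict mapping movie id -> item vector.
        """
        ids = list(item_vectors)
        matrix = torch.stack([item_vectors[i] for i in ids])
        scores = torch.nn.functional.cosine_similarity(matrix, user_profile.unsqueeze(0))
        ranked = sorted(zip(ids, scores.tolist()), key=lambda x: x[1], reverse=True)
        return ranked[:top_k]

In this sketch a user profile could simply be the mean of the item vectors of movies the user rated positively, which is one common baseline for content-based profiling; the paper may use a different aggregation.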