Abstract

We address the efficiency problem of Collaborative Filtering (CF) in the context of large user and item spaces. A promising solution is to hash users and items into binary codes and then make recommendations in a Hamming space. However, existing CF hashing methods concentrate mainly on modeling the user-item affinity while ignoring the user-user and item-item affinities, which incurs a large encoding loss and subsequently degrades recommendation accuracy. To this end, we propose a Binary Collaborative Filtering Ensemble (BCFE) framework that ensembles three widely used CF methods to simultaneously preserve the user-item, user-user, and item-item affinities in the Hamming space. To avoid the time-consuming computation of the full user-user and item-item affinity matrices, BCFE employs an anchor-based approximation via subspace clustering. Furthermore, we devise a Discretization-like Bit-wise Gradient Descent (DBGD) optimization algorithm that incorporates binary quantization into the learning stage and updates the binary codes bit by bit. Such a discretization-like algorithm yields higher-quality binary codes than the popular “two-stage” CF hashing schemes, and is much simpler than rigorous discrete optimization. Extensive experiments on three real-world datasets show that our BCFE approach significantly outperforms state-of-the-art CF hashing techniques.
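To make the efficiency argument concrete, the sketch below shows how recommendation in a Hamming space can work once users and items are hashed: scoring an item reduces to an XOR plus a popcount over packed binary codes. This is a minimal illustration under assumed conventions (64-bit codes packed into uint8 arrays, popcount via NumPy's unpackbits), not the paper's implementation.

```python
import numpy as np

def hamming_topk(user_code, item_codes, k=10):
    """Return indices of the k items whose binary codes are closest
    to the user's code in Hamming distance.

    user_code:  (n_bytes,) uint8 array -- one user's packed binary code
    item_codes: (n_items, n_bytes) uint8 array -- packed item codes
    """
    # XOR exposes the differing bits; unpacking and summing counts them.
    diff = np.bitwise_xor(item_codes, user_code)
    dists = np.unpackbits(diff, axis=1).sum(axis=1)
    return np.argsort(dists)[:k]

# Toy usage: 64-bit codes (8 bytes) for 5 hypothetical items.
rng = np.random.default_rng(0)
items = rng.integers(0, 256, size=(5, 8), dtype=np.uint8)
user = rng.integers(0, 256, size=8, dtype=np.uint8)
print(hamming_topk(user, items, k=3))
```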
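Likewise, a bit-by-bit update can be sketched as discrete coordinate descent: each bit is re-solved in closed form while the remaining bits are held fixed, so quantization stays inside the learning loop rather than being applied afterwards. The objective below (a plain inner-product fit to an affinity matrix S) is an assumption for illustration only; the paper's DBGD additionally preserves the user-user and item-item affinities and uses a gradient-based rule.

```python
import numpy as np

def update_user_bits(B, D, S):
    """One bit-by-bit sweep over the user codes B (entries in {-1, +1}).

    B: (n_users, r) user codes, D: (n_items, r) item codes,
    S: (n_users, n_items) observed affinity matrix (dense for brevity).
    Illustrative objective: minimize || S - B @ D.T ||_F^2 over bits of B.
    """
    n_users, r = B.shape
    for i in range(n_users):
        residual = S[i] - B[i] @ D.T            # current reconstruction error
        for k in range(r):
            # Remove bit k's contribution from the residual.
            partial = residual + B[i, k] * D[:, k]
            # The sign in {-1, +1} minimizing the squared error is
            # sign(partial . D_k), since the quadratic term is constant.
            new_bit = 1.0 if partial @ D[:, k] >= 0 else -1.0
            residual = partial - new_bit * D[:, k]
            B[i, k] = new_bit
    return B

# Toy usage: 4 users, 6 items, 8-bit codes.
rng = np.random.default_rng(1)
B = rng.choice([-1.0, 1.0], size=(4, 8))
D = rng.choice([-1.0, 1.0], size=(6, 8))
S = rng.standard_normal((4, 6))
print(update_user_bits(B, D, S))
```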
