Abstract

The electronic publication market is growing along with the electronic commerce market. Electronic publishing companies use recommendation systems to suggest various services to consumers and thereby increase sales. However, these recommendation systems suffer from low accuracy due to data sparsity. Moreover, previous deep neural collaborative filtering models draw on many dataset variables, such as user, author, and book information, and therefore require significant computing resources and training time. To address these issues, we propose a deep neural collaborative filtering model with feature extraction that uses only minimal data: user IDs, book IDs, and rating information. The proposed model comprises an input layer that ingests and embeds the user and product data, a feature extraction layer that extracts features through correlation analysis between the embedded user and product vectors, a multilayer perceptron, and an output layer. To improve the model's performance, its hyperparameters were determined with Bayesian optimization. The model was evaluated in a comparative experiment against collaborative filtering models in current use, on the public goodbooks-10k dataset; the results show that the low accuracy caused by data sparsity is considerably improved.
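The four-layer architecture described in the abstract can be summarized with a minimal sketch. The Keras code below is an illustrative reconstruction, not the authors' implementation: it assumes an element-wise product of the embedded user and book vectors as the feature extraction step, and all layer widths are placeholder choices.

```python
from tensorflow.keras import layers, Model

def build_dncf(num_users, num_items, embed_dim=32):
    # Input layer: the model consumes only (user ID, book ID) pairs.
    user_in = layers.Input(shape=(1,), name="user_id")
    book_in = layers.Input(shape=(1,), name="book_id")

    # Embed the sparse IDs into dense latent vectors.
    user_vec = layers.Flatten()(layers.Embedding(num_users, embed_dim)(user_in))
    book_vec = layers.Flatten()(layers.Embedding(num_items, embed_dim)(book_in))

    # Feature extraction layer (assumed form): an element-wise product as a
    # simple correlation signal between the user and book embeddings.
    interaction = layers.Multiply()([user_vec, book_vec])

    # Multilayer perceptron over the embeddings and their interaction.
    x = layers.Concatenate()([user_vec, book_vec, interaction])
    x = layers.Dense(64, activation="relu")(x)
    x = layers.Dense(32, activation="relu")(x)

    # Output layer: a single predicted rating.
    rating = layers.Dense(1, name="rating")(x)

    model = Model(inputs=[user_in, book_in], outputs=rating)
    model.compile(optimizer="adam", loss="mse", metrics=["mae"])
    return model
```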
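The Bayesian hyperparameter search mentioned in the abstract could be set up, for example, with KerasTuner. The search space below (embedding width and learning rate) is a hypothetical illustration rather than the configuration reported in the paper; the user and book counts are those of the goodbooks-10k dataset.

```python
import keras_tuner as kt
import tensorflow as tf

def build_tunable(hp):
    # Hypothetical search space: embedding width and learning rate.
    model = build_dncf(
        num_users=53_424,   # goodbooks-10k contains 53,424 users
        num_items=10_000,   # and 10,000 books
        embed_dim=hp.Int("embed_dim", 16, 64, step=16),
    )
    model.compile(
        optimizer=tf.keras.optimizers.Adam(hp.Choice("lr", [1e-2, 1e-3, 1e-4])),
        loss="mse",
        metrics=["mae"],
    )
    return model

tuner = kt.BayesianOptimization(build_tunable, objective="val_loss", max_trials=20)
# tuner.search([train_users, train_books], train_ratings,
#              validation_split=0.1, epochs=5)
```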
