Abstract

<p>With the rapid growth in the number and variety of mobile applications, it has become challenging to accurately classify and recommend mobile applications according to users’ individual requirements. Existing mobile application classification and recommendation methods, on the one hand, do not account for the correlation between large-scale data and the model; on the other hand, they do not fully exploit the fine-grained, multi-modal high-order and low-order interaction features of mobile applications. To tackle these problems, we propose a mobile application classification and recommendation method based on multi-modal feature fusion. The method first extracts the image and description features of a mobile application using an “involution residual network + pre-trained language representation” model (the TRedBert model). These features are then fused using the attention mechanism of the transformer model, and the fused features are fed to a softmax classifier to categorize the mobile applications. Finally, guided by the classification results, the method extracts high-order and low-order embedding features of the mobile app with a bilinear feature interaction model (FiBiNET), combining the Hadamard product and the inner product to achieve fine-grained high-order and low-order feature interaction, thereby updating the mobile app representation and completing the recommendation task. Multiple sets of comparison experiments were performed on a real dataset from Kaggle, the 365K IOS Apps Dataset, and the experimental results demonstrate that the proposed approach outperforms other methods in terms of Macro F1, Accuracy, AUC and Logloss.</p>
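The combination of Hadamard-product and inner-product interactions described above can be sketched with NumPy. This is a minimal illustration of the general idea, not the paper's implementation: the field count, embedding dimension, and the use of a single shared bilinear weight matrix are all assumptions made here for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

num_fields, dim = 4, 8                       # illustrative sizes (assumptions)
E = rng.standard_normal((num_fields, dim))   # one embedding vector per feature field
W = rng.standard_normal((dim, dim))          # shared bilinear weight matrix (assumption)

# All unordered field pairs (i, j), i < j
pairs = [(i, j) for i in range(num_fields) for j in range(i + 1, num_fields)]

# Hadamard-style bilinear interaction: (v_i W) ⊙ v_j keeps a dim-sized vector per pair
hadamard = np.stack([(E[i] @ W) * E[j] for i, j in pairs])

# Inner-product interaction: (v_i W) · v_j collapses each pair to a single scalar
inner = np.array([(E[i] @ W) @ E[j] for i, j in pairs])

# Concatenating both granularities yields one interaction feature vector
features = np.concatenate([hadamard.reshape(-1), inner])
print(features.shape)  # 6 pairs * 8 dims + 6 scalars -> (54,)
```

The Hadamard branch preserves per-dimension (fine-grained) interaction signals, while the inner-product branch summarizes each pair as a scalar; concatenating the two gives the downstream network both views of the pairwise interactions.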
