Abstract

In mobile scenarios, there is a need for general user representations that can serve multiple target tasks. However, related research faces several challenges, such as the difficulty of learning a representation that achieves both strong generalization and strong task performance. To address these problems, we proposed a network for downstream-adaptable mobile user modeling, which employs a novel fine-tuning strategy to optimize performance on several downstream tasks. Additionally, we designed a time-difference module to eliminate the impact of low-frequency and temporally non-uniform app usage behavior, and a parallel decoder structure to incorporate multi-type features while minimizing information loss. We evaluated our method on a real-world dataset of 100,000 mobile users and three downstream tasks (i.e., age prediction, gender prediction, and app recommendation). The experimental results showed that our method significantly outperforms existing methods, achieving 96.5% ACC on gender prediction, 68.1% ACC on age prediction, and 64.2% Recall@5 on app recommendation. These results indicate that our method performs well in terms of both generalization and task performance, and suggest that it is promising for inference on unseen tasks.
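To make the idea of the time-difference module concrete, the following is a minimal, hypothetical sketch (not the paper's actual implementation): it assumes the module embeds the log-scaled gap between consecutive app-usage events and adds that embedding to each event representation, so that low-frequency and temporally non-uniform behavior is encoded explicitly rather than distorting the sequence model. All names, bucket choices, and dimensions are illustrative assumptions.

```python
# Hypothetical sketch of a time-difference module: embeds the (log-bucketized)
# gap between consecutive app-usage events so non-uniform timing is modeled
# explicitly. Names and bucketing are assumptions, not the authors' code.
import torch
import torch.nn as nn


class TimeDifferenceModule(nn.Module):
    def __init__(self, d_model: int, num_buckets: int = 32):
        super().__init__()
        # One learnable embedding per log-scale time-gap bucket.
        self.gap_embedding = nn.Embedding(num_buckets, d_model)
        self.num_buckets = num_buckets

    def forward(self, event_emb: torch.Tensor, timestamps: torch.Tensor) -> torch.Tensor:
        # event_emb:  (batch, seq_len, d_model) embeddings of app-usage events
        # timestamps: (batch, seq_len) event times in seconds, non-decreasing
        gaps = timestamps.diff(dim=1, prepend=timestamps[:, :1])  # seconds since previous event
        buckets = torch.log1p(gaps.clamp(min=0)).long()           # coarse log-scale bucketing
        buckets = buckets.clamp(max=self.num_buckets - 1)
        return event_emb + self.gap_embedding(buckets)            # gap-aware event representation


if __name__ == "__main__":
    module = TimeDifferenceModule(d_model=64)
    events = torch.randn(2, 10, 64)
    times = torch.cumsum(torch.randint(1, 100_000, (2, 10)).float(), dim=1)
    print(module(events, times).shape)  # torch.Size([2, 10, 64])
```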
