Search result personalization and re-ranking in professional domains pose significant challenges due to complex domain-specific terminology and varying levels of user expertise. This paper proposes a novel framework that integrates Large Language Models (LLMs) with personalized search re-ranking for professional domains. The framework comprises four key components: LLM-based user profile construction, professional domain knowledge encoding, cross-encoder re-ranking, and dynamic weight allocation. The user profile construction module uses historical interactions and professional behaviors to generate comprehensive user representations, while the domain knowledge encoding module captures specialized terminology and its relationships. A cross-encoder architecture performs deep semantic matching between queries and documents, and the resulting scores are combined through a dynamic weight allocation strategy. Experimental evaluation on three professional datasets (MedSearch, LegalDoc, and TechQuery) demonstrates significant improvements over existing methods, with 15.2% higher nDCG@10 and 12.8% higher MRR than traditional ranking approaches. The framework maintains stable performance under varying query loads while effectively handling domain-specific terminology and differences in user expertise. Ablation studies show that each component contributes substantially, with LLM-based user profile construction and domain knowledge encoding yielding improvements of 7.5% and 6.3%, respectively. The proposed approach establishes new benchmarks for professional search systems and offers insights into the effective integration of LLMs in domain-specific information retrieval.
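To make the re-ranking stage concrete, the following is a minimal sketch of combining cross-encoder relevance scores with a personalization term under a simple dynamic weight. The model name, the term-overlap profile score, and the query-length heuristic for the weight are illustrative assumptions standing in for the paper's LLM-built user profiles and weight allocation strategy, not the authors' implementation.

```python
# Sketch: cross-encoder re-ranking blended with a personalization score.
# The profile construction and dynamic weighting here are simplified stand-ins.
from sentence_transformers import CrossEncoder


def profile_score(doc: str, profile_terms: set[str]) -> float:
    """Fraction of the user's profile terms (e.g., domain vocabulary drawn from
    their search history) that appear in the document."""
    tokens = set(doc.lower().split())
    return len(tokens & profile_terms) / max(len(profile_terms), 1)


def dynamic_weight(query: str) -> float:
    """Toy weighting rule: rely more on personalization for short, ambiguous
    queries and more on query-document relevance otherwise (an assumption)."""
    return 0.5 if len(query.split()) <= 2 else 0.2


def rerank(query: str, docs: list[str], profile_terms: set[str],
           model: CrossEncoder) -> list[tuple[str, float]]:
    # Deep semantic matching between the query and each candidate document.
    relevance = model.predict([(query, d) for d in docs])
    alpha = dynamic_weight(query)
    scored = [
        (d, (1 - alpha) * float(r) + alpha * profile_score(d, profile_terms))
        for d, r in zip(docs, relevance)
    ]
    return sorted(scored, key=lambda x: x[1], reverse=True)


if __name__ == "__main__":
    # Public MS MARCO cross-encoder used only as a stand-in relevance model.
    model = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
    profile = {"myocardial", "infarction", "troponin"}  # hypothetical clinician profile
    docs = [
        "Troponin elevation after myocardial infarction: diagnostic thresholds.",
        "General tips for a heart-healthy diet.",
    ]
    for doc, score in rerank("MI markers", docs, profile, model):
        print(f"{score:.3f}  {doc}")
```

In this sketch the weight shifts toward the profile term for the short, ambiguous query "MI markers", which is one plausible reading of how dynamic weight allocation could trade off relevance and personalization.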