Abstract

As a key component of e-commerce computing, product representation learning (PRL) has benefited a wide range of applications, such as product matching, search, and categorization. Nonetheless, existing PRL approaches have poor language understanding ability because they cannot capture contextualized semantics. In addition, the representations learned by existing methods transfer poorly to new products. Inspired by the recent development of pre-trained language models (PLMs), in this paper we attempt to adapt PLMs for PRL to mitigate the above issues. To this end, we develop KINDLE, a <b>K</b>nowledge-dr<b>I</b>ven pre-traini<b>N</b>g framework for pro<b>D</b>uct representation <b>LE</b>arning, which preserves contextual semantics and multi-faceted product knowledge <i>robustly</i> and <i>flexibly</i>. Specifically, we first extend traditional one-stage pre-training to a two-stage framework, consisting of a language acquisition stage and a knowledge acquisition stage, in which a deliberate knowledge encoder ensures smooth knowledge fusion into the PLM without interfering with its original function. A hierarchical skeleton attention compatible with the PLM is then introduced to capture the key information of products. In addition, we propose a multi-objective heterogeneous embedding method to represent thousands of knowledge elements; this helps KINDLE calibrate knowledge noise and sparsity automatically by replacing isolated classes with these embeddings as training targets in the knowledge acquisition tasks. Furthermore, an input-aware gating network is proposed to automatically select the most relevant knowledge for different downstream tasks. Finally, extensive experiments demonstrate the advantages of KINDLE over state-of-the-art baselines across three downstream tasks (product matching, personalized product search, and product classification) in both regular and zero-shot settings.
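To make the input-aware gating idea concrete, below is a minimal sketch (not the authors' implementation) of how such a network could weight several knowledge facets given a product's contextual representation. The module name, the use of a PLM [CLS] vector, and the facet count are assumptions introduced purely for illustration.

```python
import torch
import torch.nn as nn

class InputAwareKnowledgeGate(nn.Module):
    """Hypothetical sketch: map a product's contextual representation to
    weights over several knowledge facets, then return the weighted mixture
    of the facet embeddings."""

    def __init__(self, hidden_dim: int, num_facets: int):
        super().__init__()
        # Small MLP producing one logit per knowledge facet.
        self.gate = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.Tanh(),
            nn.Linear(hidden_dim, num_facets),
        )

    def forward(self, product_repr: torch.Tensor, facet_embs: torch.Tensor) -> torch.Tensor:
        # product_repr: (batch, hidden_dim), e.g. a PLM's [CLS] vector (assumption)
        # facet_embs:   (batch, num_facets, hidden_dim) per-facet knowledge embeddings
        weights = torch.softmax(self.gate(product_repr), dim=-1)   # (batch, num_facets)
        fused = (weights.unsqueeze(-1) * facet_embs).sum(dim=1)    # (batch, hidden_dim)
        return fused


# Toy usage with illustrative shapes
gate = InputAwareKnowledgeGate(hidden_dim=768, num_facets=3)
product_repr = torch.randn(4, 768)
facet_embs = torch.randn(4, 3, 768)
fused_knowledge = gate(product_repr, facet_embs)   # (4, 768)
```

The softmax gate lets different downstream inputs emphasize different knowledge facets, which is the intuition the abstract describes; the actual KINDLE architecture may differ in how the gate is parameterized and where it is applied.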
