A well-designed recommendation system should not only leverage multi-typed interactions (such as page view, add-to-cart, and purchase) to better identify user preferences, but also offer high performance, low complexity, and strong interpretability. However, many existing solutions for multi-behavior recommendation fall short of intuitively modeling real-world scenarios, resulting in overly complex models with massive parameter counts and cumbersome components. In particular, they share two critical limitations: (1) Some pioneering models are built upon the strict assumption of cascade effects across behaviors, which contradicts the diverse behavior paths observed in practical applications. (2) Existing approaches fail to explicitly capture the idiosyncrasies of individual users and often neglect the inherent properties of the items involved in multi-behavior interactions. To this end, we propose a novel Directed Acyclic Graph Convolutional Network (DA-GCN) for the multi-behavior recommendation task. Specifically, we pinpoint the partial order relations within the monotonic behavior chain and extend them to personalized directed acyclic behavior graphs that exploit behavior dependencies. Then, a GCN-based directed edge encoder is employed to distill the rich collaborative signals embodied by each directed edge. In light of the information flows over the directed acyclic structure, we propose an attentive aggregation module that gathers messages from all potential antecedent behaviors, each representing a distinct perspective on the behavior at which they terminate. We thus obtain comprehensive representations of each follow-up behavior through learnable distributions over its preceding behaviors, simultaneously reflecting the personalized interaction patterns of users and the underlying properties of items. Finally, we design a customized multi-task learning objective for flexible joint optimization. Extensive experiments on public benchmarking datasets demonstrate the superiority of DA-GCN, with significant performance improvements and better computational efficiency over a wide range of state-of-the-art methods. Our code is available at https://github.com/xizhu1022/DA-GCN.
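To make the attentive aggregation idea concrete, the following is a minimal, illustrative sketch (not the authors' implementation; all module and tensor names are hypothetical). It forms a target-behavior representation as a learnable, softmax-normalized mixture over the embeddings of its antecedent behaviors, mirroring the "learnable distributions over preceding behaviors" described above.

```python
# Hypothetical sketch of attention over antecedent-behavior messages.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AntecedentAttention(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # Scores each (target, antecedent) pair to weight the incoming messages.
        self.score = nn.Linear(2 * dim, 1)

    def forward(self, target: torch.Tensor, antecedents: torch.Tensor) -> torch.Tensor:
        # target:      (batch, dim)     embedding of the follow-up behavior
        # antecedents: (batch, k, dim)  messages distilled from the k directed
        #                               edges arriving from preceding behaviors
        k = antecedents.size(1)
        expanded = target.unsqueeze(1).expand(-1, k, -1)                    # (batch, k, dim)
        logits = self.score(torch.cat([expanded, antecedents], -1)).squeeze(-1)  # (batch, k)
        weights = F.softmax(logits, dim=-1)                                 # learnable distribution
        return torch.einsum("bk,bkd->bd", weights, antecedents)             # aggregated message

# Usage: e.g., page-view and add-to-cart edges preceding a purchase behavior.
attn = AntecedentAttention(dim=64)
target = torch.randn(32, 64)
antecedents = torch.randn(32, 2, 64)
aggregated = attn(target, antecedents)  # (32, 64)
```

In this sketch the attention weights play the role of the learned distribution over preceding behaviors; how the aggregated message is combined with the target embedding, and how the multi-task objective is defined, would follow the full paper.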