Abstract

Graph Convolutional Networks (GCNs) have shown promise in recommendation systems. However, GCN models suffer from a critical issue known as the over-smoothing problem: as the number of layers increases, node representations become excessively similar and model performance degrades. Various strategies have been proposed to combat this issue, such as preserving node-specific information or introducing high-order neighbors. Unfortunately, these approaches overlook a crucial factor, Visual Overlap Strength (VOS), which measures the extent of neighborhood overlap between nodes in the graph and plays a significant role in the over-smoothing problem. This paper offers a comprehensive analysis of over-smoothing and introduces the concept of VOS between nodes. We present a novel solution, the Dynamic Adaptive Multi-view fusion GCN model with dilAted maSK convolution mechanism (DAMASK-GCN). This model dynamically adjusts the perceptual field of each node, captures high-order information, and alleviates the impact of VOS through a dilated mask convolution and an adaptive attention fusion mechanism. Extensive experiments on six popular recommendation system datasets demonstrate that DAMASK-GCN reduces over-smoothing and outperforms existing GCN-based recommendation models.
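The abstract describes VOS as the extent of neighborhood overlap between nodes in the graph. As a rough illustration only, the Python sketch below approximates such an overlap with a Jaccard-style measure over neighbor sets in the interaction graph; the function name `neighborhood_overlap` and the Jaccard formulation are assumptions for illustration, not the paper's actual definition of VOS.

```python
# Minimal sketch (not the authors' implementation): approximate the overlap
# strength between two nodes as the Jaccard similarity of their neighbor sets,
# an assumed proxy for the paper's Visual Overlap Strength (VOS).

from typing import Dict, Hashable, Set


def neighborhood_overlap(adj: Dict[Hashable, Set[Hashable]],
                         u: Hashable, v: Hashable) -> float:
    """Jaccard overlap of the neighbor sets of u and v (illustrative proxy for VOS)."""
    nu, nv = adj.get(u, set()), adj.get(v, set())
    if not nu and not nv:
        return 0.0
    return len(nu & nv) / len(nu | nv)


# Toy user-item interaction graph: users u1, u2 and items i1..i3.
adj = {
    "u1": {"i1", "i2"},
    "u2": {"i1", "i2", "i3"},
    "i1": {"u1", "u2"},
    "i2": {"u1", "u2"},
    "i3": {"u2"},
}

# High overlap (~0.67) suggests u1 and u2 share most neighbors, the situation
# in which repeated graph convolutions tend to over-smooth their embeddings.
print(neighborhood_overlap(adj, "u1", "u2"))
```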
