Abstract

Anomaly detection on attributed graphs is a crucial topic for practical applications. Existing methods suffer from the semantic mixture and imbalance issues because they optimize the model with a loss function designed solely for anomaly discrimination, focusing on anomaly discrimination while neglecting representation learning. Graph neural network based techniques tend to map adjacent nodes into a close semantic space. However, anomalous nodes commonly connect directly with numerous normal nodes, conflicting with the assortativity assumption. In addition, anomalous nodes are far fewer than normal nodes, leading to the imbalance problem. To address these challenges, this paper proposes decoupled self-supervised learning for anomaly detection (DSLAD), a self-supervised method that decouples anomaly discrimination from representation learning. DSLAD employs bilinear pooling and a masked autoencoder as its anomaly discriminators. By decoupling anomaly discrimination and representation learning, DSLAD constructs a balanced feature space in which nodes are more semantically discriminative and the imbalance issue is resolved. Experiments on six benchmark datasets demonstrate the effectiveness of DSLAD.
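
The abstract mentions a bilinear-pooling anomaly discriminator applied on top of separately learned node representations. The sketch below is one possible realization of that idea, not the paper's implementation: the class name `BilinearDiscriminator`, the use of a node-versus-context agreement score, and the choice of PyTorch are all assumptions made for illustration.

```python
# Illustrative sketch (not the authors' code): a bilinear-pooling anomaly
# discriminator over node embeddings produced by a separately trained
# (decoupled) representation encoder. The scoring formula is an assumption.
import torch
import torch.nn as nn


class BilinearDiscriminator(nn.Module):
    """Scores agreement between a node embedding and its local context embedding.
    Low agreement can be read as an anomaly signal."""

    def __init__(self, dim: int):
        super().__init__()
        self.bilinear = nn.Bilinear(dim, dim, 1)  # computes h_i^T W c_i + b

    def forward(self, node_emb: torch.Tensor, context_emb: torch.Tensor) -> torch.Tensor:
        # node_emb, context_emb: (num_nodes, dim)
        return torch.sigmoid(self.bilinear(node_emb, context_emb)).squeeze(-1)


# Usage: embeddings come from a frozen self-supervised encoder, so the
# discriminator is optimized independently of representation learning.
if __name__ == "__main__":
    num_nodes, dim = 8, 16
    node_emb = torch.randn(num_nodes, dim)     # placeholder encoder output
    context_emb = torch.randn(num_nodes, dim)  # e.g. mean of neighbor embeddings
    scorer = BilinearDiscriminator(dim)
    anomaly_scores = 1.0 - scorer(node_emb, context_emb)  # low agreement -> high score
    print(anomaly_scores)
```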
