Abstract

Existing GNNs usually conduct layer-wise message propagation via 'full' aggregation of all neighborhood information, which is often sensitive to structural noise in the graph, such as incorrect or undesirably redundant edge connections. To overcome this issue, we propose to exploit Sparse Representation (SR) theory in GNNs and introduce Graph Sparse Neural Networks (GSNNs), which conduct sparse aggregation to select reliable neighbors for message aggregation. The GSNNs problem contains a discrete/sparse constraint that is difficult to optimize. We therefore develop a tight continuous relaxation model, Exclusive Group Lasso GNNs (EGLassoGNNs), for GSNNs, and derive an effective algorithm to optimize it. Experimental results on several benchmark datasets demonstrate the better performance and robustness of the proposed EGLassoGNNs model.
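To make the contrast between 'full' and sparse aggregation concrete, the following is a minimal illustrative sketch (not the paper's exact EGLasso formulation): edge weights are passed through an l1 soft-thresholding (proximal) operator, which drives weak, potentially noisy edges exactly to zero, so each node aggregates messages only from its surviving neighbors. All function names here are hypothetical.

```python
import numpy as np

def soft_threshold(w, lam):
    # Proximal operator of the l1 norm: shrinks small weights to exactly zero,
    # which is what makes the resulting aggregation sparse.
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def sparse_aggregate(X, A, lam=0.3):
    # Aggregate node features X over weighted adjacency A, keeping only
    # neighbors whose edge weight survives soft-thresholding.
    # (Illustrative sketch of sparse aggregation, not the paper's method.)
    W = soft_threshold(A, lam)                # prune weak / noisy edges
    row_sums = W.sum(axis=1, keepdims=True)
    row_sums[row_sums == 0] = 1.0             # guard isolated nodes
    W = W / row_sums                          # renormalize surviving weights
    return W @ X                              # sparse message aggregation

# Toy example: node 0 has one strong neighbor (node 1) and one weak,
# possibly spurious edge to node 2.
A = np.array([[0.0, 0.9, 0.2],
              [0.9, 0.0, 0.0],
              [0.2, 0.0, 0.0]])
X = np.eye(3)
H = sparse_aggregate(X, A, lam=0.3)
# The weak edge (0.2 < lam) is pruned, so node 0 aggregates only from node 1.
```

A 'full' aggregation would instead mix in the noisy edge to node 2; the thresholded version discards it entirely, which is the robustness property the abstract attributes to sparse aggregation.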
