Abstract
Graph neural networks (GNNs) are an emerging framework in the deep learning community. In most GNN applications, the graph topology of the data samples is provided with the dataset; specifically, the graph shift operator (GSO), which may be the adjacency matrix, the graph Laplacian, or one of their normalizations, is known a priori. However, we often have no knowledge of the ground-truth graph topology underlying real-world datasets. One example is extracting subject-invariant features from physiological electroencephalogram (EEG) signals to predict a cognitive task. Previous methods represent each electrode site as a node in the graph and connect the nodes in various hand-engineered ways to obtain a GSO, e.g., i) every pair of electrode sites is connected to form a complete graph, ii) each electrode site is connected to a fixed number of neighbors to form a k-nearest-neighbor graph, or iii) a pair of electrode sites is connected only if their Euclidean distance is within a heuristic threshold. In this paper, we overcome this limitation by parameterizing the GSO with a multi-head attention mechanism that explores the task-dependent functional neural connectivity between electrode sites, and we learn this graph topology in an unsupervised manner, jointly with the parameters of the graph convolutional kernels.
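To make the idea concrete, the sketch below shows one way a GSO can be parameterized by multi-head attention and used inside a graph convolution so that the topology and the convolutional weights are trained end to end. This is an illustrative assumption, not the authors' implementation; all module names, head counts, and tensor shapes (e.g., 62 electrodes, 128 features) are hypothetical.

```python
# Minimal sketch: learn a graph shift operator (GSO) over electrode nodes with
# multi-head attention, then use it as the shift in a graph convolution.
# Hypothetical module and parameter names; not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AttentionGSO(nn.Module):
    """Compute a learned GSO from per-electrode features via multi-head attention."""

    def __init__(self, in_features: int, num_heads: int = 4, head_dim: int = 16):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = head_dim
        self.query = nn.Linear(in_features, num_heads * head_dim)
        self.key = nn.Linear(in_features, num_heads * head_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, nodes, features) -- one feature vector per electrode site
        b, n, _ = x.shape
        q = self.query(x).view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k = self.key(x).view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5  # (b, heads, n, n)
        attn = scores.softmax(dim=-1)
        # Average the heads into a single node-by-node GSO (one possible choice).
        return attn.mean(dim=1)                                  # (b, n, n)


class LearnedTopologyGCN(nn.Module):
    """Graph convolution whose shift operator comes from AttentionGSO."""

    def __init__(self, in_features: int, out_features: int, num_heads: int = 4):
        super().__init__()
        self.gso = AttentionGSO(in_features, num_heads)
        self.weight = nn.Linear(in_features, out_features)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        s = self.gso(x)                      # learned GSO, trained with the kernels
        return F.relu(s @ self.weight(x))    # graph convolution: S X W


if __name__ == "__main__":
    x = torch.randn(8, 62, 128)              # batch of 8, 62 electrodes, 128 features
    layer = LearnedTopologyGCN(128, 64)
    print(layer(x).shape)                    # torch.Size([8, 62, 64])
```

Because the attention weights are produced from the input features, the learned connectivity can vary with the cognitive task, in contrast to a fixed complete, k-nearest-neighbor, or distance-thresholded graph.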