Getting NBA Shots in Context: Analysing Basketball Shots with Graph Embeddings
Abstract: Evaluating the quality of shots in basketball is crucial and requires considering the context in which they are taken. We introduce a graph neural network that processes a graph built from player and ball tracking data to compute expected shot quality, and we evaluate this model against other models with a focus on calibration. Message passing for spatial and temporal features is separated, and an attention mechanism is implemented, making the graph neural network interpretable. We use GNNExplainer to further show the importance of node features. To demonstrate possible practical applications, we analyse the graph neural network's embeddings in different situations, such as the mean of all player predictions or the similarity between created shots, and compare this to existing methods.
- Conference Article
- 10.23919/apnoms56106.2022.9919973
- Sep 28, 2022
The Internet of Vehicles (IoV) is an emerging paradigm: a distributed network of vehicles equipped with sensors, actuators, technologies, and applications that connect and exchange data with each other over the Internet. The primary goal of IoV is to provide a vehicular platform that enables better communication and Quality of Service (QoS) for vehicles, pedestrians, and roadside infrastructure in real time through Vehicle-to-Vehicle (V2V), Vehicle-to-Pedestrian (V2P), Vehicle-to-Infrastructure (V2I), Vehicle-to-Network (V2N), and Vehicle-to-Cloud (V2C) channels. However, the growing number of vehicular services poses serious challenges for researchers: real-time traffic forecasting, service placement, security, reliability, and routing. The primary focus of this work is the challenge of multi-regional forecasting of multi-class traffic. Traffic forecasting models enable proactive systems by providing real-time, accurate predictions, and they can explore traffic densities over spatial and temporal domains for various vehicle types. However, the current literature cannot forecast multi-region and multi-class vehicle traffic at the same time. This study aims to enable a proactive platform that makes decisions based on an integrated Graph Neural Network (GNN) and Gated Recurrent Unit (GRU) traffic forecasting model, i.e., a Spatio-Temporal GNN (STGNN). The STGNN-based model uses the GNN and GRU to explore spatial and temporal features of varying multi-class vehicular traffic densities. In the GNN, spatial data consisting of multi-class traffic densities are used for feature extraction, yielding graph embeddings. In the GRU, these graph embeddings are used for temporal feature extraction. This approach enables the forecasting of multi-class vehicular traffic densities and the proactiveness of an IoV platform.
In addition, the performance results show that an intelligent platform can be built upon the proposed traffic forecasting model that is capable of inspecting complex and nonlinear traffic accurately.
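The two-stage pipeline described above, a GNN extracting spatial embeddings from per-region traffic densities that feed a GRU for temporal modelling, can be illustrated with a deliberately tiny scalar sketch. This is not the paper's STGNN: the mean-aggregation layer, the fixed weights, and the traffic numbers are all invented for illustration.

```python
import math

def gnn_layer(features, adj, w):
    # Spatial step: mean-aggregate each region's neighbor densities,
    # then apply a scalar linear map with a tanh nonlinearity.
    n = len(features)
    out = []
    for i in range(n):
        nbrs = [features[j] for j in range(n) if adj[i][j]] or [features[i]]
        agg = sum(nbrs) / len(nbrs)
        out.append(math.tanh(w * agg))
    return out

def gru_step(h, x, wz, wr, wh):
    # Temporal step: minimal scalar GRU cell (update gate z, reset gate r,
    # candidate state h_tilde), applied per region.
    sig = lambda v: 1.0 / (1.0 + math.exp(-v))
    z = sig(wz * (x + h))
    r = sig(wr * (x + h))
    h_tilde = math.tanh(wh * (x + r * h))
    return (1 - z) * h + z * h_tilde

# Toy run: 3 regions on a triangle graph, 4 time steps of traffic densities.
adj = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
series = [[0.2, 0.5, 0.9], [0.3, 0.6, 0.8], [0.4, 0.4, 0.7], [0.5, 0.3, 0.6]]
h = [0.0, 0.0, 0.0]
for snapshot in series:
    emb = gnn_layer(snapshot, adj, w=0.8)  # spatial graph embedding
    h = [gru_step(h[i], emb[i], 0.5, 0.5, 0.9) for i in range(3)]  # temporal update
print([round(v, 3) for v in h])
```

The final hidden state per region would be read out by a regression head in a real forecaster; here it simply shows how the graph embeddings flow into the recurrent update.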
- Conference Article
- 10.1109/bigdata50022.2020.9378189
- Dec 10, 2020
In recent years, graph neural networks have been widely used, and attention mechanisms have been introduced to make them more broadly applicable. Both GAT and AGNN show that the attention mechanism plays an important role in graph neural networks. Attention algorithms such as GAT and AGNN directly use a self-learned variable to take a dot product after computing the connection (or similarity) between node and neighbor features, without further processing of the result, and finally obtain an aggregation of neighbor information. We propose a cosine-similarity distance pruning algorithm based on the graph attention mechanism (CDP-GA) to optimize the attention matrix between nodes and their adjacent nodes. By computing the cosine similarity between node features and neighbor features (the features here are obtained by a linear transformation), the similarity of nodes is treated as the distance between nodes (i.e., the weight of edges), and we assume that the degree of information aggregation is inversely proportional to the distance between nodes (similar to the heat conduction formula). The method prunes a node's neighborhood according to cosine similarity to obtain the final attention coefficient matrix. In this way, the attention mechanism in the graph neural network is further refined, and the loss incurred when aggregating neighbor information is reduced. In experiments on three datasets, our model is compared with GAT, AGNN, and related graph neural network algorithms, and it outperforms them on all three datasets.
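The pruning scheme this abstract describes, cosine similarity between linearly transformed features used as an edge weight, neighborhood pruning, then normalized attention coefficients, can be sketched in pure Python. This is an illustrative reading of the abstract, not the authors' code; the threshold `tau`, the self-loop fallback, and the toy features are assumptions.

```python
import math

def cosine(u, v):
    # Cosine similarity between two feature vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def pruned_attention(feats, neighbors, tau=0.2):
    # feats: node id -> (already linearly transformed) feature vector.
    # neighbors: node id -> list of neighbor ids.
    # Keep only neighbors whose similarity exceeds tau, then softmax-normalize
    # the surviving similarities into attention coefficients.
    attn = {}
    for i, nbrs in neighbors.items():
        scored = [(j, cosine(feats[i], feats[j])) for j in nbrs]
        kept = [(j, s) for j, s in scored if s > tau] or [(i, 1.0)]  # self-loop fallback
        z = sum(math.exp(s) for _, s in kept)
        attn[i] = {j: math.exp(s) / z for j, s in kept}
    return attn

feats = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [-1.0, 0.2]}
attn = pruned_attention(feats, {0: [1, 2]})
print(attn[0])  # the dissimilar node 2 is pruned; all weight flows to node 1
```

Pruning before normalization is what distinguishes this from plain GAT-style attention: dissimilar neighbors contribute nothing, rather than a small softmax share.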
- Research Article
- 10.1109/jbhi.2022.3195066
- Oct 1, 2022
- IEEE Journal of Biomedical and Health Informatics
In recent years, depression has become an increasingly serious problem globally. Previous studies of automatic depression recognition based on functional near-infrared spectroscopy (fNIRS) or other brain imaging techniques have shown potential as auxiliary diagnosis methods that assist medical professionals. Recently, some studies have found that, besides using the data directly (temporal data), functional connectivity among channels (spatial data) can also be effective. In this paper, we propose a method based on Graph Neural Networks (GNNs) that combines both temporal and spatial features of fNIRS data for automatic depression recognition. Specifically, fNIRS data from 96 subjects were collected and pre-processed. Basic statistical metrics of each channel were extracted as temporal features, and channel connectivity (coherence and correlation) was calculated as spatial features. Point-biserial analysis was conducted on these features and the depression labels as a data-driven motivation. For classification, we treated each subject's data as a graph, with temporal features as node features and spatial features as edge weights. The graphs were fed into GNNs for training and testing. Experimental results showed that our GNN-based methods achieved the best depression recognition performance compared with classical machine-learning methods in accuracy, F1 score, and precision, with an F1-score improvement of over 10%.
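The graph construction described here, per-channel temporal statistics as node features and inter-channel correlation as edge weights, can be sketched as follows. The two toy channels and the choice of mean and standard deviation as the statistics are assumptions for illustration.

```python
import math

def channel_stats(series):
    # Temporal node features for one fNIRS channel: mean and standard deviation.
    m = sum(series) / len(series)
    sd = math.sqrt(sum((x - m) ** 2 for x in series) / len(series))
    return [m, sd]

def pearson(a, b):
    # Spatial edge weight: Pearson correlation between two channels.
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb) if va and vb else 0.0

# Toy subject with two channels; ch2 is ch1 shifted by a constant,
# so their correlation is exactly 1.0.
channels = {"ch1": [0.1, 0.4, 0.3, 0.8], "ch2": [0.2, 0.5, 0.4, 0.9]}
nodes = {c: channel_stats(s) for c, s in channels.items()}
edges = {("ch1", "ch2"): pearson(channels["ch1"], channels["ch2"])}
print(nodes, edges)
```

One such graph per subject, with node features and weighted edges, is exactly the structure a GNN classifier consumes.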
- Research Article
- 10.1016/j.commatsci.2024.113358
- Sep 10, 2024
- Computational Materials Science
SGNN-T: Space graph neural network coupled transformer for molecular property prediction
- Research Article
- 10.1109/tnnls.2021.3055147
- Aug 1, 2022
- IEEE Transactions on Neural Networks and Learning Systems
Knowledge graph (KG) embedding aims to learn embedding representations that retain the inherent structure of KGs. Graph neural networks (GNNs), as an effective graph representation technique, have shown impressive performance in learning graph embeddings. However, KGs have an intrinsic property of heterogeneity, containing various types of entities and relations. How to handle such complex graph data and aggregate multiple types of semantic information simultaneously is a critical issue. In this article, a novel heterogeneous GNN framework based on an attention mechanism is proposed. Specifically, the neighbor features of an entity are first aggregated under each relation-path. The importance of different relation-paths is then learned through the relation features. Finally, the relation-path-based features, weighted by the learned values, are aggregated to generate the embedding representation. Thus, the proposed method not only aggregates entity features from different semantic aspects but also allocates appropriate weights to them, capturing various types of semantic information and selectively aggregating informative features. Experimental results on three real-world KGs demonstrate superior performance compared with several state-of-the-art methods.
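The three-step aggregation this abstract outlines, per-relation-path neighbor aggregation, learned relation-path weights, then a weighted combination, might look like this in miniature. The mean aggregator, the softmax over per-relation scores, and the toy features are illustrative assumptions, not the paper's exact formulation.

```python
import math

def hetero_aggregate(relation_nbrs, relation_scores):
    # relation_nbrs: relation-path -> list of neighbor feature vectors.
    # relation_scores: relation-path -> learned importance score.
    # Step 1: mean-aggregate neighbor features under each relation-path.
    per_rel = {}
    for rel, nbrs in relation_nbrs.items():
        dim = len(nbrs[0])
        per_rel[rel] = [sum(v[d] for v in nbrs) / len(nbrs) for d in range(dim)]
    # Step 2: softmax over relation scores gives relation-path weights.
    z = sum(math.exp(s) for s in relation_scores.values())
    weights = {rel: math.exp(s) / z for rel, s in relation_scores.items()}
    # Step 3: weighted sum of relation-path aggregates -> entity embedding.
    dim = len(next(iter(per_rel.values())))
    return [sum(weights[r] * per_rel[r][d] for r in per_rel) for d in range(dim)]

emb = hetero_aggregate(
    {"authored": [[1.0, 0.0], [0.0, 1.0]], "cites": [[2.0, 2.0]]},
    {"authored": 0.0, "cites": 0.0},  # equal scores -> weight 0.5 each
)
print(emb)
```

With unequal scores, the softmax would shift weight toward the more informative relation-path, which is the selective aggregation the abstract emphasizes.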
- Conference Article
- 10.24963/ijcai.2021/353
- Aug 1, 2021
The hypergraph, an expressive structure with the flexibility to model higher-order correlations among entities, has recently attracted increasing attention from various research domains. Despite the success of Graph Neural Networks (GNNs) for graph representation learning, adapting powerful GNN variants directly to hypergraphs remains a challenging problem. In this paper, we propose UniGNN, a unified framework for interpreting the message passing process in graph and hypergraph neural networks, which can generalize general GNN models to hypergraphs. In this framework, meticulously designed architectures aimed at deepening GNNs can also be incorporated into hypergraphs with minimal effort. Extensive experiments demonstrate the effectiveness of UniGNN on multiple real-world datasets, where it outperforms state-of-the-art approaches by a large margin. In particular, on the DBLP dataset, we increase the accuracy from 77.4% to 88.8% in the semi-supervised hypernode classification task. We further prove that the proposed message-passing-based UniGNN models are at most as powerful as the 1-dimensional Generalized Weisfeiler-Leman (1-GWL) algorithm in terms of distinguishing non-isomorphic hypergraphs. Our code is available at https://github.com/OneForward/UniGNN.
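The unified message passing that UniGNN generalizes to hypergraphs is a two-stage scheme: aggregate node features into each hyperedge, then update each node from its incident hyperedges. A minimal sketch with mean aggregation at both stages follows; the paper's framework admits other aggregators, and scalar features are used here only for brevity.

```python
def unignn_layer(node_feats, hyperedges):
    # Stage 1: aggregate node features into each hyperedge (mean).
    edge_feats = {}
    for e, members in hyperedges.items():
        edge_feats[e] = sum(node_feats[v] for v in members) / len(members)
    # Stage 2: update each node from the hyperedges containing it (mean).
    out = {}
    for v in node_feats:
        incident = [edge_feats[e] for e, m in hyperedges.items() if v in m]
        out[v] = sum(incident) / len(incident) if incident else node_feats[v]
    return out

# Toy hypergraph: hyperedge e1 covers nodes {0, 1}, e2 covers {1, 2}.
feats = {0: 1.0, 1: 3.0, 2: 5.0}
h = unignn_layer(feats, {"e1": [0, 1], "e2": [1, 2]})
print(h)
```

When every hyperedge has exactly two members, stage 1 plus stage 2 collapses to ordinary pairwise neighbor aggregation, which is the sense in which the scheme unifies graph and hypergraph message passing.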
- Research Article
- 10.1109/tnnls.2021.3120100
- Aug 1, 2023
- IEEE Transactions on Neural Networks and Learning Systems
Compact representation of graph data is a fundamental problem in pattern recognition and machine learning. Recently, graph neural networks (GNNs) have been widely studied for graph-structured data representation and learning tasks, such as graph semi-supervised learning, clustering, and low-dimensional embedding. In this article, we present graph propagation-embedding networks (GPENs), a new model for graph-structured data representation and learning. GPENs are mainly motivated by 1) a revisiting of traditional graph propagation techniques for context-aware feature representation of graph nodes and 2) recent studies on deep graph embedding and neural network architectures. GPENs integrate feature propagation on the graph and low-dimensional embedding simultaneously into a unified network using a novel propagation-embedding architecture. GPENs have three main advantages. First, GPENs can be well motivated and explained from the perspectives of feature propagation and deep learning architecture. Second, the equilibrium representation of the propagation-embedding operation in GPENs has both exact and approximate formulations, each with a simple closed-form solution, which guarantees the compactness and efficiency of GPENs. Third, GPENs naturally extend to multiple GPENs (M-GPENs) to address data with multiple graph structures. Experiments on various semi-supervised learning tasks on several benchmark datasets demonstrate the effectiveness and benefits of the proposed GPENs and M-GPENs.
- Research Article
- 10.1016/j.neunet.2018.08.010
- Sep 1, 2018
- Neural Networks
The Vapnik–Chervonenkis dimension of graph and recursive neural networks
- Conference Article
- 10.24963/ijcai.2024/259
- Aug 1, 2024
Graph Neural Networks (GNNs) are powerful for graph embedding learning, but their performance has been shown to degrade heavily under adversarial attacks. Deep graph structure learning (GSL) defends against attacks by jointly learning the graph structure and the graph embedding, typically for node classification. Label supervision is expensive in real-world applications, so unsupervised GSL is more challenging and remains less studied. To fill this gap, this paper proposes a new unsupervised GSL method, the unsupervised property GNN (UPGNN). UPGNN first refines the graph structure by exploiting the properties of low rank, sparsity, and feature smoothness. It then employs a graph mutual information loss to learn the graph embedding by maximizing its correlation with the refined graph. UPGNN learns graph structure and embedding without label supervision and can thus be applied to various downstream tasks. We further propose Accelerated UPGNN (AUPGNN) to reduce computational complexity, providing an efficient alternative to UPGNN. Extensive experiments on node classification and clustering demonstrate the effectiveness of the proposed method over state-of-the-art baselines, especially under heavy perturbation.
- Research Article
- 10.1016/j.compbiolchem.2025.108532
- Dec 1, 2025
- Computational biology and chemistry
AI-Driven molecule generation and bioactivity prediction: A multi-model approach combining VAE, graph and language-based neural networks.
- Conference Article
- 10.1109/bcd54882.2022.9900611
- Aug 4, 2022
We investigate works in the propagation-based fake news detection domain, which has recently sought to improve performance through Graph Neural Networks (GNNs). Existing works generally argue that GNNs give results superior to classic graph-based methods. We agree, given that GNNs gain superior performance by leveraging node features, but we argue that existing works have not recognized that the expressivity of GNNs is limited and bounded by node features. They do not acknowledge that, by utilizing GNNs, they implicitly assume node features are strongly correlated with node labels. There is evidence that the node features that have been employed do not necessarily correlate with node labels. Rather than building on a profound theoretical motivation, existing works have empirically observed that focusing on node features with a strong feature-label correlation can increase predictive capability. We argue that this is a sub-optimal way to view the problem: finding node features based on correlation is neither practical nor effective. Our first contribution is shifting readers from a node-level view, i.e., correlating node features with labels, to a graph-level view. In the graph-level view, we exploit the relationship between graph isomorphism and GNN expressivity, which can be used to understand and interpret the relation between node features and GNN expressivity. We conduct a wide range of experiments from both the node-level and graph-level views and find that the graph-level view is more interpretable and matches the results more strongly. Further, we gain insights into node features that would not be obtainable from a node-level view. For a fair and comprehensive analysis of node features, we built a unified dataset that includes a wide range of node features.
Our results indicate that, as we improve model accuracy from the graph-level view, the models' generalizability decreases. We provide a hypothesis for this performance trade-off on the basis of the graph-level view. Our results and insights call for a much broader discussion on whether any sort of filtering method is effective. We conclude by offering readers possible solutions for finding harmony between node features and GNN expressivity.
- Research Article
- 10.1021/acs.jcim.2c01564
- Mar 31, 2023
- Journal of Chemical Information and Modeling
The n-octanol/buffer solution distribution coefficient at pH 7.4 (log D7.4) is an indicator of lipophilicity, and it influences a wide variety of absorption, distribution, metabolism, excretion, and toxicity (ADMET) properties and the druggability of compounds. In log D7.4 prediction, graph neural networks (GNNs) can uncover subtle structure-property relationships (SPRs) by automatically extracting features from molecular graphs, but their performance is often limited by the small size of available datasets. Herein, we present a transfer learning strategy, pretraining on computational data and then fine-tuning on experimental data (PCFE), to fully exploit the predictive potential of GNNs. PCFE works by pretraining a GNN model on 1.71 million computational log D values (low-fidelity data) and then fine-tuning it on 19,155 experimental log D7.4 values (high-fidelity data). Experiments with three GNN architectures (graph convolutional network (GCN), graph attention network (GAT), and Attentive FP) demonstrated the effectiveness of PCFE in improving GNNs for log D7.4 prediction. Moreover, the optimal PCFE-trained GNN model (cx-Attentive FP, test-set R2 = 0.909) outperformed four strong descriptor-based models (random forest (RF), gradient boosting (GB), support vector machine (SVM), and extreme gradient boosting (XGBoost)). The robustness of the cx-Attentive FP model was also confirmed by evaluating models with different training data sizes and dataset splitting strategies. We therefore developed a webserver and defined the applicability domain for this model. The webserver (http://tools.scbdd.com/chemlogd/) provides free log D7.4 prediction services. In addition, the important descriptors for log D7.4 were detected by the Shapley additive explanations (SHAP) method, and the most relevant substructures were identified by the attention mechanism.
Finally, the matched molecular pair analysis (MMPA) was performed to summarize the contributions of common chemical substituents to log D7.4, including a variety of hydrocarbon groups, halogen groups, heteroatoms, and polar groups. In conclusion, we believe that the cx-Attentive FP model can serve as a reliable tool to predict log D7.4 and hope that pretraining on low-fidelity data can help GNNs make accurate predictions of other endpoints in drug discovery.
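The PCFE recipe, fitting on plentiful low-fidelity computational values and then continuing training on scarce high-fidelity experimental values at a smaller learning rate, can be illustrated with a deliberately tiny surrogate: a one-parameter linear model in place of a GNN, with invented descriptor/log D pairs. Only the pretrain-then-fine-tune schedule mirrors the abstract; everything else is a stand-in.

```python
def train(w, data, lr, epochs):
    # One-parameter least-squares fit by SGD: prediction is w * x.
    for _ in range(epochs):
        for x, y in data:
            w -= lr * 2 * (w * x - y) * x  # gradient of (w*x - y)**2
    return w

# Hypothetical surrogate data: x = a molecular descriptor, y = log D.
computational = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]  # large, low-fidelity
experimental = [(1.0, 2.2), (2.0, 4.5)]               # small, high-fidelity

w = train(0.0, computational, lr=0.05, epochs=200)  # pretraining
w = train(w, experimental, lr=0.01, epochs=50)      # fine-tuning at a lower rate
print(round(w, 2))
```

Pretraining lands the parameter near the computational trend, and the low-rate fine-tune nudges it toward the experimental values without discarding what was learned, which is the intuition behind using low-fidelity data first.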
- Research Article
- 10.1002/aidi.202500061
- Jun 4, 2025
- Advanced Intelligent Discovery
Graph neural networks (GNNs) have been widely used in materials science due to their ability to process complex graph-structured data by capturing the relationships between atoms, molecules, or crystal structures within materials. However, owing to their lack of interpretability, GNNs act as "black box" models in most cases. In this work, by introducing the attention mechanism into a GNN, a graph attention neural network (GANN) model for 2D materials is proposed, which not only achieves accurate prediction performance but also offers interpretability via attention analysis. Taking Janus III–VI van der Waals heterostructures as a representative case, the MAEs for predicting formation energy, lattice constants, PBE bandgap, and HSE06 bandgap are 0.009 eV, 0.003 Å, 0.087 eV, and 0.123 eV, respectively. Remarkably, the GANN model shows outstanding generalization ability: it achieves accurate predictions from guessed input structures without full structural relaxation to the ground state for Janus III–VI vdW heterostructures. Furthermore, the integrated attention mechanism enables qualitative analysis of the contributions of the nearest-neighbor atoms, endowing the GNN model with enhanced interpretability. Our findings offer novel perspectives for AI-driven 2D material design by establishing an optimal balance between predictive accuracy and model interpretability in GNN approaches.
- Research Article
- 10.1016/j.est.2022.106437
- Dec 22, 2022
- Journal of Energy Storage
A novel graph-based framework for state of health prediction of lithium-ion battery
- Conference Article
- 10.1109/iccad51958.2021.9643549
- Nov 1, 2021
Graph Neural Networks (GNNs) have emerged as the state-of-the-art (SOTA) method for graph-based learning tasks. However, it remains prohibitively challenging to run GNN inference over large graph datasets, limiting their application to large-scale real-world tasks. While end-to-end joint optimization of GNNs and their accelerators is promising for boosting inference efficiency and expediting the design process, it is still underexplored due to the vast and distinct design spaces of GNNs and their accelerators. In this work, we propose G-CoS, a GNN and accelerator co-search framework that automatically searches for matched GNN structures and accelerators to maximize both task accuracy and acceleration efficiency. Specifically, G-CoS integrates two major enabling components: (1) a generic GNN accelerator search space applicable to various GNN structures and (2) a one-shot GNN and accelerator co-search algorithm that enables simultaneous, efficient search for optimal GNN structures and their matched accelerators. To the best of our knowledge, G-CoS is the first co-search framework for GNNs and their accelerators. Extensive experiments and ablation studies show that the GNNs and accelerators generated by G-CoS consistently outperform SOTA GNNs and GNN accelerators in both task accuracy and hardware efficiency, while requiring only a few hours for the end-to-end generation of the best-matched GNNs and accelerators.