Graph neural networks for molecular dynamics simulations.


Similar Papers
  • Research Article
  • Citations: 81
  • 10.1016/j.neunet.2018.08.010
The Vapnik–Chervonenkis dimension of graph and recursive neural networks
  • Sep 1, 2018
  • Neural Networks
  • Franco Scarselli + 2 more

  • Conference Article
  • Citations: 171
  • 10.24963/ijcai.2021/353
UniGNN: a Unified Framework for Graph and Hypergraph Neural Networks
  • Aug 1, 2021
  • Jing Huang + 1 more

Hypergraph, an expressive structure with the flexibility to model higher-order correlations among entities, has recently attracted increasing attention from various research domains. Despite the success of Graph Neural Networks (GNNs) for graph representation learning, how to adapt powerful GNN variants directly to hypergraphs remains a challenging problem. In this paper, we propose UniGNN, a unified framework for interpreting the message passing process in graph and hypergraph neural networks, which generalizes common GNN models to hypergraphs. In this framework, meticulously designed architectures aimed at deepening GNNs can also be incorporated into hypergraphs with minimal effort. Extensive experiments demonstrate the effectiveness of UniGNN on multiple real-world datasets, where it outperforms state-of-the-art approaches by a large margin. In particular, on the DBLP dataset we increase the accuracy from 77.4% to 88.8% in the semi-supervised hypernode classification task. We further prove that the proposed message-passing-based UniGNN models are at most as powerful as the 1-dimensional Generalized Weisfeiler-Leman (1-GWL) algorithm in distinguishing non-isomorphic hypergraphs. Our code is available at https://github.com/OneForward/UniGNN.
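The unified message passing described in this abstract can be pictured as a two-stage aggregation: hyperedges first pool their member vertices, then vertices pool their incident hyperedges, with an ordinary graph as the special case where every hyperedge contains exactly two vertices. The sketch below is an illustrative reconstruction under simple mean-aggregation assumptions, not the authors' code; all names are hypothetical.

```python
import numpy as np

def unignn_layer(X, hyperedges, W):
    """Hypothetical UniGNN-style layer (illustrative sketch only).

    Stage 1: each hyperedge aggregates its member vertices (mean).
    Stage 2: each vertex aggregates its incident hyperedges (mean + ReLU).
    """
    n = X.shape[0]
    # Stage 1: hyperedge embeddings as the mean of member vertex features.
    H = np.stack([X[list(e)].mean(axis=0) for e in hyperedges])
    # Stage 2: each vertex averages the transformed embeddings of the
    # hyperedges that contain it.
    out = np.zeros_like(X @ W)
    counts = np.zeros(n)
    for j, e in enumerate(hyperedges):
        for v in e:
            out[v] += H[j] @ W
            counts[v] += 1
    counts[counts == 0] = 1.0  # isolated vertices keep a zero embedding
    return np.maximum(out / counts[:, None], 0.0)  # ReLU

# Toy hypergraph: 4 vertices, one 3-vertex hyperedge and one 2-vertex edge.
X = np.eye(4)
hyperedges = [{0, 1, 2}, {2, 3}]
W = np.eye(4)
Y = unignn_layer(X, hyperedges, W)
```

Because the pairwise edge {2, 3} is handled by the same two-stage rule as the 3-vertex hyperedge, the same layer runs unchanged on graphs and hypergraphs, which is the point of the unified view.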

  • Research Article
  • 10.1088/2632-2153/addfaa
Interpretation of chemical reaction yields with graph neural additive network
  • Jun 10, 2025
  • Machine Learning: Science and Technology
  • Youngchun Kwon + 3 more

Prediction of chemical yields is crucial for exploring untapped chemical reactions and optimizing synthetic pathways for targeted compounds. Recently, graph neural networks have proven successful in achieving high predictive accuracy. However, they remain intrinsically black-box models, offering limited interpretability. Understanding how each reaction component contributes to the yield of a chemical reaction can help identify critical factors driving the success or failure of reactions, thereby potentially revealing opportunities for yield optimization. In this study, we present a novel method for interpretable chemical reaction yield prediction, which represents the yield of a chemical reaction as a simple summation of component-wise contributions from individual reaction components. To build an interpretable prediction model, we introduce a graph neural additive network architecture, wherein shared neural networks process individual reaction components in an input reaction while leveraging a reaction-level embedding to derive their respective contributions. The predicted yield is obtained by summing these component-wise contributions. The model is trained using a learning objective designed to effectively quantify the contributions of individual components by amplifying the influence of significant components and suppressing that of less influential components. The experimental results on benchmark datasets demonstrated that the proposed method achieved both high predictive accuracy and interpretability, making it suitable for practical use in synthetic pathway design for real-world applications.
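The additive structure described above, a predicted yield formed as a plain sum of component-wise contributions produced by one shared network, can be sketched as follows. This is a hypothetical illustration in which a small dense scorer stands in for the paper's graph encoders; all shapes and names are assumptions, not the published architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def component_contribution(component_vec, reaction_vec, W1, w2):
    """Hypothetical shared scorer: one hidden layer over the concatenation
    of a component embedding and the reaction-level embedding."""
    h = np.maximum(np.concatenate([component_vec, reaction_vec]) @ W1, 0.0)
    return float(h @ w2)

def predict_yield(component_vecs, W1, w2):
    # One simple choice of reaction-level embedding: mean of components.
    reaction_vec = np.mean(component_vecs, axis=0)
    contribs = [component_contribution(c, reaction_vec, W1, w2)
                for c in component_vecs]
    # The prediction is literally the sum, so each term is directly
    # readable as that component's contribution to the yield.
    return sum(contribs), contribs

d, hidden = 8, 16
W1 = rng.normal(size=(2 * d, hidden))
w2 = rng.normal(size=hidden)
components = rng.normal(size=(3, d))  # e.g. reactant, reagent, catalyst
y_hat, contribs = predict_yield(components, W1, w2)
```

The interpretability claim rests on this structure: since no interaction terms are added after the sum, ranking `contribs` directly identifies the components the model treats as driving the yield.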

  • Research Article
  • Citations: 103
  • 10.1093/bib/bbab041
ATSE: a peptide toxicity predictor by exploiting structural and evolutionary information based on graph neural network and attention mechanism.
  • Apr 5, 2021
  • Briefings in Bioinformatics
  • Lesong Wei + 4 more

Peptides have recently emerged as promising therapeutic agents against various diseases. For both research and safety-regulation purposes, it is highly important to develop computational methods that accurately predict the potential toxicity of peptides within the vast number of candidates. In this study, we proposed ATSE, a peptide toxicity predictor that exploits structural and evolutionary information based on graph neural networks and an attention mechanism. More specifically, it consists of four modules: (i) a sequence processing module for converting peptide sequences to molecular graphs and evolutionary profiles, (ii) a feature extraction module designed to learn discriminative features from graph structural information and evolutionary information, (iii) an attention module employed to optimize the features and (iv) an output module determining whether a peptide is toxic or non-toxic, using the optimized features from the attention module. Comparative studies demonstrate that the proposed ATSE significantly outperforms all other competing methods. We found that structural information is complementary to evolutionary information, effectively improving predictive performance. Importantly, the data-driven features learned by ATSE can be interpreted and visualized, providing additional information for further analysis. Moreover, we present a user-friendly online computational platform implementing ATSE, available at http://server.malab.cn/ATSE. We expect it to be a powerful and useful tool for interested researchers.

  • Research Article
  • Citations: 3
  • 10.1109/access.2021.3050541
ATPGNN: Reconstruction of Neighborhood in Graph Neural Networks With Attention-Based Topological Patterns
  • Jan 1, 2021
  • IEEE Access
  • Kehao Wang + 7 more

Graph Neural Networks (GNNs) have been applied in many fields of semi-supervised node classification for non-Euclidean data. However, some GNNs cannot make good use of the positive information carried by nodes far away from each central node during aggregation operations. These remote nodes with positive information can enhance the representation of the central node. Some GNNs also ignore the rich structural information in each central node's surroundings or in the entire network. Besides, most GNNs have a fixed architecture and cannot change their components to adapt to different tasks. In this article, we propose a semi-supervised learning platform, ATPGNN, with three variable components to overcome the above shortcomings. This novel model can fully adapt to different tasks by changing its components and supports inductive learning. The key idea is that we first create a high-order topology graph from the similarity of node structure information. Specifically, we reconstruct the relationships between nodes in a latent space obtained by network embedding. Second, we introduce graph representation learning methods to extract representations of remote nodes on the high-order topology graph. Third, we use network embedding methods to obtain graph structure information for each node. Finally, we combine the remote-node representations, graph structure information, and features of each node via an attention mechanism, and use them to learn node representations. Extensive experiments on real attributed networks demonstrate the superiority of the proposed model over traditional GNNs.
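The final fusion step the abstract describes, combining remote-node representations, structure information, and node features with attention, can be sketched in a simple dot-product form. This is an illustrative assumption, not ATPGNN's exact mechanism; the query vector and shapes are hypothetical.

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def attention_combine(sources, q):
    """Fuse several per-node feature vectors (e.g. remote-node
    representation, structure embedding, raw attributes) into one vector,
    weighting each source by a dot-product attention score against a
    hypothetical learned query q."""
    scores = np.array([q @ s for s in sources])
    alpha = softmax(scores)                       # weights sum to 1
    return sum(a * s for a, s in zip(alpha, sources)), alpha

rng = np.random.default_rng(2)
d = 8
remote, structure, attrs = rng.normal(size=(3, d))
fused, alpha = attention_combine([remote, structure, attrs],
                                 rng.normal(size=d))
```

The appeal of this kind of fusion for a platform with swappable components is that `sources` is just a list: adding or replacing an information source changes nothing else in the model.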

  • Research Article
  • Citations: 2
  • 10.1016/j.compbiolchem.2025.108532
AI-Driven molecule generation and bioactivity prediction: A multi-model approach combining VAE, graph and language-based neural networks.
  • Dec 1, 2025
  • Computational biology and chemistry
  • Latefa Oulladji + 4 more

  • Conference Article
  • 10.1109/ictai56018.2022.00068
Hybrid Structure Encoding Graph Neural Networks with Attention Mechanism for Link Prediction
  • Oct 1, 2022
  • Man Hu + 3 more

Graph Neural Networks (GNNs) have been widely applied to link prediction tasks. GNN models generally follow a message passing scheme to recursively aggregate the attribute features of neighbor nodes. In this scheme, the GNN does not explicitly consider the structural information of the graph, which is critical for link prediction. This has inspired researchers to encode this information to capture the location of nodes and their roles. However, current studies mostly adopt a single encoding method, which does not sufficiently capture the structural information. In addition, the structural and attribute features are not effectively integrated. In this paper, we propose a novel framework named Hybrid Structure Encoding Graph neural networks with Attention mechanism (HSEGA) for link prediction. HSEGA uses PageRank, betweenness centrality, and node labeling for hybrid encoding of structural information to capture the importance, centrality, and location of graph nodes. Subsequently, the structural and attribute features are integrated as inputs to a deep GNN to learn from both domains. Finally, we use an attention mechanism to adaptively incorporate the information. Extensive experiments on diverse benchmark datasets show that HSEGA consistently achieves state-of-the-art link prediction performance.
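Two of the three structural encodings named above, PageRank and betweenness centrality, are standard graph measures available in common graph libraries. A minimal sketch of computing them and concatenating the result with attribute features might look like the following (node labeling is omitted, and this is not the HSEGA implementation, just an illustration of the input construction):

```python
import networkx as nx
import numpy as np

def hybrid_structural_features(G):
    """Per-node structural features: [PageRank, betweenness centrality].
    A simple stand-in for the hybrid structural encoding described above."""
    pr = nx.pagerank(G)
    bc = nx.betweenness_centrality(G)
    nodes = sorted(G.nodes())
    return np.array([[pr[v], bc[v]] for v in nodes])

G = nx.karate_club_graph()                 # small benchmark graph, 34 nodes
S = hybrid_structural_features(G)          # structural features (34 x 2)
X = np.eye(G.number_of_nodes())            # toy attribute features
XS = np.concatenate([X, S], axis=1)        # joint input for a GNN
```

The two measures capture different things, which is why the paper mixes them: PageRank reflects a node's global importance under random walks, while betweenness counts the shortest paths passing through it.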

  • Research Article
  • Citations: 2
  • 10.1088/2632-2153/ad979b
SICGNN: Structurally informed convolutional graph neural networks for protein classification
  • Nov 26, 2024
  • Machine Learning: Science and Technology
  • Yonghyun Lee + 3 more

Recently, graph neural networks (GNNs) have been widely used in various domains, including social networks, recommender systems, protein classification, molecular property prediction, and genetic networks. In bioinformatics and chemical engineering, considerable research is being actively conducted to represent molecules or proteins as graphs by conceptualizing atoms or amino acids as nodes and the relationships between nodes as edges. The overall structures of proteins and their interconnections are crucial for predicting and classifying their properties. However, as GNNs stack more layers to create deeper networks, the embeddings of different nodes may become excessively similar, causing an oversmoothing problem that reduces performance on downstream tasks. To avoid this, GNNs typically use a limited number of layers, which means they reflect only the local structure and neighborhood information rather than the global structure of the graph. Therefore, we propose a structurally informed convolutional GNN (SICGNN) that utilizes information expressing the overall topological structure of a protein graph during GNN training and prediction. By explicitly including information on the entire graph topology, the proposed model can utilize both local neighborhood and global structural information. We applied SICGNN to representative GNNs such as GraphSAGE, graph isomorphism network, and graph attention network, and confirmed performance improvements across various datasets. We also demonstrate the robustness of SICGNN using multiple stratified 10-fold cross-validations and various hyperparameter settings, and show that its accuracy is comparable to or better than that of existing GNN models.

  • Research Article
  • Citations: 19
  • 10.1016/j.ijar.2023.01.001
Graph neural networks induced by concept lattices for classification
  • Jan 9, 2023
  • International Journal of Approximate Reasoning
  • Mingwen Shao + 3 more

  • Research Article
  • Citations: 139
  • 10.1038/s41524-021-00543-3
Accurate and scalable graph neural network force field and molecular dynamics with direct force architecture
  • May 21, 2021
  • npj Computational Materials
  • Cheol Woo Park + 5 more

Recently, machine learning (ML) has been used to address the computational cost that has been limiting ab initio molecular dynamics (AIMD). Here, we present GNNFF, a graph neural network framework that directly predicts atomic forces from automatically extracted features of the local atomic environment that are translationally invariant but rotationally covariant with the atomic coordinates. We demonstrate that GNNFF not only achieves high force prediction accuracy and computational speed on various materials systems, but also accurately predicts the forces of a large MD system after being trained on forces obtained from a smaller system. Finally, we use our framework to perform an MD simulation of Li7P3S11, a superionic conductor, and show that the resulting Li diffusion coefficient is within 14% of that obtained directly from AIMD. The high performance exhibited by GNNFF can be easily generalized to study atomistic-level dynamics of other material systems.
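The diffusion coefficient mentioned at the end of this abstract is conventionally estimated from an MD trajectory via the Einstein relation, MSD(t) ≈ 2·dim·D·t, where MSD is the mean-squared displacement. A minimal sketch on a synthetic random walk (not the authors' pipeline; all names and parameters are illustrative):

```python
import numpy as np

def diffusion_coefficient(positions, dt, dim=3):
    """Estimate D from the slope of the mean-squared displacement,
    using the Einstein relation MSD(t) ~ 2*dim*D*t.
    positions: array of shape (n_steps, n_atoms, dim)."""
    disp = positions - positions[0]               # displacement from t=0
    msd = (disp ** 2).sum(axis=2).mean(axis=1)    # average over atoms
    t = np.arange(len(msd)) * dt
    slope = np.polyfit(t[1:], msd[1:], 1)[0]      # linear fit, skip t=0
    return slope / (2 * dim)

# Synthetic trajectory: independent Gaussian steps of known variance,
# for which the Einstein relation gives D = step_var / (2 * dt) per axis.
rng = np.random.default_rng(1)
dt, n_steps, n_atoms = 1.0, 2000, 200
steps = rng.normal(scale=0.1, size=(n_steps, n_atoms, 3))
traj = np.cumsum(steps, axis=0)
D = diffusion_coefficient(traj, dt)               # expect D near 0.005
```

In a real comparison like the one in the paper, the same estimator would be run on both the GNNFF-driven and the AIMD trajectories, and the two D values compared.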

  • Research Article
  • Citations: 23
  • 10.1016/j.inffus.2024.102748
Vul-LMGNNs: Fusing language models and online-distilled graph neural networks for code vulnerability detection
  • Oct 21, 2024
  • Information Fusion
  • Ruitong Liu + 6 more

  • Research Article
  • Citations: 9
  • 10.1016/j.jksuci.2023.101865
Enhancing source code retrieval with joint Bi-LSTM-GNN architecture: A comparative study with ChatGPT-LLM
  • Dec 14, 2023
  • Journal of King Saud University - Computer and Information Sciences
  • Nazia Bibi + 2 more

  • Supplementary Content
  • Citations: 1
  • 10.3389/frai.2025.1716706
Multimodal graph neural networks in healthcare: a review of fusion strategies across biomedical domains
  • Jan 9, 2026
  • Frontiers in Artificial Intelligence
  • Maria Vaida + 1 more

Graph Neural Networks (GNNs) have transformed multimodal healthcare data integration by capturing complex, non-Euclidean relationships across diverse sources such as electronic health records, medical imaging, genomic profiles, and clinical notes. This review synthesizes GNN applications in healthcare, highlighting their impact on clinical decision-making through multimodal integration, advanced fusion strategies, and attention mechanisms. Key applications include drug interaction and discovery, cancer detection and prognosis, clinical status prediction, infectious disease modeling, genomics, and the diagnosis of mental health and neurological disorders. Various GNN architectures demonstrate consistent applications in modeling both intra- and intermodal relationships. GNN architectures, such as Graph Convolutional Networks and Graph Attention Networks, are integrated with Convolutional Neural Networks (CNNs), transformer-based models, temporal encoders, and optimization algorithms to facilitate robust multimodal integration. Early, intermediate, late, and hybrid fusion strategies, enhanced by attention mechanisms like multi-head attention, enable dynamic prioritization of critical relationships, improving accuracy and interpretability. However, challenges remain, including data heterogeneity, computational demands, and the need for greater interpretability. Addressing these challenges presents opportunities to advance GNN adoption in medicine through scalable, transparent GNN models.

  • Research Article
  • Citations: 2
  • 10.1002/aidi.202500061
Graph Attention Neural Networks for Interpretable and Generalizable Prediction of Janus III–VI Van Der Waals Heterostructures
  • Jun 4, 2025
  • Advanced Intelligent Discovery
  • Yudong Shi + 7 more

Graph neural networks (GNNs) have been widely used in materials science due to their ability to process complex graph‐structured data by capturing the relationships between atoms, molecules, or crystal structures within materials. However, due to their lack of interpretability, GNNs act as "black box" models in most cases. In this work, by introducing an attention mechanism into the GNN, a graph attention neural network (GANN) model for 2D materials is proposed, which not only achieves accurate prediction performance but also offers interpretability via attention-mechanism analysis. Taking Janus III–VI van der Waals heterostructures as a representative case, the MAEs for predicting formation energy, lattice constants, PBE bandgap, and HSE06 bandgap are 0.009 eV, 0.003 Å, 0.087 eV, and 0.123 eV, respectively. Remarkably, the GANN model shows outstanding generalization ability: it achieves accurate predictions from guessed input structures, without full structural relaxation to the ground state, for Janus III–VI vdW heterostructures. Furthermore, the integrated attention mechanisms enable qualitative analysis of the contributions of the nearest neighboring atoms, endowing the GNN model with enhanced interpretability. Our findings offer novel perspectives for AI‐driven 2D material design by establishing an optimal balance between predictive accuracy and model interpretability in GNN approaches.

  • Research Article
  • Citations: 23
  • 10.1016/j.eswa.2021.114655
Node classification using kernel propagation in graph neural networks
  • Feb 4, 2021
  • Expert Systems with Applications
  • Sakthi Kumar Arul Prakash + 1 more
