MAGNET: an open-source library for mesh agglomeration by graph neural networks

Abstract

We introduce MAGNET, an open-source Python library for mesh agglomeration in both two and three dimensions, based on Graph Neural Networks (GNNs). MAGNET serves as a comprehensive solution for training a variety of GNN models, integrating deep learning with other advanced algorithms, such as METIS and k-means, to facilitate mesh agglomeration and quality-metric computation. The library is introduced through its code structure and primary features. The GNN framework adopts a graph bisection methodology that exploits connectivity and geometric mesh information via SAGE convolutional layers, in line with the methodology proposed in (Antonietti and Manuzzi in J Comput Phys 452:110900, 2022; Antonietti et al. in Polytopal mesh agglomeration via geometrical deep learning for three-dimensional heterogeneous domains, arXiv:2406.10587, 2024). Additionally, the library incorporates reinforcement learning to enhance the accuracy and robustness of the models originally proposed in [1, 2] for predicting coarse partitions within a multilevel framework. A detailed tutorial guides the user through mesh agglomeration and the training of a GNN bisection model. We present several examples of mesh agglomeration performed by MAGNET, demonstrating the library’s applicability across various scenarios. Furthermore, the performance of the newly introduced models is compared with that of METIS and k-means, showing that the proposed GNN models are competitive in terms of partition quality and computational efficiency. Finally, we demonstrate the versatility of MAGNET’s interface through its integration with an open-source library implementing discontinuous Galerkin methods on polytopal grids for the numerical discretization of multiphysics differential problems.
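As a rough illustration of the graph-bisection idea behind the library, the sketch below treats mesh elements as graph nodes, applies one SAGE-style layer (each node mixes its own geometric feature with the mean of its neighbours' features), and splits the resulting scores at the median. This is a hand-written toy, not the MAGNET API; the 1-D centroid feature, the fixed weights, and the median split are illustrative assumptions.

```python
# Toy sketch of GNN-style mesh bisection (NOT the MAGNET API).
# Nodes = mesh elements, edges = shared faces; features = element centroids.

def sage_layer(features, adjacency, w_self=0.5, w_neigh=0.5):
    """One SAGE-style update: combine each node's feature with the
    mean of its neighbours' features (fixed illustrative weights)."""
    out = {}
    for node, feat in features.items():
        neigh = adjacency[node]
        mean_n = sum(features[n] for n in neigh) / len(neigh) if neigh else 0.0
        out[node] = w_self * feat + w_neigh * mean_n
    return out

def bisect(scores):
    """Split nodes into two agglomerates at the median of their scores."""
    ranked = sorted(scores, key=scores.get)
    half = len(ranked) // 2
    return set(ranked[:half]), set(ranked[half:])

# Four elements in a line; feature = x-coordinate of the element centroid.
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
features = {0: 0.0, 1: 1.0, 2: 2.0, 3: 3.0}

scores = sage_layer(features, adjacency)
left, right = bisect(scores)
print(left, right)  # {0, 1} {2, 3}: two contiguous agglomerates
```

In the actual library, the layer weights are learned (and refined with reinforcement learning in the multilevel setting) rather than fixed, and the bisection is applied recursively to produce the final agglomerated mesh.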

Similar Papers
  • Research Article
  • Cited 10 times
  • 10.1186/s12911-024-02450-1
Prediction of emergency department revisits among child and youth mental health outpatients using deep learning techniques
  • Feb 8, 2024
  • BMC Medical Informatics and Decision Making
  • Simran Saggu + 8 more

Background: The proportion of Canadian youth seeking mental health support from an emergency department (ED) has risen in recent years. As EDs typically address urgent mental health crises, revisiting an ED may represent unmet mental health needs. Accurate ED revisit prediction could aid early intervention and ensure efficient healthcare resource allocation. We examine the potential increased accuracy and performance of graph neural network (GNN) machine learning models compared to recurrent neural network (RNN), baseline conventional machine learning, and regression models for predicting ED revisit in electronic health record (EHR) data.
Methods: This study used EHR data for children and youth aged 4–17 seeking services at McMaster Children’s Hospital’s Child and Youth Mental Health Program outpatient service to develop and evaluate GNN and RNN models to predict whether a child/youth with an ED visit had an ED revisit within 30 days. GNN and RNN models were developed and compared against conventional baseline models. Model performance for GNN, RNN, XGBoost, decision tree, and logistic regression models was evaluated using F1 scores.
Results: The GNN model outperformed the RNN model by an F1-score increase of 0.0511 and the best performing conventional machine learning model by an F1-score increase of 0.0470. Precision, recall, receiver operating characteristic (ROC) curves, and positive and negative predictive values showed that the GNN model performed the best, and the RNN model performed similarly to the XGBoost model. Performance increases were more noticeable for recall and negative predictive value than for precision and positive predictive value.
Conclusions: This study demonstrates the improved accuracy and potential utility of GNN models in predicting ED revisits among children and youth, although model performance may not be sufficient for clinical implementation. Given the improvements in recall and negative predictive value, GNN models should be further explored to develop algorithms that can inform clinical decision-making in ways that facilitate targeted interventions, optimize resource allocation, and improve outcomes for children and youth.

  • Research Article
  • Cited 148 times
  • 10.1186/s40537-023-00876-4
A review of graph neural networks: concepts, architectures, techniques, challenges, datasets, applications, and future directions
  • Jan 16, 2024
  • Journal of Big Data
  • Bharti Khemani + 3 more

Deep learning has seen significant growth recently and is now applied to a wide range of conventional use cases, including graphs. Graph data provides relational information between elements and is a standard data format for various machine learning and deep learning tasks. Models that can learn from such inputs are essential for working with graph data effectively. This paper identifies nodes and edges within specific applications, such as text, entities, and relations, to create graph structures. Different applications may require various graph neural network (GNN) models. GNNs facilitate the exchange of information between nodes in a graph, enabling them to understand dependencies within the nodes and edges. The paper delves into specific GNN models like graph convolution networks (GCNs), GraphSAGE, and graph attention networks (GATs), which are widely used in various applications today. It also discusses the message-passing mechanism employed by GNN models and examines the strengths and limitations of these models in different domains. Furthermore, the paper explores the diverse applications of GNNs, the datasets commonly used with them, and the Python libraries that support GNN models. It offers an extensive overview of the landscape of GNN research and its practical implementations.
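The message-passing mechanism discussed in the review can be reduced to a few lines of plain Python. The sketch below is a generic sum-aggregation step under simplified assumptions (scalar features, no learned weights), not the API of any particular GNN library.

```python
def message_pass(h, edges):
    """One round of message passing: each node adds the sum of its
    neighbours' features (messages) to its own feature."""
    agg = {v: 0.0 for v in h}
    for u, v in edges:          # undirected: messages flow both ways
        agg[u] += h[v]
        agg[v] += h[u]
    return {v: h[v] + agg[v] for v in h}

h = {0: 1.0, 1: 2.0, 2: 3.0}    # initial scalar node features
edges = [(0, 1), (1, 2)]        # a 3-node path graph
updated = message_pass(h, edges)
print(updated)  # {0: 3.0, 1: 6.0, 2: 5.0}
```

GCN, GraphSAGE, and GAT differ mainly in how this aggregation is weighted: degree normalization, neighbour sampling with learned projections, or attention coefficients, respectively.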

  • Research Article
  • Cited 1 time
  • 10.1158/1538-7445.am2022-1922
Abstract 1922: Application of an interpretable graph neural network to predict gene expression signatures associated with tertiary lymphoid structures in histopathological images
  • Jun 15, 2022
  • Cancer Research
  • Ciyue Shen + 10 more

Background: Tertiary lymphoid structures (TLS) are vascularized lymphocyte aggregates in the tumor microenvironment (TME) that correlate with better patient outcomes. Previous studies identified a 12-chemokine gene expression signature associated with disease progression and the type and degree of TLS. These signatures could provide insight important for clinical decision making during pathologic evaluation, but predicting gene expression from whole slide images (WSI) may be impeded by low prediction accuracy and lack of interpretability. Here we report an artificial intelligence (AI)-based, state-of-the-art workflow to predict the 12-chemokine TLS gene signature from lung cancer WSI, and identify histological features relevant to model predictions. Methods: Models were trained using 538 cases of paired lung cancer WSI and mRNA-seq expression data (The Cancer Genome Atlas). Cell and tissue classifiers based on convolutional neural networks (CNN) were trained on WSI, and a graph neural network (GNN) model that leverages the relative spatial arrangement of the CNN-identified cells and tissues was used to predict gene expression. GNN predictions of TLS signature genes were compared with the predictions of models trained using hand-crafted, task-specific features (TLS feature models) describing the number, size, and cellular composition of identified TLS. The Pearson correlation coefficient was used to assess the accuracy of GNN and TLS feature model predictions. GNNExplainer [1], a tool that simultaneously identifies a subgraph and a subset of node features important for predictions, was applied to interpret the GNN model predictions. Results: GNN model predictions show reasonable accuracy: GNN models significantly predicted mRNA expression of all 12 genes (p < 0.05), and the predicted expression of six genes was moderately correlated with ground-truth measurements (Pearson r > 0.5).
The correlation of GNN predictions was higher than that of the TLS feature models for all 12 signature genes. The GNNExplainer identified relevant features including the mean and standard deviation of lymphocyte count, and the fraction of lymphocytes in cancer stroma. Subgraphs selected by the GNNExplainer focus on, but extend beyond, regions of human-annotated TLS objects, indicating that TLS may influence gene expression and the TME in regions beyond their immediate vicinity. Conclusion: Here, we show a comparison of two interpretable AI methods for the prediction of TLS-induced gene expression from WSI. The outperforming GNN-based approach is highly reproducible and accurate, predicting histopathology features relevant to TLS that may be used to inform patient prognosis and treatment. These methods could be applied to predict additional clinically relevant transcriptomic signatures. [1] Ying, R., et al. 2019. arXiv:1903.03894v4. Citation Format: Ciyue Shen, Collin Schlager, Deepta Rajan, Maryam Pouryahya, Mary Lin, Victoria Mountain, Ilan Wapinski, Amaro Taylor-Weiner, Benjamin Glass, Robert Egger, Andrew Beck. Application of an interpretable graph neural network to predict gene expression signatures associated with tertiary lymphoid structures in histopathological images [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2022; 2022 Apr 8-13. Philadelphia (PA): AACR; Cancer Res 2022;82(12_Suppl):Abstract nr 1922.
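The accuracy metric used above, the Pearson correlation coefficient, is straightforward to compute; here is a minimal reference implementation (illustrative, not the authors' code):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson correlation: covariance of the two samples divided by
    the product of their standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Predictions perfectly linear in the ground truth give r ≈ 1.0;
# the abstract's threshold for "moderate" correlation is r > 0.5.
r = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
print(r)  # ≈ 1.0
```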

  • Research Article
  • Cited 18 times
  • 10.1109/tpami.2023.3303431
Parallel and Distributed Graph Neural Networks: An In-Depth Concurrency Analysis.
  • May 1, 2024
  • IEEE Transactions on Pattern Analysis and Machine Intelligence
  • Maciej Besta + 1 more

Graph neural networks (GNNs) are among the most powerful tools in deep learning. They routinely solve complex problems on unstructured networks, such as node classification, graph classification, or link prediction, with high accuracy. However, both inference and training of GNNs are complex, and they uniquely combine the features of irregular graph processing with dense and regular computations. This complexity makes it very challenging to execute GNNs efficiently on modern massively parallel architectures. To alleviate this, we first design a taxonomy of parallelism in GNNs, considering data and model parallelism, and different forms of pipelining. Then, we use this taxonomy to investigate the amount of parallelism in numerous GNN models, GNN-driven machine learning tasks, software frameworks, or hardware accelerators. We use the work-depth model, and we also assess communication volume and synchronization. We specifically focus on the sparsity/density of the associated tensors, in order to understand how to effectively apply techniques such as vectorization. We also formally analyze GNN pipelining, and we generalize the established Message-Passing class of GNN models to cover arbitrary pipeline depths, facilitating future optimizations. Finally, we investigate different forms of asynchronicity, navigating the path for future asynchronous parallel GNN pipelines. The outcomes of our analysis are synthesized in a set of insights that help to maximize GNN performance, and a comprehensive list of challenges and opportunities for further research into efficient GNN computations. Our work will help to advance the design of future GNNs.

  • Conference Article
  • Cited 26 times
  • 10.1109/isdfs52919.2021.9486352
Watermarking Graph Neural Networks by Random Graphs
  • Jun 28, 2021
  • Xiangyu Zhao + 2 more

Many learning tasks require us to deal with graph data, which contains rich relational information among elements, leading to an increasing number of graph neural network (GNN) models being deployed in industrial products to improve the quality of service. However, they also raise challenges for model authentication. It is necessary to protect the ownership of GNN models, which motivates us to present a watermarking method for GNN models in this paper. In the proposed method, an Erdos-Renyi (ER) random graph with random node feature vectors and labels is randomly generated as a trigger and used to train the GNN to be protected together with the normal samples. During model training, the secret watermark is embedded into the label predictions of the ER graph nodes. During model verification, by activating a marked GNN with the trigger ER graph, the watermark can be reconstructed from the output to verify ownership. Since the ER graph is randomly generated, feeding it to a non-marked GNN yields random label predictions for the graph nodes, resulting in a low false-alarm rate for the proposed method. Experimental results have also shown that the performance of a marked GNN on its original task is not impaired. Moreover, the method is robust against model compression and fine-tuning, demonstrating its superiority and applicability.
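The trigger construction described above starts from an Erdos-Renyi random graph; below is a minimal sketch of sampling one, with random binary node labels. The graph size, edge probability, and seeds are illustrative assumptions, not the paper's actual settings.

```python
import random

def erdos_renyi(n, p, seed=0):
    """Sample G(n, p): keep each of the n*(n-1)/2 possible edges
    independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

n, p = 8, 0.3
edges = erdos_renyi(n, p)
rng = random.Random(1)
labels = [rng.randrange(2) for _ in range(n)]  # random binary node labels
print(len(edges), labels)
```

In the watermarking scheme, this graph (with random node features and these labels) is mixed into the training set, so a marked model reproduces the chosen labels when queried with the trigger, while an unmarked model predicts them essentially at random.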

  • Research Article
  • Cited 2 times
  • 10.1609/aaai.v36i6.20623
Algorithmic Concept-Based Explainable Reasoning
  • Jun 28, 2022
  • Proceedings of the AAAI Conference on Artificial Intelligence
  • Dobrik Georgiev + 4 more

Recent research on graph neural network (GNN) models successfully applied GNNs to classical graph algorithms and combinatorial optimisation problems. This has numerous benefits, such as allowing applications of algorithms when preconditions are not satisfied, or reusing learned models when sufficient training data is not available or can't be generated. Unfortunately, a key hindrance of these approaches is their lack of explainability, since GNNs are black-box models that cannot be interpreted directly. In this work, we address this limitation by applying existing work on concept-based explanations to GNN models. We introduce concept-bottleneck GNNs, which rely on a modification to the GNN readout mechanism. Using three case studies we demonstrate that: (i) our proposed model is capable of accurately learning concepts and extracting propositional formulas based on the learned concepts for each target class; (ii) our concept-based GNN models achieve comparative performance with state-of-the-art models; (iii) we can derive global graph concepts, without explicitly providing any supervision on graph-level concepts.

  • PDF Download Icon
  • Research Article
  • Cited 36 times
  • 10.1016/j.ress.2023.109341
Geometric deep learning for online prediction of cascading failures in power grids
  • Sep 1, 2023
  • Reliability Engineering & System Safety
  • Anna Varbella + 2 more

Past events have revealed that widespread blackouts are mostly a result of cascading failures in the power grid. Understanding the underlying mechanisms of cascading failures can help in developing strategies to minimize the risk of such events. Moreover, real-time detection of precursors to cascading failures will help operators take measures to prevent their propagation. Currently, the well-established probabilistic and physics-based models of cascading failures offer low computational efficiency, restricting their use to offline tools. In this work, we develop a data-driven methodology for online estimation of the risk of cascading failures. We utilize a physics-based cascading failure model to generate a cascading failure dataset considering different operating conditions and failure scenarios, thus obtaining a sample space covering a large set of power grid states that are labeled as safe or unsafe. We use the synthetic data to train deep learning architectures, namely Feed-forward Neural Networks (FNN) and Graph Neural Networks (GNN). With the development of GNNs, improved performance is achieved with graph-structured data, and GNNs can generalize to graphs of diverse sizes. A comparison between FNN and GNN is made and the GNNs' inductive capability is demonstrated via test grids. Furthermore, we apply transfer learning to improve the performance of a pre-trained GNN model on power grids not seen in the training process. The GNN model shows accuracy and balanced accuracy above 96% on selected test datasets not used in the training. Conversely, the FNN shows accuracy above 85% and balanced accuracy above 81% on test datasets unseen during training. Overall, the GNN model is successful in determining whether one or several simultaneous outages result in a critical grid state, under specific grid operating conditions.

  • Research Article
  • Cited 2 times
  • 10.1145/3691636
Graph-OPU: A Highly Flexible FPGA-Based Overlay Processor for Graph Neural Networks
  • Nov 18, 2024
  • ACM Transactions on Reconfigurable Technology and Systems
  • Enhao Tang + 7 more

Field-programmable gate arrays (FPGAs) are an ideal candidate for accelerating graph neural networks (GNNs). However, the FPGA redeployment process is time-consuming when updating or switching between diverse GNN models across different applications. Existing GNN processors eliminate the need for FPGA redeployment when switching between different GNN models. However, adapting matrix multiplication types by switching processing units decreases hardware utilization. In addition, the bandwidth of DDR limits further improvements in hardware performance. This article proposes a highly flexible FPGA-based overlay processor for GNN accelerations. Graph-OPU provides excellent flexibility and programmability for users, as the executable code of GNN models is automatically compiled and reloaded without requiring FPGA redeployment. First, we customize the compiler and instruction sets for the inference process of different GNN models. Second, we customize the datapath and optimize the data format in the microarchitecture to fully leverage the advantages of high bandwidth memory (HBM). Third, we design a unified matrix multiplication to handle both sparse-dense matrix multiplication (SpMM) and general matrix multiplication (GEMM), enhancing Graph-OPU performance. During Graph-OPU execution, the computational units are shared between SpMM and GEMM instead of being switched, which improves the hardware utilization. Finally, we implement a hardware prototype on the Xilinx Alveo U50 and test the mainstream GNN models using various datasets. Experimental results show that Graph-OPU achieves up to 1,654× and 63× speedup, as well as up to 5,305× and 422× energy efficiency boosts, compared to implementations on CPU and GPU, respectively. Graph-OPU outperforms state-of-the-art (SOTA) end-to-end overlay accelerators for GNN, reducing latency by an average of 1.36× and improving energy efficiency by 1.41× on average. Moreover, Graph-OPU exhibits an average 1.45× speed improvement in end-to-end latency over the SOTA GNN processor. Graph-OPU represents an in-depth study of an FPGA-based overlay processor for GNNs, offering high flexibility, speedup, and energy efficiency.

  • Research Article
  • 10.1021/acs.jcim.4c01689
Robust Lightweight Graph Neural Network Framework for Accelerating Crystal Structure Prediction.
  • Jun 30, 2025
  • Journal of chemical information and modeling
  • Rushikesh Pawar + 3 more

This work presents a crystal structure prediction framework that employs a structural search using a derivative-free optimization method, with a supervised Graph Neural Network (GNN) model as the energy evaluator. We address the limitations of existing GNN-based crystal structure prediction (CSP) frameworks and propose methods for designing a robust and computationally efficient predictor. In particular, we first highlight the often-overlooked sensitivity of GNN models to weight initialization in crystal structure prediction, and to address this, we introduce a model selection framework that consistently identifies an appropriate GNN model for downstream crystal structure prediction tasks. Using this framework, we conduct a meaningful comparison of multiple GNN architectures for CSP involving a Bayesian optimization approach. Furthermore, we propose a data augmentation strategy that incorporates unrelaxed structures in the supervised training process, and additionally explore the impact of unsupervised GNN pretraining with and without augmentation on crystal structure prediction. Finally, we demonstrate that our proposed crystal structure prediction framework, in conjunction with the lightweight GNN architecture CGCNN, can achieve a level of performance comparable to that of more complex GNN architectures, which are typically computationally expensive to train and infer. The approaches introduced in this work are generic and can be extended to any GNN-based crystal structure prediction framework, paving the way for developing novel and high-throughput crystal structure predictors in the future.

  • Conference Article
  • Cited 18 times
  • 10.1109/bigdata50022.2020.9378164
Graph Neural Networks for COVID-19 Drug Discovery
  • Dec 10, 2020
  • Mark Cheung + 1 more

Deep learning has led to major advances in fields like natural language processing, computer vision, and other Euclidean data domains. Yet, many important fields have data defined on irregular domains, requiring graphs to be explicitly modeled. One such application is drug discovery. Recently, research has found that graph neural network (GNN) models, given enough data, can perform better than human-engineered fingerprints or descriptors in predicting molecular properties of potential antibiotics. We explore these state-of-the-art AI models on predicting desirable molecular properties for drugs that can inhibit SARS-CoV-2. We build upon the GNN models with ideas from recent breakthroughs in geometric deep learning, inspired by the topologies of the molecules. In this poster paper, we present an overview of the drug discovery framework, drug-target interaction framework, and GNNs. Preliminary results on two COVID-19 related datasets are encouraging, achieving a ROC-AUC of 0.72 for an FDA-approved chemical library screened against SARS-CoV-2 in vitro.

  • Conference Article
  • Cited 50 times
  • 10.1109/sibgrapi51738.2020.00035
Superpixel Image Classification with Graph Attention Networks
  • Sep 6, 2020
  • Pedro H C Avelar + 4 more

This paper presents a methodology for image classification using Graph Neural Network (GNN) models. We transform the input images into region adjacency graphs (RAGs), in which regions are superpixels and edges connect neighboring superpixels. Our experiments suggest that Graph Attention Networks (GATs), which combine graph convolutions with self-attention mechanisms, outperform other GNN models. Although raw image classifiers perform better than GATs due to information loss during the RAG generation, our methodology opens an interesting avenue of research on deep learning beyond rectangular-gridded images, such as 360-degree field-of-view panoramas. Traditional convolutional kernels of current state-of-the-art methods cannot handle panoramas, whereas the adapted superpixel algorithms and the resulting region adjacency graphs can naturally feed a GNN, without topology issues.
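The RAG construction this paper relies on can be sketched directly from a superpixel label grid: two regions are connected whenever their pixels touch. The toy grid and 4-connectivity below are illustrative assumptions, not the authors' exact pipeline.

```python
def region_adjacency(labels):
    """Build RAG edges from a 2-D superpixel label grid:
    two labels are adjacent if their pixels touch (4-connectivity)."""
    edges = set()
    rows, cols = len(labels), len(labels[0])
    for r in range(rows):
        for c in range(cols):
            for r2, c2 in ((r + 1, c), (r, c + 1)):  # down and right neighbours
                if r2 < rows and c2 < cols and labels[r][c] != labels[r2][c2]:
                    edges.add(tuple(sorted((labels[r][c], labels[r2][c2]))))
    return sorted(edges)

# A 2x4 toy segmentation with three superpixels (0, 1, 2).
grid = [[0, 0, 1, 1],
        [2, 2, 1, 1]]
rag = region_adjacency(grid)
print(rag)  # [(0, 1), (0, 2), (1, 2)]
```

In the full pipeline each node would also carry superpixel features (e.g. mean color and centroid), which the GAT then attends over.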

  • Research Article
  • 10.1016/j.vlsi.2024.102262
Pre-route timing prediction and optimization with graph neural network models
  • Aug 1, 2024
  • Integration
  • Kyungjoon Chang + 1 more


  • Research Article
  • 10.1007/s11095-025-03848-w
GraphDeep-hERG: Graph Neural Network PharmacoAnalytics for Assessing hERG-Related Cardiotoxicity.
  • Mar 26, 2025
  • Pharmaceutical research
  • Yankang Jing + 9 more

The human Ether-a-go-go Related-Gene (hERG) encodes rectifying potassium channels that play a significant role during action potential repolarization of cardiomyocytes. Blockade of the hERG channel by off-target drugs can lead to long QT syndrome, significantly increasing the risk of proarrhythmic cardiotoxicity. Traditional hERG screening methods are labor-intensive and time-consuming, so it is essential to develop computational methods that utilize existing knowledge for faster and more accurate in silico screening. Despite the wide use of deep learning and machine learning algorithms, existing computational models often rely on manually defined atomic features to represent atom nodes, which may overlook critical underlying information. We therefore provide a new method to learn the atom representation automatically. We first developed an automated atom embedding model using deep neural networks (DNNs), trained with 118,312 compounds collected from the ZINC database. We then trained a graph neural network (GNN) model with 7909 ChEMBL compounds as the classifier. The integration of our atom embedding model and GNN models formed a classifier that could effectively distinguish between hERG inhibitors and non-inhibitors. Our atom embedding model achieved 0.93 accuracy in representing structures. Our best GNN model achieved an accuracy of 0.84 and outcompeted traditional machine-learning models, as well as published AI-driven models, in external testing. These results highlight the potential of our automated atom embedding model as a standard for generating robust molecular representations. Its integration with advanced GNN algorithms offers promising assistance for screening hERG inhibitors and accelerating drug discovery and repurposing.

  • Research Article
  • 10.3390/tomography11020014
Graph Neural Network Learning on the Pediatric Structural Connectome.
  • Jan 29, 2025
  • Tomography (Ann Arbor, Mich.)
  • Anand Srinivasan + 6 more

Sex classification is a major benchmark of previous work in learning on the structural connectome, a naturally occurring brain graph that has proven useful for studying cognitive function and impairment. While graph neural networks (GNNs), specifically graph convolutional networks (GCNs), have gained popularity lately for their effectiveness in learning on graph data, achieving strong performance in adult sex classification tasks, their application to pediatric populations remains unexplored. We seek to characterize the capacity for GNN models to learn connectomic patterns on pediatric data through an exploration of training techniques and architectural design choices. Two datasets comprising an adult BRIGHT dataset (N = 147 Hodgkin's lymphoma survivors and N = 162 age-similar controls) and a pediatric Human Connectome Project in Development (HCP-D) dataset (N = 135 healthy subjects) were utilized. Two GNN models (GCN simple and GCN residual), a deep neural network (multi-layer perceptron), and two standard machine learning models (random forest and support vector machine) were trained. Architecture exploration experiments were conducted to evaluate the impact of network depth, pooling techniques, and skip connections on the ability of GNN models to capture connectomic patterns. Models were assessed across a range of metrics including accuracy, AUC score, and adversarial robustness. GNNs outperformed other models across both populations. Notably, adult GNN models achieved 85.1% accuracy in sex classification on unseen adult participants, consistent with prior studies. The extension of the adult models to the pediatric dataset and training on the smaller pediatric dataset were sub-optimal in their performance. Using adult data to augment pediatric models, the best GNN achieved comparable accuracy across unseen pediatric (83.0%) and adult (81.3%) participants.
Adversarial sensitivity experiments showed that the simple GCN remained the most robust to perturbations, followed by the multi-layer perceptron and the residual GCN. These findings underscore the potential of GNNs in advancing our understanding of sex-specific neurological development and disorders and highlight the importance of data augmentation in overcoming challenges associated with small pediatric datasets. Further, they highlight relevant tradeoffs in the design landscape of connectomic GNNs. For example, while the simpler GNN model tested exhibits marginally worse accuracy and AUC scores in comparison to the more complex residual GNN, it demonstrates a higher degree of adversarial robustness.

  • Research Article
  • 10.2514/1.d0369
Graph Neural Networks with Spatiotemporal Flow Features for Aircraft Taxi-Out-Time Prediction
  • May 22, 2025
  • Journal of Air Transportation
  • Yixiang Lim + 4 more

This paper presents a framework for modeling and predicting impeded aircraft taxi-out times based on machine learning techniques. The presented framework can be integrated into departure management systems to support the pretactical/tactical planning of departure movements and the optimization of airport resources. The taxi-out time is modeled with two components: the time taken to travel from the gate to the departure queue and the time spent in the departure queue. The first component (termed the taxiing time) is mainly affected by surface traffic conditions, while the latter component (termed the queuing time) can be more accurately modeled using characteristics derived from the departure queue. To model the spatiotemporal dependencies on traffic flow, we represent the airport taxi system as a node-link model. Flow features are derived in the form of edge attributes based on route information and movement start times. Departure trajectories utilize the same node-link representation, in the form of a subgraph incorporating additional operationally available information. The taxi-out time of each trajectory is obtained by processing the subgraph using a graph neural network (GNN) with transformer layers. Predictions from the GNN model are compared against standard methodologies by the Federal Aviation Administration (FAA) and EUROCONTROL, as well as against predictions made by gradient boosting machines (GBM), a popular decision-tree-based machine learning technique. Results show that both GNN and GBM models outperform the standard FAA and EUROCONTROL methods (with the prediction errors of the former group lower by 40–60% relative to the latter), and the novel GNN model outperforms the GBM model by a considerable margin of approximately 8 s, translating to a 10% improvement in model performance of the GNN model relative to the GBM model.
