Articles published on Hierarchical Graph
1074 Search results
- New
- Research Article
- 10.1186/s12864-026-12567-4
- Feb 5, 2026
- BMC genomics
- Yiyu Lin + 4 more
Protein post-translational modifications (PTMs) represent a core regulatory mechanism governing protein function and cellular fate, and their dynamic alterations profoundly influence critical biological processes. However, existing research primarily focuses on single-PTM site prediction and remains confined to single-modality analysis. This study introduces UniGraphPTMs, the first universal PTM site prediction framework based on multimodal fusion and graph neural networks. UniGraphPTMs employs a master-slave architecture that breaks branch independence through multi-stage interactions. We pioneer the integration of the protein structure pre-training model SaProt with ProtT5 and ESM-C, enabling comprehensive exploration of protein sequence-structure multimodal embeddings. The master branch uses xLSTM and Mamba for sequence feature extraction, while the slave branch constructs a Hierarchical Graph Neural Network for multi-level structural feature extraction. To optimize cross-modal interactions, a novel Low-Rank Cross-Attention Bidirectional Gating fusion module is designed. Furthermore, by incorporating a hierarchical contrastive loss function and a dual-modality adaptive weighting mechanism, we address the challenge of synergistic learning across multiple losses. Evaluated across 11 datasets encompassing 6 distinct PTM types, UniGraphPTMs outperforms all previous models, with average improvements of 3.27% in AUC, 4.31% in MCC, and 3.94% in AP. Finally, we conducted a proof-of-concept study on multi-PTM joint prediction.
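The abstract names a Low-Rank Cross-Attention Bidirectional Gating fusion module but gives no details. A minimal sketch of the general idea — low-rank query/key projections for cross-attention between sequence and structure features, plus a sigmoid gate blending the two branches — might look like the following; all weight matrices, shapes, and the gating form are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def low_rank_cross_attention(seq_feats, struct_feats, rank=8, seed=0):
    """Queries from the sequence branch attend to the structure branch
    through rank-`rank` projections; a sigmoid gate then blends each
    sequence feature with the structural context it attended to."""
    rng = np.random.default_rng(seed)
    d = seq_feats.shape[-1]
    Wq = rng.standard_normal((d, rank)) / np.sqrt(d)
    Wk = rng.standard_normal((d, rank)) / np.sqrt(d)
    attn = softmax((seq_feats @ Wq) @ (struct_feats @ Wk).T / np.sqrt(rank))
    context = attn @ struct_feats            # (L_seq, d) structural summary
    gate = 1.0 / (1.0 + np.exp(-(seq_feats * context).sum(-1, keepdims=True)))
    return gate * seq_feats + (1.0 - gate) * context

rng = np.random.default_rng(1)
fused = low_rank_cross_attention(rng.standard_normal((5, 16)),   # 5 residues
                                 rng.standard_normal((7, 16)))   # 7 structure nodes
```

The low-rank projections keep the attention cost at O(L·d·rank) rather than O(L·d²), which is the usual motivation for such a design.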
- New
- Research Article
- 10.3389/frai.2026.1666674
- Feb 4, 2026
- Frontiers in Artificial Intelligence
- Piyush Kumar Soni + 1 more
Implicit aspect detection aims to identify aspect categories that are not explicitly mentioned in text, but existing models struggle with four persistent challenges: aspect ambiguity, where multiple latent aspects are implied by the same expression, data imbalance and sparsity of implicit cues, contextual noise and syntactic variability in unstructured user reviews, and aspect drift, where the relevance of implicit cues changes across sentences or domains. To address these issues, this paper proposes the Transformer-Enhanced Graph Aspect Analyzer (TEGAA), a unified framework that tightly integrates dynamic expert routing, semantic representation refinement, and hierarchical graph reasoning. First, a Dynamic Expert Transformer (DET) equipped with a Dynamic Adaptive Expert Engine (DAEE) mitigates syntactic complexity and contextual noise by dynamically routing tokens to specialized expert sub-networks based on contextual and syntactic–semantic cues, enabling robust feature extraction for ambiguous implicit expressions. Second, Semantic Contrastive Learning (SCL) directly addresses data imbalance and weak implicit signals by enforcing semantic coherence among contextually related samples while increasing separability from irrelevant ones, thereby improving discriminability of sparse implicit aspect cues. Third, implicit aspect ambiguity and aspect drift are handled through a Graph-Enhanced Hierarchical Aspect Detector (GE-HAD), which models word- and sentence-level dependencies via context-aware graph attention. The incorporation of Attention Sinks prevents dominant but irrelevant tokens from overshadowing subtle implicit cues, while Pyramid Pooling aggregates multi-scale contextual information to stabilize aspect inference across varying linguistic scopes. Finally, an iterative feedback loop aligns graph-level reasoning with transformer-level expert routing, enabling adaptive refinement of aspect representations. 
Experiments on three benchmark datasets—Mobile Reviews, SemEval14, and Sentihood—demonstrate that TEGAA consistently outperforms state-of-the-art methods, achieving F1-scores above 0.88, precision above 0.89, recall above 0.87, accuracy exceeding 89%, and AUC values above 0.89. These results confirm TEGAA’s effectiveness in resolving implicit aspect ambiguity, handling noisy and imbalanced data, and maintaining robust performance across domains.
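The pyramid pooling step described above — aggregating multi-scale contextual information to stabilize inference across varying linguistic scopes — can be illustrated with a minimal sketch. The window counts and the choice of mean pooling are assumptions for illustration, not TEGAA's actual configuration:

```python
import numpy as np

def pyramid_pool(tokens, scales=(1, 2, 4)):
    """Average-pool token features at several segment counts and stack the
    results, so downstream layers see context at multiple scopes."""
    L, d = tokens.shape
    pooled = []
    for s in scales:
        bounds = np.linspace(0, L, s + 1).astype(int)
        for a, b in zip(bounds[:-1], bounds[1:]):
            pooled.append(tokens[a:b].mean(axis=0) if b > a else np.zeros(d))
    return np.stack(pooled)            # (1 + 2 + 4, d) for the default scales

feats = np.random.default_rng(0).standard_normal((10, 8))
pyr = pyramid_pool(feats)              # shape (7, 8)
```

The first row is the global mean of the sequence; finer scales append progressively more local summaries.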
- New
- Research Article
- 10.1016/j.eswa.2025.129295
- Feb 1, 2026
- Expert Systems with Applications
- Migyeong Yang + 3 more
“What is your MBTI?”: Predicting the personality types using hierarchical attention and graph learning
- New
- Research Article
- 10.1016/j.inffus.2025.103511
- Feb 1, 2026
- Information Fusion
- Xia Dong + 3 more
Unsupervised multi-view feature selection via attentive hierarchical bipartite graphs with optimizable graph filter
- New
- Research Article
- 10.1016/j.apenergy.2025.127209
- Feb 1, 2026
- Applied Energy
- Pengfei Zhao + 5 more
Missing data-aware robust electrical load forecasting based on hierarchical downsampling-upsampling spatiotemporal graph network
- New
- Research Article
- 10.1016/j.tre.2025.104586
- Feb 1, 2026
- Transportation Research Part E: Logistics and Transportation Review
- Chenxiang Ma + 3 more
Hierarchical graph neural network-based generalized graph partitioning for accelerated large-scale microscopic traffic parallel simulation
- New
- Research Article
- 10.3390/math14030500
- Jan 30, 2026
- Mathematics
- Bin Li + 3 more
Whether learners can correctly complete exercises is influenced by multiple factors, including their mastery of relevant knowledge concepts and the interdependencies among these concepts. To investigate how the structure of the knowledge space—particularly the complex relationships among learners, exercises, and knowledge points—affects learning outcomes, this study proposes the Hierarchical Heterogeneous Graph Knowledge Tracing model (HHGKT). A hierarchical heterogeneous graph was constructed to capture two types of interactions—“learner–knowledge concept” and “exercise–knowledge concept”—and incorporate the interdependencies among knowledge concepts into the graph structure. By leveraging this hierarchical representation, the model’s ability to characterize learners and exercises was enhanced. A hierarchical heterogeneous graph encompassing users, exercises, and knowledge concepts was built based on the ASSISTments dataset, and simulation experiments were conducted. The results indicate that the proposed structure effectively represents the complexity of the knowledge space. Incorporating knowledge concept interdependencies improves prediction accuracy by 1.79%, while the hierarchical heterogeneous graph outperforms traditional bipartite graphs by approximately 1.5 percentage points in accuracy. These findings demonstrate that the model better integrates node and relational information, offering valuable insights for knowledge space modeling and its application in educational contexts.
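The two interaction types and the concept-prerequisite dependencies that HHGKT places in its hierarchical heterogeneous graph can be sketched as a small typed-edge structure. The class, node names, and API below are illustrative assumptions, not the paper's implementation:

```python
from collections import defaultdict

class HeteroGraph:
    """Toy heterogeneous graph with typed edges, e.g. learner-concept,
    exercise-concept, and concept-concept (prerequisite) relations."""
    def __init__(self):
        self.edges = defaultdict(set)   # (src_type, dst_type) -> {(src, dst)}

    def add(self, src_type, src, dst_type, dst):
        self.edges[(src_type, dst_type)].add((src, dst))

    def neighbors(self, src_type, src, dst_type):
        return {d for s, d in self.edges[(src_type, dst_type)] if s == src}

g = HeteroGraph()
g.add("learner", "u1", "concept", "fractions")        # learner-concept interaction
g.add("exercise", "e7", "concept", "fractions")       # exercise-concept tagging
g.add("concept", "fractions", "concept", "division")  # prerequisite dependency

# Concepts relevant to learner u1, expanded through prerequisite edges
direct = g.neighbors("learner", "u1", "concept")
relevant = set(direct)
for c in direct:
    relevant |= g.neighbors("concept", c, "concept")
```

Expanding along the prerequisite edges is what lets such a model credit (or blame) related concepts, which is the intuition behind the reported accuracy gain from concept interdependencies.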
- New
- Research Article
- 10.1103/z57h-y3rx
- Jan 26, 2026
- Physical Review D
- Anonymous
Event reconstruction at the LHC, the task of assigning observed physics objects to their true origins, is a central challenge for precision measurements and searches. Many existing machine learning approaches address this problem but rely on a single event topology, restricting their applicability to realistic analyses where multiple signal and background processes with different structures are present. To overcome this, we present TIGER, a novel hierarchical graph network that is fundamentally topology agnostic. By incorporating only the common underlying structure of sequential two-body decays, our model can reconstruct complex events without process-specific assumptions. This flexible architecture supports multitask learning, enabling simultaneous event reconstruction and classification. TIGER thus provides a powerful and generalizable tool for physics analysis at the LHC.
- New
- Research Article
- 10.1021/acs.jafc.5c14362
- Jan 26, 2026
- Journal of agricultural and food chemistry
- Huijun Ma + 6 more
Fermented fish products are vital sources of umami peptides. In this study, a hierarchical graph attention network-based model was developed to identify candidate umami peptides. Via an integrated approach combining metagenomics, molecular docking, attention weight analysis, molecular dynamics simulations, and experimental validation, three novel umami peptides (GYSSYK, LYSDSK, and TRTKASY) were identified from the Suanyu system, a traditional fermented fish product. It was revealed that T1R1 and T1R3 could form stable complexes with these peptides involving critical residues: GLU301, ARG277, LYS328, SER384, ASP147, GLN278, and HIS71. In sensory evaluation, candidate peptides showed high umami properties with umami threshold values of 0.28 (±0.14) mg/mL. Overall, this study presents a hierarchical graph attention network-based screening methodology for the rapid screening and in-depth study of umami peptides.
- New
- Research Article
- 10.1109/tvcg.2026.3656577
- Jan 21, 2026
- IEEE transactions on visualization and computer graphics
- Chuanyang Li + 3 more
Computer-Aided Design (CAD) sketches, composed of geometric primitives and constraints, are fundamental to CAD models and play a critical role in industrial design and manufacturing. Leveraging artificial intelligence to convert hand-drawn sketches and rendered images into CAD sketches has the potential to streamline and accelerate the design process. Existing approaches predominantly focus on separately learning primitives and constraints, often employing two-stage methods or learnable tokens to model these elements independently. However, such methods fail to fully exploit the intrinsic relationships between primitives and constraints. In this paper, we propose LuBan, a lightweight, end-to-end model for CAD sketch generation that eliminates the need for separate constraint models or tokens. LuBan leverages the DEtection TRansformer (DETR) architecture for primitive modeling and distinguishes between parametric and non-parametric features. By deriving sub-primitive features from the intermediate outputs of the primitive model, LuBan facilitates constraint prediction, effectively capturing the inherent relationships between primitives and constraints. This enables the generation of CAD sketches as hierarchical graphs. Qualitative and quantitative experiments on both precise and hand-drawn renderings demonstrate that LuBan achieves state-of-the-art performance. Ablation studies further confirm its superiority over independently trained primitive models, validating its effectiveness. Additionally, LuBan embodies the principle of "what you draw is what you get," offering significant enhancements to the design process for designers.
- New
- Research Article
- 10.1186/s13638-025-02562-w
- Jan 20, 2026
- Journal on Wireless Communications and Networking
- G Akilandeswary + 1 more
Channel-aware hierarchical graph network for optimizing sliding window RLNC in challenging wireless environments
- New
- Research Article
- 10.1049/cit2.70096
- Jan 19, 2026
- CAAI Transactions on Intelligence Technology
- Fenglin Cen + 4 more
Graph contrastive learning (GCL) has emerged as a dominant paradigm for self‐supervised representation learning for attributed graph data. However, existing GCL methods heavily rely on empirical graph data augmentation, which may distort intrinsic graph semantics and produce poor generalisation without carefully chosen or designed augmentation techniques. Furthermore, most GCL approaches focus on same‐granularity contrastive learning (e.g., node vs. node), neglecting the hierarchical and multigranular properties inherent in real‐world networks, leading to suboptimal performance. To address these limitations, we propose HPoolGCL, a cross‐granularity GCL framework compatible with various hierarchical graph pooling methods to capture multigranularity information. Our framework eliminates the need for handcrafted augmentations, explicit negative sampling and complex multiencoder architectures by applying two novel loss functions in hierarchical graph pooling. The theoretical analysis is provided to explain the effectiveness of unified MGC and HiCR losses from three perspectives, namely, the information maximisation principle, the redundancy reduction principle and the information bottleneck principle. The experimental results demonstrate that HPoolGCL achieves state‐of‐the‐art performance across multiple downstream tasks on five benchmarks. Our codes are available at https://github.com/Heycen/HPoolGCL.
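The abstract invokes the redundancy reduction principle for its negative-free losses without defining them. A Barlow-Twins-style loss is the canonical instance of that principle and can serve as a stand-in sketch: it pulls the cross-correlation between two views toward the identity matrix, needing no negative samples. The views below (e.g. embeddings before and after a pooling level) and all hyperparameters are assumptions, not HPoolGCL's actual MGC/HiCR losses:

```python
import numpy as np

def redundancy_reduction_loss(z1, z2, lam=5e-3):
    """Negative-free contrastive loss: drive the cross-correlation matrix of
    two standardized views toward the identity (invariance on the diagonal,
    decorrelation off it)."""
    z1 = (z1 - z1.mean(0)) / (z1.std(0) + 1e-8)
    z2 = (z2 - z2.mean(0)) / (z2.std(0) + 1e-8)
    c = (z1.T @ z2) / len(z1)                       # cross-correlation matrix
    invariance = ((np.diag(c) - 1.0) ** 2).sum()    # diagonal -> 1
    redundancy = (c ** 2).sum() - (np.diag(c) ** 2).sum()  # off-diagonal -> 0
    return invariance + lam * redundancy

rng = np.random.default_rng(0)
fine = rng.standard_normal((64, 16))                 # e.g. node-level embeddings
coarse = fine + 0.05 * rng.standard_normal((64, 16)) # pooled view of the same graphs
aligned = redundancy_reduction_loss(fine, coarse)
mismatched = redundancy_reduction_loss(fine, rng.standard_normal((64, 16)))
```

Aligned cross-granularity views yield a much smaller loss than unrelated embeddings, which is the property a cross-granularity objective exploits.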
- Research Article
- 10.1093/bib/bbaf735
- Jan 7, 2026
- Briefings in Bioinformatics
- Cheng-Pei Lin + 7 more
Lung adenocarcinoma (LUAD), the most common subtype of non-small cell lung cancer, exhibits substantial molecular heterogeneity, complicating subtype classification, progression assessment, and treatment decision-making. Advances in high-throughput sequencing enable multi-omics analysis to reveal cancer mechanisms and biomarkers, yet the high dimensionality, heterogeneity, and interrelationships of omics layers such as transcriptome, microRNA expression, methylome, and copy number variation remain challenging to integrate through conventional methods. Most existing graph-based approaches represent patients as nodes, obscuring gene-level regulatory dynamics and limiting biological interpretability. To address this, we propose the Multi-omics Hierarchical Graph Neural Network (MoAGNN), a novel architecture that represents genes as nodes, integrates four omics layers, and leverages graph convolution with self-attention–based graph pooling to identify informative molecular nodes, thereby enhancing predictive performance and interpretability for LUAD subtype classification, tumor staging, and prognosis prediction. Multi-omics datasets from The Cancer Genome Atlas (TCGA) were used, and the results showed that MoAGNN achieved a test accuracy of 0.89 for LUAD subtype classification, outperforming conventional models (Random Forest, Support Vector Machine, and Multi-Layer Perceptron) as well as state-of-the-art graph-based models MoGCN, a multi-omics integration model based on graph convolutional networks, and MOGLAM, an end-to-end interpretable multi-omics integration method. Furthermore, we validated the generalizability of this framework on the GSE81089 dataset, demonstrating its potential applicability to clinically relevant risk assessment.
Subsequent functional enrichment and survival analyses validated the biological relevance of the key genes identified by MoAGNN, supporting their potential roles in LUAD progression, and suggesting the broader applicability of this framework in multi-omics cancer research.
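The self-attention–based graph pooling the abstract relies on to surface informative molecular nodes can be sketched SAGPool-style: score each node with one propagation step, keep the top fraction, and gate the kept features by their scores. The scoring projection, keep ratio, and normalization below are illustrative assumptions, not MoAGNN's exact layer:

```python
import numpy as np

def sag_pool(X, A, ratio=0.5, seed=0):
    """Score nodes via one graph-convolution pass, keep the top `ratio`
    fraction, and gate the kept features by tanh(score)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = rng.standard_normal(d) / np.sqrt(d)        # scoring projection
    deg = A.sum(1) + 1.0
    A_hat = (A + np.eye(n)) / deg[:, None]         # row-normalized, self-loops
    scores = A_hat @ X @ w                         # one propagation step
    keep = np.argsort(scores)[::-1][: max(1, int(n * ratio))]
    X_p = X[keep] * np.tanh(scores[keep])[:, None]
    return X_p, A[np.ix_(keep, keep)], keep

rng = np.random.default_rng(1)
n, d = 8, 4
A = (rng.random((n, n)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                     # symmetric, no self-loops
X = rng.standard_normal((n, d))
X_p, A_p, kept = sag_pool(X, A, ratio=0.5)
```

Because the kept node indices are returned, the same mechanism doubles as an interpretability device: in a gene-as-node graph, `kept` names the genes the model considered informative.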
- Research Article
- 10.1371/journal.pone.0339881
- Jan 5, 2026
- PLOS One
- Aocheng Zuo + 5 more
Federated learning (FL) enables collaborative model training across distributed intelligent devices while preserving data privacy. In smart healthcare networks, medical institutions can jointly learn from distributed patient data using graph neural networks (GNNs). This approach improves diagnostic accuracy without compromising patient confidentiality. However, federated GNNs face substantial challenges. These include gradient privacy vulnerabilities, computational overhead from homomorphic encryption, and susceptibility to Byzantine attacks. This paper presents FedGraphHE, a privacy-preserving federated GNN framework for secure collaborative intelligence. Our methodology integrates three synergistic modules. First, Dynamic Adaptive Partitioned Homomorphic Encryption (DAPHE) optimizes gradient transmission. Second, Hierarchical Multi-scale Adaptive Graph Transformer (HMAGT) enables encryption-aware graph processing. Third, Federated Robust Aggregation via Homomorphic Inner Product (FRAHIP) provides Byzantine-resilient aggregation. Experimental results demonstrate FedGraphHE’s effectiveness across multiple scenarios. The framework consistently outperforms existing privacy-preserving methods on citation network benchmarks (Cora, CiteSeer, PubMed). It achieves 98.18% classification accuracy on medical imaging datasets (ISIC 2020), and reduces communication costs by approximately 25% compared to existing homomorphic encryption baselines. The framework maintains over 95% accuracy under Byzantine attacks, establishing it as an effective solution for privacy-sensitive collaborative learning applications.
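The abstract does not detail the FRAHIP aggregation rule, but the classic building block of Byzantine-resilient aggregation is a robust statistic such as the coordinate-wise median, shown here as a generic stand-in (not the paper's construction over homomorphic inner products):

```python
import numpy as np

def robust_aggregate(client_updates):
    """Coordinate-wise median across client gradient updates: a classic
    Byzantine-resilient rule that a minority of malicious clients cannot
    drag arbitrarily far, unlike the plain mean."""
    return np.median(np.stack(client_updates), axis=0)

honest = [np.array([1.0, -2.0, 0.5]) for _ in range(4)]
byzantine = [np.array([1e6, 1e6, 1e6])]      # one malicious client
agg = robust_aggregate(honest + byzantine)   # stays at the honest values
naive = np.mean(np.stack(honest + byzantine), axis=0)  # corrupted by the attacker
```

With four honest clients and one attacker, the median recovers the honest update exactly, while the mean is pulled five orders of magnitude off.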
- Research Article
- 10.1109/access.2026.3651796
- Jan 1, 2026
- IEEE Access
- Lin Jiaxin + 4 more
HG-MRLS: A Hierarchical Graph Meta-Reinforcement Learning Framework for Dynamic Scheduling in Heterogeneous Computing Networks
- Research Article
- 10.1007/978-3-032-04947-6_2
- Jan 1, 2026
- Medical Image Computing and Computer-Assisted Intervention (MICCAI)
- Tong Chen + 10 more
Alzheimer's Disease (AD) and Lewy Body Dementia (LBD) often exhibit overlapping pathologies, leading to common symptoms that make diagnosis challenging and protracted in clinical settings. While many studies achieve promising accuracy in identifying AD and LBD at earlier stages, they often focus on discrete classification rather than capturing the gradual nature of disease progression. Since dementia develops progressively, understanding the continuous trajectory of dementia is crucial, as it allows us to uncover hidden patterns in cognitive decline and provides critical insights into the underlying mechanisms of disease progression. To address this gap, we propose a novel multi-scale learning framework that leverages hierarchical anatomical features to model the continuous relationships across various neurodegenerative conditions, including Mild Cognitive Impairment, AD, and LBD. Our approach employs the proposed hierarchical graph embedding fusion technique, integrating anatomical features, cortical folding patterns, and structural connectivity at multiple scales. This integration captures both fine-grained and coarse anatomical details, enabling the identification of subtle patterns that enhance differentiation between dementia types. Additionally, our framework projects each subject onto continuous tree structures, providing intuitive visualizations of disease trajectories and offering a more interpretable way to track cognitive decline. To validate our approach, we conduct extensive experiments on our in-house dataset of 308 subjects spanning multiple groups. Our results demonstrate that the proposed tree-based model effectively represents dementia progression, achieves promising performance in intricate classification task of AD and LBD, and highlights discriminative brain regions that contribute to the differentiation between dementia types. Our code is available at https://github.com/tongchen2010/haff.
- Research Article
- 10.1016/j.media.2025.103816
- Jan 1, 2026
- Medical image analysis
- Peng Li + 2 more
Anatomical structure-guided joint spatiotemporal graph embedding framework for magnetic resonance fingerprint reconstruction.
- Research Article
- 10.1021/acs.jcim.5c02436
- Dec 31, 2025
- Journal of chemical information and modeling
- Zhijun Zhang + 3 more
Drug-target affinity (DTA) prediction is crucial in drug discovery. It enables researchers to elucidate the complex interaction mechanisms between candidate drugs and biological targets. However, current methods have limitations in capturing global structural patterns from molecular graphs, which are essential for accurate characterization of drugs and proteins. The absence of three-dimensional (3D) structural data leads to the loss of molecular structural information, which impairs model accuracy and generalizability. To resolve these issues, we propose a multimodal framework, PMHGT-DTA, to predict DTA using pretrained models and a hierarchical graph transformer (HGT). It integrates graph neural networks (GNNs) with transformers to represent both local node features and global structural information on molecular graphs. Both 3D conformation drug graphs and binding site-focused protein graphs, derived from pretrained models, are incorporated to complement sequence-modality features. In addition, a cross-attention module models the interactions between drug atoms and protein amino acid residues to establish drug-target relationships, thereby enhancing the interpretability of the model. Experiments on the Davis and KIBA benchmark datasets show that PMHGT-DTA outperforms baselines in both standard and real-world scenarios, demonstrating its potential to accelerate drug development.
- Research Article
- 10.14445/23488379/ijeee-v12i12p115
- Dec 30, 2025
- International Journal of Electrical and Electronics Engineering
- Manoj B Maurya + 3 more
Robust and adaptable Maximum Power Point Tracking (MPPT) strategies for Photovoltaic (PV) systems have long been a subject of widespread research, owing to the far-reaching impact of partial shading conditions (PSCs), which seriously degrade energy-harvesting efficiency. Classical MPPT algorithms such as Perturb & Observe and Incremental Conductance suffer from false convergence, slow adaptation, and limited generalization under dynamically changing shading patterns, and therefore often perform poorly in real-world environments. This paper proposes an integrated AI-powered MPPT framework designed to tackle real-time power optimization under PSCs. The system comprises five tightly coupled modules. Contextual Hierarchical Transfer Graph Embedding (CHTGE) implements transfer learning across varied environmental conditions via policy graphs built from shading history combined with weather context. Spatio-Temporal Feature Attention-based Indexing (STFAI) detects transient phenomena using temporally aligned attention maps derived from real-time multimodal sensor data. Differential Contextual Residual Optimization (DCRO) corrects inaccuracies and achieves rapid stabilization by applying residual corrections in highly fluctuating environments. The output of conventional MPPT methods is upgraded through Multi-Agent Decision Fusion with Quantum-inspired Adaptive Logic (MADF-QAL). Finally, Evolution-based Causal Disentanglement Networks (ECDN) provide fault localization and explainability through latent representations.
The system shows an estimated 28% improvement in sensing partial shading patterns, a 33% reduction in false triggers, approximately 2.3 times faster recovery from power deviations, and a 41% improvement in decision-making, enabling faster fault analysis. The proposed work thus offers an interpretable, resilient, and intelligent MPPT control framework for real-world operating scenarios.
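The classical Perturb & Observe baseline the abstract criticizes is simple enough to state exactly: keep perturbing the operating voltage in the direction that last increased power, and reverse when power drops. The toy power curve below is an illustrative assumption; under partial shading the real curve has multiple local peaks, which is precisely where this rule falsely converges:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.01):
    """One Perturb & Observe step: move the operating voltage in the
    direction that last increased power; reverse otherwise."""
    return v + step if (p - p_prev) * (v - v_prev) >= 0 else v - step

def power(v):
    """Toy single-peak PV power curve with its maximum at v = 0.6."""
    return 1.0 - (v - 0.6) ** 2

v_prev, v = 0.20, 0.21
for _ in range(200):
    v_next = perturb_and_observe(v, power(v), v_prev, power(v_prev))
    v_prev, v = v, v_next
# v now oscillates around the maximum power point at 0.6
```

On a single-peak curve the rule climbs to the maximum and then oscillates within one step of it; on a multi-peak shaded curve it can lock onto whichever local peak it reaches first, motivating the adaptive framework above.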
- Research Article
- 10.3390/jmse14010066
- Dec 30, 2025
- Journal of Marine Science and Engineering
- Sushil Acharya + 3 more
Precise depth alignment of well logs is essential for reliable subsurface characterization, enabling accurate correlation of geological features across multiple wells. This study presents the Deep Hierarchical Graph Correlator (DHGC), a two-stage deep learning framework for scalable and automated well-log depth alignment. DHGC aligns a target log to a reference log by comparing fixed-size windows extracted from both signals. In the first stage, a one-dimensional convolutional neural network (1D CNN) trained on 177,026 triplets using triplet-margin loss learns discriminative embeddings of gamma-ray (GR) log windows from eight Norwegian North Sea wells. In the second stage, a feedforward scoring network evaluates embedded window pairs to estimate local similarity. Dynamic programming then computes the optimal nonlinear warping path from the resulting cost matrix. The feature extractor achieved 99.6% triplet accuracy, and the scoring network achieved 98.93% classification accuracy with an ROC-AUC of 0.9971. Evaluation on 89 unseen GR log pairs demonstrated that DHGC improves the mean Pearson correlation coefficient from 0.35 to 0.91, with successful alignment in 88 cases (98.9%). DHGC achieved an 8.2× speedup over DTW (3.16 s versus 25.83 s per log pair). While DTW achieves a higher mean correlation (0.96 versus 0.91), DHGC avoids singularity artifacts and exhibits lower variability in distance metrics than CC, suggesting improved robustness and scalability for well-log synchronization.
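The second-stage dynamic program — recovering the optimal nonlinear warping path from a cost matrix — is the standard DTW recurrence and can be sketched directly. The three-neighbor step pattern below is the textbook choice; the paper's exact step constraints are an assumption here:

```python
def warping_path(cost):
    """DTW-style dynamic program: cheapest monotone path from (0, 0) to
    (n-1, m-1) through a local dissimilarity matrix, plus its total cost."""
    n, m = len(cost), len(cost[0])
    INF = float("inf")
    acc = [[INF] * m for _ in range(n)]
    acc[0][0] = cost[0][0]
    for i in range(n):
        for j in range(m):
            if i == j == 0:
                continue
            best = min(acc[i - 1][j] if i else INF,          # vertical step
                       acc[i][j - 1] if j else INF,          # horizontal step
                       acc[i - 1][j - 1] if i and j else INF)  # diagonal step
            acc[i][j] = cost[i][j] + best
    # Backtrack from the corner, always taking the cheapest predecessor.
    path, i, j = [(n - 1, m - 1)], n - 1, m - 1
    while (i, j) != (0, 0):
        moves = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((p for p in moves if p[0] >= 0 and p[1] >= 0),
                   key=lambda p: acc[p[0]][p[1]])
        path.append((i, j))
    return path[::-1], acc[n - 1][m - 1]

cost = [[1, 2, 3],
        [2, 1, 2],
        [3, 2, 1]]
path, total = warping_path(cost)   # diagonal path, total cost 3
```

In DHGC the cost matrix comes from the scoring network over window embeddings rather than raw sample distances, which is what shrinks the matrix and yields the reported speedup over full-resolution DTW.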