Enhancing Graph-Based Sentiment Analysis Models with Hill Climbing
In the current study, sentiment graphs were constructed in which nodes represented emotion-laden words and edges depicted their weighted semantic associations. To improve the model, the hill climbing method was employed, iteratively adjusting parameters to achieve progressively higher classification accuracy. The developed system combined graph neural networks (GNNs) with hill climbing-based optimisation to improve the efficiency of sentiment categorisation. The experimental outcomes reveal that the proposed model reached a maximum accuracy of 96.95%, higher than traditional sentiment analysis methods, demonstrating its suitability for emotion-aware text representation. The findings confirm that GNN-based sentiment representation and hill climbing optimisation effectively leverage intricate emotional relationships, resulting in better sentiment classification. Graphs illustrating optimisation progress and the structure of the sentiment graph further demonstrate the effectiveness of the method.
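The abstract does not say which parameters the hill climbing step tuned, but the loop it describes is generic: propose a random perturbation and keep it only if the score improves. A minimal sketch, with a toy one-parameter objective standing in for the real classifier's accuracy:

```python
import random

def hill_climb(score, x0, step=0.1, iters=200, seed=0):
    """Basic hill climbing: accept a random perturbation only if it raises the score."""
    rng = random.Random(seed)
    x, best = x0, score(x0)
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        s = score(cand)
        if s > best:  # keep only strict improvements
            x, best = cand, s
    return x, best

def toy_accuracy(x):
    # Stand-in objective with a single peak at x = 0.7 (not the paper's real metric).
    return 1.0 - (x - 0.7) ** 2

x_opt, acc = hill_climb(toy_accuracy, x0=0.0)
```

Because only improving moves are kept, the method converges to the nearest local optimum; restarting from several `x0` values is the usual guard against poor starts.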
- Research Article
- 10.1038/s40494-025-01773-0
- May 15, 2025
- npj Heritage Science
Minnan nursery rhymes (MNRs), an integral part of Minnan intangible cultural heritage (ICH), are shared by southern Fujian, Taiwan, and overseas Chinese communities. Preserving and analyzing MNRs, especially their emotional evolution over time, is crucial but challenging due to shifting cultural contexts. Traditional sentiment analysis methods often overlook intrinsic relationships among nursery rhymes. To address this, we construct an MNR network using textual feature vectors and cosine similarity thresholds, enabling the exploration of structural and emotional patterns. Inspired by graph neural networks, we propose a Joint-GraphSAGE model for sentiment classification, which effectively captures complex relationships and semantic nuances. Comparative experiments with classical machine learning and deep learning models demonstrate that the Joint-GraphSAGE model significantly outperforms baselines in sentiment classification tasks for both traditional and modern MNRs. This approach not only enhances sentiment analysis accuracy for MNRs, but also offers new perspectives for studying ICH cultural connections.
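The network-construction step described above, linking two texts when the cosine similarity of their feature vectors crosses a threshold, can be sketched as follows; the four short vectors and the 0.8 threshold are illustrative stand-ins for real MNR feature vectors:

```python
import math

def cosine(u, v):
    """Cosine similarity of two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def build_graph(vectors, threshold):
    """Connect nodes i and j when their cosine similarity meets the threshold."""
    n = len(vectors)
    edges = set()
    for i in range(n):
        for j in range(i + 1, n):
            if cosine(vectors[i], vectors[j]) >= threshold:
                edges.add((i, j))
    return edges

# Toy feature vectors for four texts; two near-duplicate pairs.
vecs = [[1, 0, 0], [0.9, 0.1, 0], [0, 1, 0], [0, 0.95, 0.05]]
edges = build_graph(vecs, threshold=0.8)  # -> {(0, 1), (2, 3)}
```

The threshold controls graph density: lower values yield more edges for message passing at the cost of noisier connections.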
- Research Article
- 10.48175/ijarsct-22980
- Jan 13, 2025
- International Journal of Advanced Research in Science, Communication and Technology
Social network analysis (SNA) is an important approach for understanding complex linkages and interactions between entities. Traditional approaches frequently fail to capture the complexities of network data due to its non-Euclidean character. Graph Neural Networks (GNNs) offer an innovative approach to data analysis by modelling node, edge, and graph features using graph structures and neural network topologies. This study investigates the use of GNNs in social network analysis, focusing on problems such as community recognition, influence maximization, link prediction, and sentiment analysis. Our analysis of cutting-edge GNN models shows how they effectively capture and utilize topological and contextual information from social networks.
- Research Article
- 10.3390/math13060997
- Mar 18, 2025
- Mathematics
Sentiment analysis in Chinese microblogs is challenged by complex syntactic structures and fine-grained sentiment shifts. To address these challenges, a Contextually Enriched Graph Neural Network (CE-GNN) is proposed, integrating self-supervised learning, context-aware sentiment embeddings, and Graph Neural Networks (GNNs) to enhance sentiment classification. First, CE-GNN is pre-trained on a large corpus of unlabeled text through self-supervised learning, where Masked Language Modeling (MLM) and Next Sentence Prediction (NSP) are leveraged to obtain contextualized embeddings. These embeddings are then refined through a context-aware sentiment embedding layer, which is dynamically adjusted based on the surrounding text to improve sentiment sensitivity. Next, syntactic dependencies are captured by Graph Neural Networks (GNNs), where words are represented as nodes and syntactic relationships are denoted as edges. Through this graph-based structure, complex sentence structures, particularly in Chinese, can be interpreted more effectively. Finally, the model is fine-tuned on a labeled dataset, achieving state-of-the-art performance in sentiment classification. Experimental results demonstrate that CE-GNN achieves superior accuracy, with a Macro F-measure of 80.21% and a Micro F-measure of 82.93%. Ablation studies further confirm that each module contributes significantly to the overall performance.
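The graph component described above, with words as nodes and syntactic dependencies as edges, reduces to neighbourhood aggregation. A minimal sketch using mean aggregation over a hypothetical three-word dependency graph (CE-GNN's actual aggregation operator is not given in the abstract):

```python
def mean_aggregate(features, adj):
    """One GNN-style propagation step: each node averages its own feature
    vector with those of its neighbours."""
    out = {}
    for node, feat in features.items():
        neigh = [features[m] for m in adj.get(node, [])] + [feat]
        out[node] = [sum(vals) / len(neigh) for vals in zip(*neigh)]
    return out

# Hypothetical dependency edges: "not" modifies "good", "good" modifies "movie".
adj = {"not": ["good"], "good": ["not", "movie"], "movie": ["good"]}
feats = {"not": [-1.0, 0.0], "good": [1.0, 1.0], "movie": [0.0, 0.5]}
updated = mean_aggregate(feats, adj)
```

After one step the representation of "good" already mixes in the negator "not", which is how graph propagation lets sentiment shifts travel along syntactic edges.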
- Research Article
- 10.1109/tkde.2021.3112746
- Jan 1, 2021
- IEEE Transactions on Knowledge and Data Engineering
Graph Neural Networks (GNNs) are emerging machine learning models on graphs. Although sufficiently deep GNNs are theoretically capable of fully preserving graph structures, most existing GNN models in practice are shallow and essentially feature-centric. We show empirically and analytically that existing shallow GNNs cannot preserve graph structures well. To overcome this fundamental challenge, we propose Eigen-GNN, a simple yet effective and general plug-in module to boost the ability of GNNs to preserve graph structures. Specifically, we integrate the eigenspace of graph structures with GNNs by treating GNNs as a type of dimensionality reduction and expanding the initial dimensionality reduction bases. Without needing to increase depth, Eigen-GNN offers greater flexibility in handling both feature-driven and structure-driven tasks, since the initial bases contain both node features and graph structures. We present extensive experimental results to demonstrate the effectiveness of Eigen-GNN for tasks including node classification, link prediction, and graph isomorphism tests.
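The core idea, expanding the initial bases with the eigenspace of the graph structure, can be illustrated by appending a leading adjacency eigenvector to the node features. A pure-Python sketch; the identity shift in the power iteration is our addition to keep it convergent on bipartite graphs, and Eigen-GNN's exact eigenbasis choice is in the paper, not the abstract:

```python
import math

def power_iteration(A, iters=100):
    """Leading eigenvector of A + I via power iteration (the identity shift
    keeps the iteration convergent even on bipartite graphs, whose extreme
    eigenvalues of A have equal magnitude)."""
    n = len(A)
    v = [1.0] * n
    for _ in range(iters):
        w = [v[i] + sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v

def eigen_augment(features, A):
    """Eigen-GNN-style input: append a structural eigenbasis column to node features."""
    v = power_iteration(A)
    return [feat + [v[i]] for i, feat in enumerate(features)]

# Toy 3-node path graph 0-1-2 with 1-dimensional node features.
A = [[0, 1, 0], [1, 0, 1], [0, 1, 0]]
X = [[0.2], [0.5], [0.2]]
Xa = eigen_augment(X, A)
```

For the path graph the leading eigenvector is proportional to (1, √2, 1), so the centre node receives the largest structural coordinate, information a shallow feature-centric GNN would not otherwise see.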
- Conference Article
- 10.1109/ihmsc52134.2021.00044
- Aug 1, 2021
Graph Neural Networks (GNNs) are an important branch of deep learning on graph-structured data. As models that can reveal deep topological information, GNNs have been widely used in various learning tasks, including physical systems, protein interface prediction, disease classification, and learning molecular fingerprints. However, in most of these tasks, the graph data may be noisy and may contain spurious edges; that is, there is considerable uncertainty associated with the underlying graph structure. A natural way to model this uncertainty is the Bayesian framework, in which the graph is regarded as a random variable. Introducing the Bayesian framework into graph-based models, especially for semi-supervised node classification, has been shown to produce higher classification accuracy. In this paper, several GNN models and Bayesian neural networks are introduced to better understand how GNNs are combined with Bayesian methods. The development of Bayesian graph neural networks (BGNNs) in recent years is then summarized, and their applications in engineering are demonstrated. Finally, future directions for BGNNs are discussed and the paper is summarized.
- Conference Article
- 10.1109/icssit55814.2023.10061017
- Jan 23, 2023
In recent times, Sentiment Analysis (SA) has acquired important attention in decision making, primarily for the classification and extraction of the sentiments present in online reviews posted by users. SA can be framed as a sentiment classification (SC) problem in which online reviews are classified into negative and positive polarities based on the words they contain. This study focuses on the design of a Robust Extreme Learning Machine based Sentiment Analysis and Classification (RELM-SAC) model. The presented RELM-SAC model mainly aims to determine the nature of the sentiments present in the input data. First, the input data is thoroughly pre-processed to remove unwanted data, which helps enhance classification accuracy and minimize computational complexity. The RELM-SAC model then applies an extreme learning machine (ELM) to assign proper class labels. To adjust the parameters of the ELM, the comprehensive learning particle swarm optimization (CLPSO) technique is used. The performance of the RELM-SAC model is assessed on a benchmark database and the outcomes are scrutinized under numerous aspects. The simulation outcomes indicate that the RELM-SAC method obtains improved outcomes over other models.
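The ELM stage admits a compact sketch: hidden-layer weights are drawn at random and only the output weights are solved in closed form via ridge-regularised least squares. The two-feature toy data below (positive-word vs. negative-word counts) is illustrative, and the CLPSO tuning step is omitted:

```python
import math, random

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def elm_train(X, y, hidden=6, ridge=1e-3, seed=0):
    """Extreme Learning Machine: random tanh hidden layer, output weights in
    closed form from (H^T H + ridge*I) beta = H^T y."""
    rng = random.Random(seed)
    d = len(X[0])
    W = [[rng.uniform(-1, 1) for _ in range(d)] for _ in range(hidden)]
    b = [rng.uniform(-1, 1) for _ in range(hidden)]
    H = [[math.tanh(sum(W[j][k] * x[k] for k in range(d)) + b[j])
          for j in range(hidden)] for x in X]
    A = [[sum(H[i][p] * H[i][q] for i in range(len(X))) + (ridge if p == q else 0.0)
          for q in range(hidden)] for p in range(hidden)]
    rhs = [sum(H[i][p] * y[i] for i in range(len(X))) for p in range(hidden)]
    beta = gauss_solve(A, rhs)

    def predict(x):
        h = [math.tanh(sum(W[j][k] * x[k] for k in range(d)) + b[j]) for j in range(hidden)]
        return sum(h[j] * beta[j] for j in range(hidden))
    return predict

# Toy reviews encoded as (positive words, negative words); label +1 / -1.
X = [[1, 0], [0, 1], [2, 0], [0, 2], [2, 1], [1, 2]]
y = [1, -1, 1, -1, 1, -1]
predict = elm_train(X, y)
```

Because only `beta` is learned, training is a single linear solve; this is what makes a swarm-based search over the remaining hyperparameters (as in CLPSO) affordable.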
- Research Article
- 10.62762/tetai.2025.740330
- Feb 18, 2026
- ICCK Transactions on Emerging Topics in Artificial Intelligence
Graph Neural Networks (GNNs) have become increasingly prominent in Natural Language Processing (NLP) due to their ability to model intricate relationships and contextual connections between texts. Unlike traditional NLP methods, which typically process text linearly, GNNs utilize graph structures to represent the complex relationships between texts more effectively. This capability has led to significant advancements in various NLP applications, such as social media interaction analysis, sentiment analysis, text classification, and information extraction. Notably, GNNs excel in scenarios with limited labeled data, often outperforming traditional approaches by providing deeper, context-aware solutions. Their versatility in handling different data types has made GNNs a popular choice in NLP research. In this study, we thoroughly explored the application of GNNs across various NLP tasks, demonstrating their advantages in understanding and representing text relationships. We also examined how GNNs address traditional NLP challenges, showcasing their potential to deliver more meaningful and accurate results. Our research underscores the value of GNNs as a potent tool in NLP and suggests future research directions to enhance their applicability and effectiveness further.
- Conference Article
- 10.1109/dac18074.2021.9586181
- Dec 5, 2021
In recent years, Graph Neural Networks (GNNs) have emerged as state-of-the-art algorithms for analyzing non-Euclidean graph data. By applying deep learning to extract high-level representations from graph structures, GNNs achieve extraordinary accuracy and great generalization ability in various tasks. However, with ever-increasing graph sizes, more complicated GNN layers, and higher feature dimensions, the computational complexity of GNNs grows rapidly. Performing GNN inference in real time has become a challenging problem, especially on resource-limited edge-computing platforms. To tackle this challenge, we propose BlockGNN, a software-hardware co-design approach to realize efficient GNN acceleration. At the algorithm level, we propose to leverage block-circulant weight matrices to greatly reduce the complexity of various GNN models. At the hardware design level, we propose a pipelined CirCore architecture, which supports efficient block-circulant matrix computation. Based on CirCore, we present a novel BlockGNN accelerator to compute various GNNs with low latency. Moreover, to determine the optimal configurations for diverse deployed tasks, we also introduce a performance and resource model that helps choose the optimal hardware parameters automatically. Comprehensive experiments on the ZC706 FPGA platform demonstrate that, on various GNN tasks, BlockGNN achieves up to 8.3× speedup compared to the baseline HyGCN architecture and 111.9× energy reduction compared to the Intel Xeon CPU platform.
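The algorithm-level compression is concrete enough to sketch: a block-circulant weight matrix stores one column per block instead of the full block, cutting storage from b² to b entries per block. A naive (FFT-free) sketch with made-up block values; hardware designs like CirCore would use FFTs for O(b log b) per-block products:

```python
def circulant_matvec(col, x):
    """Multiply a circulant matrix, defined by its first column, with a vector:
    y[i] = sum_j col[(i - j) mod n] * x[j]."""
    n = len(col)
    return [sum(col[(i - j) % n] * x[j] for j in range(n)) for i in range(n)]

def block_circulant_matvec(blocks, x, bsize):
    """y = W x where W is a grid of bsize-by-bsize circulant blocks and
    blocks[r][c] holds the first column of block (r, c)."""
    rows, cols = len(blocks), len(blocks[0])
    y = [0.0] * (rows * bsize)
    for r in range(rows):
        for c in range(cols):
            seg = circulant_matvec(blocks[r][c], x[c * bsize:(c + 1) * bsize])
            for k in range(bsize):
                y[r * bsize + k] += seg[k]
    return y

# A 2x2 grid of 2x2 circulant blocks acting on a length-4 vector.
blocks = [[[1, 2], [0, 1]],
          [[3, 0], [1, 1]]]
y = block_circulant_matvec(blocks, [1.0, 2.0, 3.0, 4.0], bsize=2)  # -> [9.0, 7.0, 10.0, 13.0]
```

Here the 4×4 weight matrix is represented by 8 numbers instead of 16; the savings grow quadratically with block size, which is the compression BlockGNN exploits.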
- Research Article
- 10.1109/tkde.2022.3197554
- Jan 1, 2022
- IEEE Transactions on Knowledge and Data Engineering
Graph neural networks (GNNs) have shown great power in modeling graph structured data. However, similar to other machine learning models, GNNs may make biased predictions with respect to protected sensitive attributes, e.g., skin color and gender. This is because machine learning algorithms, including GNNs, are trained to reflect the distribution of the training data, which often contains historical bias towards sensitive attributes. In addition, we empirically show that the discrimination in GNNs can be magnified by graph structures and the message-passing mechanism of GNNs. As a result, the applications of GNNs in high-stake domains such as crime rate prediction would be largely limited. Though extensive studies of fair classification have been conducted on independently and identically distributed (i.i.d.) data, methods to address the problem of discrimination on non-i.i.d. data are rather limited. Generally, learning fair models requires abundant sensitive attributes to regularize the model. However, for many graphs such as social networks, users are reluctant to share sensitive attributes. Thus, only limited sensitive attributes are available for fair GNN training in practice. Moreover, directly collecting and applying the sensitive attributes in fair model training may cause privacy issues, because the sensitive information can be leaked in a data breach or through attacks on the trained model. Therefore, we study a novel and important problem of learning fair GNNs with a limited number of private sensitive attributes, i.e., sensitive attributes that are processed with a privacy-preserving mechanism. In an attempt to address these problems, FairGNN is proposed to eliminate the bias of GNNs whilst maintaining high node classification accuracy by leveraging graph structures and limited sensitive information. To further preserve privacy, private sensitive attributes with a privacy guarantee are obtained by injecting noise based on local differential privacy. We further extend FairGNN to NT-FairGNN to handle the limited and private sensitive attributes, simultaneously achieving fairness and preserving privacy. Theoretical analysis and extensive experiments on real-world datasets demonstrate the effectiveness of FairGNN and NT-FairGNN in achieving fair and highly accurate classification.
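The local differential privacy mechanism named in the abstract is classically instantiated, for a binary sensitive attribute, by randomized response: each user reports the true bit with probability e^ε/(e^ε+1) and the flipped bit otherwise. A sketch of that mechanism and its debiased population estimate (the paper's exact noise mechanism may differ):

```python
import math, random

def randomized_response(bit, epsilon, rng):
    """Report the true bit with probability e^eps / (e^eps + 1), else flip it.
    No individual report reveals the true attribute with certainty."""
    p_true = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    return bit if rng.random() < p_true else 1 - bit

def debias_mean(reports, epsilon):
    """Unbiased estimate of the true mean: E[report] = (2p-1)*mu + (1-p)."""
    p = math.exp(epsilon) / (math.exp(epsilon) + 1.0)
    noisy_mean = sum(reports) / len(reports)
    return (noisy_mean - (1 - p)) / (2 * p - 1)

rng = random.Random(42)
true_bits = [1] * 700 + [0] * 300  # true mean 0.7
reports = [randomized_response(b, epsilon=1.0, rng=rng) for b in true_bits]
estimate = debias_mean(reports, epsilon=1.0)
```

A fairness regularizer only needs such aggregate statistics, which is why noisy per-user reports can suffice for fair training while each individual retains plausible deniability.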
- Research Article
- 10.1016/j.neucom.2022.01.064
- Jan 22, 2022
- Neurocomputing
A unified structure learning framework for graph attention networks
- Research Article
- 10.1016/j.asoc.2023.110040
- Jan 20, 2023
- Applied Soft Computing
EGNN: Graph structure learning based on evolutionary computation helps more in graph neural networks
- Research Article
- 10.1007/s41109-021-00423-1
- Oct 26, 2021
- Applied Network Science
Graph Neural Networks (GNNs) are effective in many applications. Still, there is a limited understanding of the effect of common graph structures on the learning process of GNNs. To fill this gap, we study the impact of community structure and homophily on the performance of GNNs in semi-supervised node classification on graphs. Our methodology consists of systematically manipulating the structure of eight datasets, and measuring the performance of GNNs on the original graphs and the change in performance in the presence and the absence of community structure and/or homophily. Our results show the major impact of both homophily and communities on the classification accuracy of GNNs, and provide insights on their interplay. In particular, by analyzing community structure and its correlation with node labels, we are able to make informed predictions on the suitability of GNNs for classification on a given graph. Using an information-theoretic metric for community-label correlation, we devise a guideline for model selection based on graph structure. With our work, we provide insights on the abilities of GNNs and the impact of common network phenomena on their performance. Our work improves model selection for node classification in semi-supervised settings.
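An information-theoretic community-label correlation metric of the kind the study uses can be sketched as plain mutual information between community assignments and node labels (the paper's exact metric is not given in the abstract):

```python
import math
from collections import Counter

def mutual_information(communities, labels):
    """Mutual information (in nats) between community assignments and labels;
    higher values suggest communities are informative about labels."""
    n = len(labels)
    joint = Counter(zip(communities, labels))
    pc = Counter(communities)
    pl = Counter(labels)
    mi = 0.0
    for (c, l), cnt in joint.items():
        pxy = cnt / n
        mi += pxy * math.log(pxy / ((pc[c] / n) * (pl[l] / n)))
    return mi

comm = [0, 0, 0, 1, 1, 1]
# Labels perfectly aligned with communities: GNN-friendly structure.
aligned = mutual_information(comm, ['a', 'a', 'a', 'b', 'b', 'b'])
# Labels nearly independent of communities: structure carries little signal.
mixed = mutual_information(comm, ['a', 'b', 'a', 'b', 'a', 'b'])
```

In the spirit of the guideline above, a high score would suggest a GNN can exploit the community structure for classification, while a near-zero score flags a graph where structure-aware models offer little advantage.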
- Research Article
- 10.14778/3705829.3705846
- Oct 1, 2024
- Proceedings of the VLDB Endowment
Graph neural networks (GNNs) are a type of neural network capable of learning on graph-structured data. However, training GNNs on large-scale graphs is challenging due to iterative aggregations of high-dimensional features from neighboring vertices within sparse graph structures, combined with neural network operations. The sparsity of graphs frequently results in suboptimal memory access patterns and longer training time. Graph reordering is an optimization strategy aiming to improve the graph data layout. It has been shown to speed up graph analytics workloads, but its effect on the performance of GNN training had not yet been investigated. The generalization of reordering to GNN performance is nontrivial, as multiple aspects must be considered: GNN hyper-parameters such as the number of layers, the number of hidden dimensions, and the feature size used in the GNN model, neural network operations, large intermediate vertex states, and GPU acceleration. In our work, we close this gap by performing an empirical evaluation of 12 reordering strategies in two state-of-the-art GNN systems, PyTorch Geometric and Deep Graph Library. Our results show that graph reordering is effective in reducing training time for both CPU- and GPU-based training. Further, we find that GNN hyper-parameters influence the effectiveness of reordering, that reordering metrics play an important role in selecting a reordering strategy, that lightweight reordering performs better for GPU-based than for CPU-based training, and that the invested reordering time can in many cases be amortized.
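A minimal illustration of why reordering matters: relabelling vertices in BFS order (a lightweight strategy in the spirit of Cuthill-McKee, one of many kinds the study could have evaluated) shrinks the index distance between neighbours, a rough proxy for the memory locality of feature aggregation:

```python
from collections import deque

def avg_neighbour_distance(adj):
    """Locality proxy: mean |i - j| over directed edges; smaller values mean
    neighbouring feature rows sit closer together in memory."""
    total = cnt = 0
    for i, neigh in adj.items():
        for j in neigh:
            total += abs(i - j)
            cnt += 1
    return total / cnt

def reorder_bfs(adj, start=0):
    """Relabel vertices of a connected graph in BFS order from `start`."""
    order, seen, queue = [], {start}, deque([start])
    while queue:
        v = queue.popleft()
        order.append(v)
        for w in sorted(adj[v]):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    new_id = {old: new for new, old in enumerate(order)}
    return {new_id[v]: [new_id[w] for w in neigh] for v, neigh in adj.items()}

# A path graph whose labels scatter neighbours: 0-4-1-3-2.
adj = {0: [4], 4: [0, 1], 1: [4, 3], 3: [1, 2], 2: [3]}
before = avg_neighbour_distance(adj)          # -> 2.5
after = avg_neighbour_distance(reorder_bfs(adj))  # -> 1.0
```

On this toy path the BFS relabelling makes every edge connect adjacent indices; on real sparse graphs the effect is weaker but the same mechanism improves cache behaviour during aggregation.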
- Research Article
- 10.3390/app10155275
- Jul 30, 2020
- Applied Sciences
In recent years, the number of review texts on online travel review sites has increased dramatically, which has provided a novel source of data for travel research. Sentiment analysis is a process that can extract tourists’ sentiments regarding travel destinations from online travel review texts. The results of sentiment analysis form an important basis for tourism decision making. Thus far, there has been minimal concern as to how sentiment analysis methods can be effectively applied to improve the effect of sentiment analysis. However, online travel review texts are largely short texts characterized by uneven sentiment distribution, which makes it difficult to obtain accurate sentiment analysis results. Accordingly, in order to improve the sentiment classification accuracy of online travel review texts, this study transformed sentiment analysis into a multi-classification problem based on machine learning methods, and further designed a keyword semantic expansion method based on a knowledge graph. Our proposed method extracts keywords from online travel review texts and obtains the concept list of keywords through Microsoft Knowledge Graph. This list is then added to the review text to facilitate the construction of semantically expanded classification data. Our proposed method increases the number of classification features available for short texts by employing the huge corpus of information associated with the knowledge graph. In addition, this article introduces online travel review text preprocessing, keyword extraction, text representation, sampling, classification label establishment, and the selection and application of machine learning-based sentiment classification methods in order to build an effective sentiment classification model for online travel review text. Experiments were implemented and evaluated on the English review texts of four famous attractions in four countries on the TripAdvisor website. Our experimental results demonstrate that the method proposed in this paper can be used to effectively improve the accuracy of the sentiment classification of online travel review texts. Our research attempts to emphasize and improve the methodological relevance and applicability of sentiment analysis for future travel research.
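The expansion step can be sketched with a hypothetical concept lookup standing in for Microsoft Knowledge Graph queries; matched keywords contribute their concepts as extra terms, enlarging the feature set a downstream classifier sees for a short review:

```python
# Hypothetical concept map; a real system would query a knowledge graph instead.
CONCEPTS = {
    "louvre": ["museum", "attraction"],
    "queue": ["waiting line"],
    "mona lisa": ["painting", "artwork"],
}

def expand_review(text, concept_map):
    """Append the knowledge-graph concepts of every matched keyword to a
    short review, so concept terms become additional classification features."""
    extra = []
    lowered = text.lower()
    for keyword, concepts in concept_map.items():
        if keyword in lowered:
            extra.extend(concepts)
    return (text + " " + " ".join(extra)) if extra else text

expanded = expand_review("The Louvre queue was long", CONCEPTS)
```

The expanded text now shares the terms "museum" and "attraction" with other museum reviews even when the original short texts had no words in common, which is exactly what the method relies on to densify short-text features.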
- Research Article
- 10.20965/jaciii.2025.p0868
- Jul 20, 2025
- Journal of Advanced Computational Intelligence and Intelligent Informatics
Deep learning has achieved significant advancements in natural language processing. However, applying these methods to languages with complex morphological and syntactic structures—such as Russian—remains challenging. To address these challenges, this paper presents an optimized sentiment analysis model, GNN–BERT–AE, specifically designed for the Russian language. The model integrates graph neural networks (GNNs) with the contextualized embeddings of bidirectional encoder representations from transformers (BERT), enabling it to capture both syntactic dependencies and nuanced semantic information inherent in the Russian language. Whereas GNN excels in modeling the intricate word dependencies within the language, the contextualized representations of BERT provide a deep understanding of the text, improving the ability of the model to accurately interpret sentiments. The model further incorporates traditional feature extraction techniques—bag of words and term frequency–inverse document frequency—to preprocess text and emphasize critical features for sentiment analysis. To further enhance these features, a self-encoder clustering algorithm is employed, enabling the identification of latent patterns and improving the sensitivity of the model to subtle sentiment variations. The final phase of the model involves sentiment classification, categorizing emotions based on the enriched feature set. Experimental results showed that the GNN–BERT–AE model outperformed existing models—CNN–Transformer, RNN–LSTM–GRU, and Text–BiLSTM–CNN—on Russian social media datasets, achieving 1.25% to 3.1% accuracy improvements. These results highlight the robustness of the model and its significant potential for advancing sentiment analysis in the Russian language, particularly in handling complex linguistic features.
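Of the components listed, the TF-IDF preprocessing is standard enough to sketch; this uses a smoothed idf variant (idf = log(N/df) + 1), which may differ from the paper's exact weighting:

```python
import math
from collections import Counter

def tfidf(docs):
    """Term frequency-inverse document frequency vectors for tokenized docs,
    with smoothed idf = log(N / df) + 1 so ubiquitous terms keep weight > 0."""
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))  # document frequency: count each term once per doc
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: (cnt / len(doc)) * idf[t] for t, cnt in tf.items()})
    return vectors

# Toy corpus standing in for tokenized Russian reviews.
docs = [["good", "film", "good"], ["bad", "film"], ["good", "story"]]
vecs = tfidf(docs)
```

Rare, discriminative terms ("bad", "story") receive larger idf than common ones ("film", "good"), which is what makes the weighted features useful as input to the clustering and classification stages.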