- Research Article
- 10.1186/s43067-025-00282-1
- Nov 10, 2025
- Journal of Electrical Systems and Information Technology
- Balaji Magar + 4 more
Abstract Network congestion remains a critical challenge in modern communication systems, affecting performance, reliability, and Quality of Service (QoS). Traditional congestion control mechanisms often rely on reactive approaches, which may lead to inefficiencies in dynamic network environments. This paper proposes a Machine Learning (ML)-based predictive framework leveraging Graph Neural Networks (GNNs) to forecast network congestion before it occurs, enabling proactive traffic management. We model the network as a dynamic graph, where nodes represent routers/switches and edges denote communication links. By incorporating spatial and temporal dependencies, our GNN-based approach predicts congestion hotspots with high accuracy. We evaluate our framework on real-world network datasets, demonstrating superior performance compared to traditional methods (e.g., TCP congestion control) and other ML models (e.g., LSTMs, CNNs). Our results show a 15–25% improvement in prediction accuracy, leading to reduced latency and packet loss in simulated and real testbeds.
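The core idea of the GNN approach — aggregating each router's features with those of its neighbors before scoring congestion risk — can be sketched in miniature. This is an illustrative toy, not the authors' model: the node features (queue occupancy, link utilization), the fixed weights, and the 0.5 threshold are all hypothetical stand-ins for quantities a real GNN would learn.

```python
# Toy one-round neighbor aggregation on a network graph, the mechanism
# behind GNN-based congestion prediction. All values are hypothetical.

# Adjacency list: node -> connected nodes (routers/switches).
edges = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}

# Hypothetical per-node features: (queue_occupancy, link_utilization).
feats = {"A": (0.2, 0.3), "B": (0.8, 0.9), "C": (0.7, 0.6), "D": (0.1, 0.2)}

def aggregate(node):
    """Mean-aggregate a node's own features with its neighbors' features."""
    group = [feats[node]] + [feats[n] for n in edges[node]]
    return tuple(sum(f[i] for f in group) / len(group) for i in range(2))

def congestion_score(node, w=(0.5, 0.5)):
    """Weighted sum of aggregated features; fixed weights stand in for learned ones."""
    return sum(wi * fi for wi, fi in zip(w, aggregate(node)))

# Nodes whose aggregated score exceeds a (hypothetical) threshold.
hotspots = [n for n in edges if congestion_score(n) > 0.5]
```

A trained GNN replaces the fixed weights with learned message and readout functions, and stacks several such rounds to capture multi-hop spatial dependencies.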
- Research Article
- 10.1186/s43067-025-00283-0
- Nov 5, 2025
- Journal of Electrical Systems and Information Technology
- Rasha M Al-Makhlasawy + 2 more
Abstract The increasing demand for high-performance 5G networks has driven the adoption of Filter Bank Multicarrier (FBMC) as a superior alternative to traditional OFDM due to its enhanced spectral efficiency and reduced out-of-band emissions. However, FBMC systems face challenges in channel estimation and interference cancellation caused by non-orthogonal subcarriers. This paper proposes a novel Recurrent Neural Network (RNN)-based Joint Channel Estimation and Interference Cancellation (JCEIC) method that leverages Long Short-Term Memory (LSTM) networks to exploit temporal correlations in doubly-selective channels, enabling accurate channel estimation and effective interference mitigation with low computational complexity. Our simulations demonstrate that the proposed approach significantly reduces the Bit Error Rate (BER), outperforming conventional methods—particularly at low SNRs, where FBMC achieves a BER below 0.1 at just 5 dB—while approaching ideal channel performance. By combining optimized pilot placement with deep learning-driven interference cancellation, this work provides a robust and scalable solution for 5G and beyond, bridging the gap between theoretical advancements and practical deployment in next-generation wireless systems.
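The paper's LSTM exploits temporal correlation across symbols of a doubly-selective channel. As a much simpler stand-in (not the proposed method), a first-order recursive filter over noisy per-symbol pilot estimates illustrates what "exploiting temporal correlation" buys; the channel values and smoothing factor below are hypothetical.

```python
# Illustrative stand-in for temporal-correlation exploitation (NOT the
# paper's LSTM): exponential smoothing of noisy complex channel estimates.
def smooth_estimates(raw, alpha=0.6):
    """First-order recursive filter: h_t = alpha*raw_t + (1-alpha)*h_{t-1}."""
    out = []
    h = raw[0]  # initialize from the first estimate
    for r in raw:
        h = alpha * r + (1 - alpha) * h
        out.append(h)
    return out

# Hypothetical noisy estimates around a slowly varying true channel tap.
raw = [1.0 + 0.1j, 0.9 + 0.2j, 1.1 + 0.15j, 0.95 + 0.1j]
smoothed = smooth_estimates(raw)
```

Where this filter hard-codes one time constant, the LSTM learns the temporal structure of the doubly-selective channel from data, which is what enables accurate estimation at low SNR.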
- Research Article
- 10.1186/s43067-025-00269-y
- Oct 30, 2025
- Journal of Electrical Systems and Information Technology
- Archana Patnaik + 3 more
Abstract Background Sentiment analysis is the approach used to identify the variation in the emotional behavior of software developers during the product development lifecycle. Aim Our research aims to ensure software quality by considering code complexity as a measuring parameter, which may have a positive or negative effect on the product. Methodology To check the model's performance, we collected data from various real-time projects and predicted the amount and type of code smell. The developer's emotional score is evaluated using the SentiCR and SentiStrength tools. Result The Random Forest classifier achieved the highest accuracy of 96.3%, with a precision of 90.6%, recall of 89.2%, and F-measure of 88.4%. Long methods, large classes, comments, and dead code are detected in the code snippets. Daily sentiment analysis indicates that on Mondays developers tend to have positive emotions, which results in fewer code smells. Conclusion This study concludes by quantifying the code smells detected during the sentiment analysis of developers. Sentiment analysis provides a stronger correlation between emotion and software quality than traditional approaches.
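The study's central claim is a correlation between developer sentiment and code-smell counts. The relationship can be quantified with a standard Pearson correlation; the daily scores and smell counts below are hypothetical illustration data, not the study's dataset.

```python
# Hypothetical sketch: correlating daily developer sentiment with the
# number of code smells detected, the relationship the study quantifies.
from statistics import mean, stdev

sentiment = [0.8, 0.6, 0.1, -0.2, -0.5]   # hypothetical daily scores (Mon..Fri)
smells    = [2,   3,   5,    7,    9]     # hypothetical code-smell counts

def pearson(xs, ys):
    """Sample Pearson correlation coefficient of two equal-length series."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (len(xs) - 1)
    return cov / (stdev(xs) * stdev(ys))

r = pearson(sentiment, smells)  # strongly negative: better mood, fewer smells
```

A strongly negative coefficient on real project data would support the abstract's Monday finding: positive emotion coincides with fewer smells.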
- Research Article
- 10.1186/s43067-025-00275-0
- Oct 27, 2025
- Journal of Electrical Systems and Information Technology
- Saliha Mezzoudj + 2 more
Abstract In the era of big data, organizations face critical decisions when selecting between data lakes and data warehouses to meet their analytics requirements. This article presents a comprehensive comparative analysis of these two predominant data management architectures, emphasizing their structural differences, functional capabilities, and suitability for diverse analytics workloads. Data lakes offer scalable, cost-effective storage for raw, unstructured, and semi-structured data, supporting advanced analytics and machine learning applications. In contrast, data warehouses provide optimized, schema-on-write frameworks for fast querying and reliable reporting on structured data. Through detailed examination of architectural designs, integration with big data tools including Hadoop, Spark, and Kafka, and evaluations based on performance, scalability, cost, and governance, this paper provides organizations with evidence-based guidance to align their data strategies with business objectives. Case studies from healthcare and retail sectors illustrate practical implications of each approach, while emerging trends such as lakehouse architectures, AI integration, blockchain security, edge computing, and quantum computing highlight future directions. The findings support a hybrid data management solution that leverages the strengths of both data lakes and warehouses to enable robust, scalable, and innovative big data analytics.
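The key architectural contrast the comparison turns on — schema-on-write (warehouse) versus schema-on-read (lake) — can be shown in a few lines. This is a minimal conceptual sketch; the record, schema, and function names are hypothetical, not any product's API.

```python
# Minimal sketch of schema-on-write vs schema-on-read. Hypothetical data.
import json

SCHEMA = {"id": int, "amount": float}   # enforced up-front (warehouse style)

def write_warehouse(record):
    """Schema-on-write: validate before storing; nonconforming records fail."""
    for field, typ in SCHEMA.items():
        if not isinstance(record.get(field), typ):
            raise ValueError(f"bad field: {field}")
    return json.dumps(record)

def read_lake(raw):
    """Schema-on-read: store anything raw, impose structure only at query time."""
    record = json.loads(raw)
    return {"id": int(record.get("id", -1)),
            "amount": float(record.get("amount", 0.0))}

# The lake happily ingests loosely typed data and coerces it when read.
row = read_lake('{"id": "7", "amount": "19.9", "extra": "kept as-is"}')
```

The trade-off follows directly: the warehouse guarantees clean, queryable data at ingest cost, while the lake defers that cost (and risk) to read time — which is why the article's hybrid recommendation is attractive.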
- Research Article
- 10.1186/s43067-025-00281-2
- Oct 22, 2025
- Journal of Electrical Systems and Information Technology
- Festus A Omojowo
Abstract In the era of global health crises, social media has become both a mirror and amplifier of public opinion, influencing individual behaviours, policy responses, and the spread of (mis)information. Traditional monitoring techniques—such as surveys and focus groups—lack the timeliness, scalability, and granularity required for fast-moving health emergencies. This study presents a hybrid data mining framework that integrates sentiment analysis, topic modelling, and geolocation analytics to deliver a multidimensional view of pandemic-related public discourse. Approximately 57,000 COVID-19-related tweets extracted via the Twitter API are analysed using lexicon-based sentiment analysis tools (VADER and TextBlob), Latent Dirichlet Allocation (LDA) topic modelling, and Orange Data Mining’s Document Map geolocation feature to capture public sentiment, thematic structures, and geographic patterns. Results show a predominance of neutral sentiment (52.4%), with major topics including public health measures, vaccination discourse, and misinformation narratives. Geolocation mapping revealed regional variations in sentiment, particularly higher vaccine skepticism in certain countries. The integrated framework demonstrates a reproducible, user-friendly, and region-aware methodology for crisis informatics, offering actionable insights for policymakers and public health agencies.
The framework aligns with WHO infodemic management guidance and recent ethics recommendations, offering a practical, governance-ready model for health ministries and research institutions in low- and middle-income countries (LMICs) (WHO, in Social listening in infodemic management for public health: ethical guidance, World Health Organization, Geneva, 2025; Bhatt et al. in Public Health Rev 46:11. 10.3389/phrs.2025.00011, 2025 and Cascella et al. in Humanit Soc Sci Commun 12:76. 10.1057/s41599-025-04564-x, 2025).
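The lexicon-based sentiment step (VADER/TextBlob in the study) works by summing per-word valences. A minimal sketch of that principle follows; the tiny lexicon, thresholds, and tweets are hypothetical — the real tools use large, valence-weighted lexicons with negation and intensifier handling.

```python
# Minimal sketch of lexicon-based sentiment scoring (hypothetical lexicon).
LEXICON = {"safe": 1.0, "effective": 1.5, "hope": 1.0,
           "hoax": -2.0, "dangerous": -1.5, "fear": -1.0}

def sentiment(text):
    """Sum lexicon valences of the words present; unknown words score 0."""
    return sum(LEXICON.get(w, 0.0) for w in text.lower().split())

def label(score, threshold=0.5):
    """Map a raw score to positive / negative / neutral."""
    if score > threshold:
        return "positive"
    if score < -threshold:
        return "negative"
    return "neutral"

tweets = ["the vaccine is safe and effective",
          "this is a hoax and dangerous",
          "cases reported in the region today"]
labels = [label(sentiment(t)) for t in tweets]
```

Tweets dominated by out-of-lexicon words score near zero and land in the neutral bucket — one reason neutral sentiment (52.4% here) tends to dominate lexicon-scored health corpora.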
- Research Article
- 10.1186/s43067-025-00280-3
- Oct 21, 2025
- Journal of Electrical Systems and Information Technology
- Bhaskar Anand + 1 more
Abstract Speed bump detection is paramount for ensuring the safe and comfortable operation of autonomous vehicles while complying with traffic regulations. Detecting speed bumps well in advance enables timely brake application, ensuring a smooth travel experience for passengers in autonomous vehicles. These vehicles rely on a range of sensors for perception, including cameras, radar, stereo vision, and light detection and ranging (LiDAR). LiDAR, in particular, stands out for its ability to generate dense point clouds accurately capturing the geometry and depth of surrounding objects, providing unparalleled detail for robust perception systems. This paper introduces a novel technique for speed bump detection leveraging LiDAR data. The method capitalizes on the variance in Z-values between road surfaces and speed bumps, offering promising insights for enhancing road safety and passenger comfort. The proposed method underwent rigorous testing using a dataset collected within the IIT Hyderabad campus and demonstrated effective speed bump detection. With this system, speed bumps could be reliably detected up to a distance of 15 meters at a rate of approximately 18 frames per second. Moreover, the method’s integration potential into autonomous vehicles promises to contribute significantly to a seamless and safe journey for passengers. The successful implementation of this technique underscores its potential to enhance autonomous driving systems, providing vehicles with advanced perception capabilities to navigate complex road environments with heightened safety and comfort. Further research and development in this area hold promise for continued advancements in autonomous vehicle technology, paving the way for a future of safer and more efficient transportation.
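The abstract's key mechanism — the variance in Z-values distinguishing speed bumps from flat road — can be sketched as a grid-cell test. This is an illustrative toy, not the authors' implementation: the points, cell size, and variance threshold are hypothetical.

```python
# Hypothetical sketch: flag speed bumps from LiDAR (x, y, z) points by the
# variance of Z-values inside road-surface grid cells along x.
from collections import defaultdict
from statistics import pvariance

def bump_cells(points, cell=1.0, z_var_thresh=0.001):
    """Group points into cells along x; flag cells whose Z-variance exceeds
    the threshold (a flat road surface has near-zero Z-variance)."""
    cells = defaultdict(list)
    for x, y, z in points:
        cells[int(x // cell)].append(z)
    return {c for c, zs in cells.items()
            if len(zs) > 1 and pvariance(zs) > z_var_thresh}

# Flat road (z near 0) in cells 0-1; a bump profile (z rising to 0.12 m) in cell 2.
road = [(0.2, 0.0, 0.00), (0.7, 0.1, 0.01), (1.3, 0.0, 0.00), (1.8, 0.1, 0.01),
        (2.1, 0.0, 0.02), (2.4, 0.0, 0.10), (2.6, 0.1, 0.12), (2.9, 0.0, 0.03)]
bumps = bump_cells(road)
```

A real pipeline would first segment the ground plane from the dense point cloud and tune cell size and threshold against sensor noise, which is what allows reliable detection out to 15 m.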
- Research Article
- 10.1186/s43067-025-00279-w
- Oct 20, 2025
- Journal of Electrical Systems and Information Technology
- Maxwell Antwi + 5 more
Abstract Wireless sensor networks (WSNs) are increasingly targeted by malicious users who exploit resource scarcity to mask attack origins, rendering traditional IP traceback methods impractical due to excessive latency, low accuracy, and unacceptable overhead. In this paper, IP traceback is formulated as a quadratic unconstrained binary optimization (QUBO) problem and solved with quantum annealing via D-Wave hardware and hybrid solvers, integrated into an NS3 simulation environment. A comparative study with classical packet marking and probabilistic sampling regimes indicates that the quantum-enhanced model achieves 90% traceback success rates and a 5–10 percentage point false positive reduction with comparable latency and energy expenses. These results affirm that cyberphysical and IoT domains can benefit significantly from quantum annealing for attacker localization in WSNs, resulting in practical, low-overhead security solutions.
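A QUBO instance is just a dictionary of linear and quadratic coefficients over binary variables, minimized over all bit assignments. The tiny instance below is hypothetical (not the paper's formulation) and is solved by brute force; on D-Wave hardware the same coefficient dict would be handed to an annealing sampler instead.

```python
# Hypothetical 3-variable QUBO for attacker localization, solved by brute
# force. x_i = 1 means "node i is on the attack path". Weights are made up:
# negative diagonal terms reward nodes with strong traceback evidence,
# positive off-diagonal terms penalize inconsistent node pairs.
from itertools import product

# Minimize sum_{(i,j)} Q[i,j] * x_i * x_j  (diagonal entries are linear terms).
Q = {(0, 0): -1.0, (1, 1): -2.0, (2, 2): 0.5,
     (0, 1): -0.5, (1, 2): 1.0, (0, 2): 1.0}

def energy(x, Q):
    """QUBO objective for a bit assignment x."""
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

# Exhaustive search is fine at toy scale; annealing replaces it at WSN scale.
best = min(product((0, 1), repeat=3), key=lambda x: energy(x, Q))
```

The exponential cost of this exhaustive search as the node count grows is exactly what motivates offloading the minimization to a quantum annealer or hybrid solver.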
- Research Article
- 10.1186/s43067-025-00277-y
- Oct 20, 2025
- Journal of Electrical Systems and Information Technology
- Kwame Salum Ibwe
Abstract Massive multiple-input multiple-output (MIMO) technology is a key enabler for 5G and beyond networks, particularly in the evolution toward 6G. The integration of millimeter-wave (mmWave) and terahertz (THz) frequencies introduces a hybrid communication paradigm that encompasses both near-field and far-field propagation, posing significant challenges for accurate channel estimation. Traditional hybrid-field channel estimation methods rely on idealized assumptions of uniform scatterer distributions and stationary conditions. However, real-world environments feature irregular scatterer distributions and mixed mobility scenarios, which degrade the performance of existing techniques. This paper proposes the weighted dynamic hybrid-field simultaneous orthogonal matching pursuit (WDHF-SOMP) algorithm, an extension of AHF-SOMP, to address these limitations. The proposed algorithm introduces a weighted support selection strategy to model irregular scatterer distributions and dynamic mobility, enabling more accurate hybrid-field channel estimation. The WDHF-SOMP algorithm is evaluated under diverse conditions, including high path loss environments, multipath-rich channels, and interference scenarios. Simulation results demonstrate that WDHF-SOMP achieves normalized mean square error (NMSE) gains ranging from 0.3 dB in antenna scaling scenarios to 2 dB in irregular scatterer environments at 10 dB SNR. The gain holds across diverse scenarios with severe fading and pilot contamination. The findings demonstrate the effectiveness of WDHF-SOMP in enhancing spectral efficiency and robustness in massive MIMO systems, making it a promising solution for next-generation wireless networks.
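The "weighted support selection" idea — ranking candidate dictionary atoms by their correlation with the residual, scaled by prior weights that encode an irregular scatterer distribution — can be sketched in isolation. This is an illustrative fragment, not the WDHF-SOMP algorithm: the atoms, residual, and weights are hypothetical, and a full SOMP iteration would also update the residual via least squares.

```python
# Hypothetical sketch of weighted support selection, the core extension
# WDHF-SOMP adds over uniform greedy selection.
def weighted_select(residual, atoms, weights, k=2):
    """Pick the k atom indices with largest weighted |<atom, residual>|."""
    def score(i):
        corr = abs(sum(a * r for a, r in zip(atoms[i], residual)))
        return weights[i] * corr
    return sorted(range(len(atoms)), key=score, reverse=True)[:k]

atoms = [(1.0, 0.0, 0.0),   # atom 0
         (0.0, 1.0, 0.0),   # atom 1
         (0.0, 0.0, 1.0)]   # atom 2
residual = (0.9, 0.5, 0.4)
weights = (1.0, 0.5, 1.5)   # prior: scatterers likelier near atoms 0 and 2
support = weighted_select(residual, atoms, weights)
```

With uniform weights the selection would follow raw correlation alone; the prior lets atom 2 beat atom 1 despite a weaker correlation, which is how irregular scatterer distributions steer the support estimate.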
- Research Article
- 10.1186/s43067-025-00278-x
- Oct 17, 2025
- Journal of Electrical Systems and Information Technology
- Milad Rahmati + 1 more
Abstract The growing demand for ultra-high-speed electronics and communication systems has intensified the search for advanced modeling techniques to support the next generation of beyond-CMOS devices. Two-dimensional (2D) materials such as transition metal dichalcogenides (TMDs) and graphene have demonstrated exceptional electrical properties, making them strong candidates for terahertz (THz) transistors. However, accurately predicting device behavior, variability, and reliability remains challenging due to complex physical interactions at the nanoscale and the lack of robust, generalizable compact models. In this work, we propose a novel physics-informed machine learning (PIML) framework for compact modeling and variability prediction of 2D material-based THz transistors. By integrating fundamental semiconductor physics with data-driven neural network architectures, the proposed framework enhances prediction accuracy and model interpretability while maintaining computational efficiency. Extensive simulation experiments validate the framework using open-source device datasets and custom-generated synthetic data for 2D TMD transistors operating in the THz regime. Results demonstrate significant improvements over conventional empirical models in terms of prediction error, generalization across device geometries, and resilience to process-induced variability. This work bridges the gap between physics-based modeling and modern machine learning, providing a practical toolset for high-speed circuit designers. The proposed approach supports advanced design automation flows for emerging THz integrated circuits, contributing to the development of reliable, high-performance electronics for future wireless communication and computing infrastructures.
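The defining move of a physics-informed loss is adding a physics-consistency penalty to the ordinary data-fit term. The sketch below is a hypothetical toy (not the paper's framework): it assumes a square-law I-V relation and a monotonicity constraint as the "physics", with made-up sample points.

```python
# Hypothetical PIML-style loss: data MSE plus a penalty for violating a
# known device relation. Toy physics: I = k * max(V - Vt, 0)^2.
def physics_informed_loss(params, data, lam=0.1):
    """data: list of (V, I_measured); params: (k, Vt); lam weights physics."""
    k, vt = params
    def model(v):
        return k * max(v - vt, 0.0) ** 2   # toy saturation-region law

    # Data-fit term: mean squared error against measurements.
    data_term = sum((model(v) - i) ** 2 for v, i in data) / len(data)

    # Physics term: current must be non-decreasing in V (monotonicity).
    vs = sorted(v for v, _ in data)
    phys_term = sum(max(model(a) - model(b), 0.0) for a, b in zip(vs, vs[1:]))
    return data_term + lam * phys_term

samples = [(0.5, 0.0), (1.0, 0.25), (1.5, 1.0)]   # hypothetical I-V points
loss = physics_informed_loss((1.0, 0.5), samples)
```

In the paper's setting the analytic toy law is replaced by a neural network, and the physics residual is differentiated through during training — that coupling is what improves generalization across device geometries.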
- Research Article
- 10.1186/s43067-025-00276-z
- Oct 13, 2025
- Journal of Electrical Systems and Information Technology
- Franco Osei-Wusu + 2 more
Abstract The CKKS scheme supports secure approximate arithmetic on encrypted real-valued data, but its performance suffers when input vectors are not of power-of-two length. We propose Power-of-Two CKKS (P2P-CKKS), a variant that automatically pads input vectors with zeros up to the next power of two. This padding prevents overflow and other error conditions, enabling efficient Fast Fourier Transform (FFT) operations for polynomial arithmetic. Our experiments show that P2P-CKKS maintains the accuracy of computations on the original, unpadded data while substantially improving computational efficiency. Importantly, even when inputs are already powers of two, P2P-CKKS matches or exceeds the execution speed of standard CKKS. In our tests, P2P-CKKS achieved a 100% success rate across all examined vector sizes, demonstrating robust scalability. These results suggest that adaptive zero-padding is a straightforward but effective strategy for improving the efficiency of encrypted computation.
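The padding step itself is simple to state precisely. This sketch shows only the plaintext-side idea — extend a vector to the next power-of-two length so FFT-based polynomial arithmetic applies directly; the function names are illustrative, not the scheme's API.

```python
# Minimal sketch of the zero-padding idea behind P2P-CKKS (plaintext side).
def next_power_of_two(n):
    """Smallest power of two >= n, for n >= 1."""
    return 1 << (n - 1).bit_length()

def pad_to_power_of_two(vec):
    """Zero-pad vec up to the next power-of-two length; no-op if already one."""
    target = next_power_of_two(len(vec))
    return vec + [0.0] * (target - len(vec))

padded = pad_to_power_of_two([1.5, -2.0, 3.25])   # length 3 -> length 4
```

Because the appended slots are exact zeros, homomorphic slot-wise additions and multiplications on the padded region contribute nothing, which is consistent with the abstract's claim that accuracy on the original entries is unchanged.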