  • Research Article
  • 10.21917/ijct.2025.0556
ENHANCED SECURE FEDERATED LEARNING FRAMEWORK FOR RELIABLE HEALTHCARE WIRELESS SENSOR NETWORKS
  • Dec 1, 2025
  • ICTACT Journal on Communication Technology
  • Thomas Abraham J V + 1 more

The rapid integration of wireless sensor networks in healthcare monitoring has created strong opportunities for continuous patient assessment. However, the distributed nature of these networks has exposed sensitive medical data to significant privacy and security risks. Traditional centralized learning models have struggled to protect patient information, particularly when the data has been transmitted across heterogeneous devices. This study addressed these concerns by evaluating an enhanced secure federated learning framework that reduced communication overhead and strengthened protection against model-level threats. The problem emerged when conventional federated models failed to defend aggregated parameters against inference attacks that targeted the intermediates shared during training. To overcome this limitation, the proposed system integrated authenticated encryption, differential privacy, and a lightweight blockchain layer that supported tamper-proof logging. The method followed a three-stage design that included secure client selection, privacy-preserved gradient updates, and decentralized model validation. The wireless nodes operated with an adaptive update schedule that minimized energy use while maintaining stable model convergence. The evaluation demonstrates that the proposed secure federated learning framework achieves a classification accuracy of 96.0%, outperforming Encrypted Aggregation FL (93.0%), Differential Privacy FL (90.2%), and Blockchain-Assisted FL (94.2%). The communication cost is reduced to 17.2 MB from 22.0 MB, 18.1 MB, and 23.5 MB, respectively. Energy consumption per node is lowered to 1.95 J, compared to 2.45 J, 2.68 J, and 2.63 J in the existing methods. The system achieves a privacy preservation score of 0.94, higher than 0.75–0.87 in baseline approaches, and maintains strong model robustness at 94.2% under adversarial conditions. These results validate that the proposed framework provides reliable, energy-efficient, and secure federated learning suitable for real-time healthcare monitoring applications.
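As a minimal illustration of the differential-privacy step such a framework builds on, the sketch below clips each client's gradient, adds Gaussian noise, and averages the noisy local models on the server. The model size, clipping norm, and noise scale are placeholder assumptions, and the paper's authenticated-encryption and blockchain layers are not modeled here.

```python
# Hedged sketch of one federated round with per-client differential privacy.
import numpy as np

def dp_client_update(weights, grad, clip_norm=1.0, noise_std=0.05, lr=0.1):
    """Clip the local gradient and add Gaussian noise before sharing."""
    norm = np.linalg.norm(grad)
    grad = grad * min(1.0, clip_norm / (norm + 1e-12))          # L2 clipping
    grad = grad + np.random.normal(0.0, noise_std, grad.shape)  # DP noise
    return weights - lr * grad                                  # noisy local step

def fed_avg(client_models):
    """Server-side aggregation: plain average of the client models."""
    return np.mean(client_models, axis=0)

# Toy round: 5 clients updating a 10-parameter global model.
global_w = np.zeros(10)
local = [dp_client_update(global_w, np.random.randn(10)) for _ in range(5)]
global_w = fed_avg(local)
```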

  • Research Article
  • 10.21917/ijct.2025.0553
ADAPTIVE EDGE-ASSISTED FRAMEWORK FOR LOW-LATENCY EMERGENCY COMMUNICATIONS IN DISASTER ZONES
  • Dec 1, 2025
  • ICTACT Journal on Communication Technology
  • Manjula Pattnaik + 1 more

The rapid collapse of conventional communication networks during large-scale disasters has often created severe delays in emergency response. Communities have faced life-threatening conditions when the damaged infrastructure restricted timely coordination. This study addressed that challenge by designing an adaptive edge-assisted framework that reduced end-to-end latency during crisis operations. The background of this work focused on how earlier systems relied on centralized cloud servers, which introduced long routing paths and unstable links under stress. Such limitations have often lowered reliability when first responders needed immediate access to situational information. The problem became more critical when dynamic environmental changes forced devices to operate under intermittent connectivity. These disruptions have often prevented smooth message flow across the network. To overcome this gap, the proposed method introduced an integrated architecture that placed intelligence at the edge nodes. The system used a lightweight scheduling module that coordinated the data flow based on link quality and congestion. A context-aware routing unit handled real-time traffic while maintaining continuity for life-saving alerts. The design also used a local caching layer that stored relevant updates during temporary link failures. The evaluation demonstrates that the framework achieves end-to-end delay reduction to 55–67 ms, compared to 105–180 ms for existing methods. The packet delivery ratio reaches 96.5–98.5%, surpassing UAV-assisted relay (85–92%), delay-tolerant networking (75–80%), and fog-based architecture (90–94%). The throughput improves to 9.1–10.2 Mbps, while caching efficiency reaches 92–95%, indicating robust message continuity during temporary link failures. Additionally, energy consumption is reduced to 9.5–10.5 J, reflecting optimized edge processing. These results validate that the framework significantly enhances responsiveness, reliability, and energy efficiency, offering a practical solution for disaster-affected areas.
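The scheduling idea, choosing a next hop from link quality and congestion while life-saving alerts jump the queue, can be sketched as follows. The `Link` fields and the scoring weight are illustrative assumptions, not the authors' specification.

```python
# Hedged sketch of link-quality/congestion-aware edge scheduling.
from dataclasses import dataclass

@dataclass
class Link:
    node_id: str
    quality: float     # 0..1, e.g. normalized SNR
    congestion: float  # 0..1, e.g. queue occupancy

def pick_next_hop(links, alpha=0.7):
    """Score candidates: reward link quality, penalize congestion."""
    return max(links, key=lambda l: alpha * l.quality - (1 - alpha) * l.congestion)

def enqueue(queue, msg, life_saving=False):
    """Life-saving alerts go to the head of the send queue."""
    if life_saving:
        queue.insert(0, msg)
    else:
        queue.append(msg)

links = [Link("edge-1", 0.9, 0.6), Link("edge-2", 0.7, 0.1)]
print(pick_next_hop(links).node_id)  # "edge-2": the less congested path wins
```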

  • Research Article
  • 10.21917/ijct.2025.0552
AI-ENHANCED CHANNEL ESTIMATION TECHNIQUES FOR SCALABLE MASSIVE IOT SMART-CITY NETWORKS
  • Dec 1, 2025
  • ICTACT Journal on Communication Technology
  • Sudhir Reddy N + 1 more

The rapid growth of smart-city infrastructures has created an environment in which massive IoT deployments operate across dense, heterogeneous wireless networks. As device density increased, communication channels often experienced severe interference, unpredictable fading, and high noise levels that collectively limited estimation accuracy. Traditional estimation techniques relied on linear models that struggled to track the dynamic channel conditions of large-scale IoT environments. This scenario established the core problem: existing estimators did not maintain reliable performance when network density surged or when devices transmitted sporadic traffic. To address this, the study proposed an AI-driven channel estimation framework that leveraged deep learning to extract latent channel characteristics from limited pilot signals. The method incorporated a hybrid convolutional–recurrent design that captured spatial variations while it tracked temporal fluctuations of each channel. The system also included an adaptive refinement block that improved estimation accuracy when pilot contamination occurred. The architecture was trained with synthetic and real-world datasets that represented typical smart-city IoT deployments, including traffic sensors, utility meters, and environmental monitoring nodes operating under mixed mobility patterns. The evaluation demonstrates that the proposed framework consistently outperforms conventional estimators. The method achieves a 6.2% NMSE at 100 epochs compared with 10.4% for MMSE and 8.2% for CS, and reduces MAE to 4.0% compared with 7.2% for MMSE. Spectral efficiency increases to 6.9 bps/Hz, while pilot overhead is reduced by 25%, outperforming baseline methods. Computational time remains practical at 3.6 ms per batch, confirming that the AI-assisted estimation effectively enhances reliability and efficiency in large IoT smart-city deployments.
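A hybrid convolutional-recurrent estimator of the general shape this abstract describes can be sketched in PyTorch as below; the layer sizes, pilot count, and tensor layout are assumptions, not the paper's architecture.

```python
# Sketch: Conv1d captures spatial structure across pilots, GRU tracks time.
import torch
import torch.nn as nn

class ConvGRUEstimator(nn.Module):
    def __init__(self, n_pilots=16, hidden=32):
        super().__init__()
        # Convolution over the real/imaginary pilot channels.
        self.conv = nn.Sequential(
            nn.Conv1d(2, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
        )
        # GRU follows temporal fluctuation across successive pilot slots.
        self.gru = nn.GRU(input_size=16 * n_pilots, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, 2 * n_pilots)  # re/im channel estimate

    def forward(self, x):                  # x: (batch, time, 2, n_pilots)
        b, t, c, p = x.shape
        feats = self.conv(x.reshape(b * t, c, p)).reshape(b, t, -1)
        out, _ = self.gru(feats)
        return self.head(out)              # (batch, time, 2 * n_pilots)

est = ConvGRUEstimator()
y = est(torch.randn(4, 8, 2, 16))          # 4 sequences, 8 pilot slots each
print(y.shape)                              # torch.Size([4, 8, 32])
```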

  • Research Article
  • 10.21917/ijct.2025.0536
DEEP LEARNING-ENHANCED COMPRESSIVE SENSING FOR EFFICIENT REAL TIME IOT SIGNAL RECONSTRUCTION
  • Sep 1, 2025
  • ICTACT Journal on Communication Technology
  • Sangeetha K + 1 more

The rapid expansion of the Internet of Things (IoT) has created massive volumes of sensor-generated data that require efficient transmission and real-time reconstruction. Traditional signal processing approaches often fall short in balancing compression efficiency, reconstruction accuracy, and low latency. Compressive Sensing (CS) has emerged as a promising technique to address these challenges, but its performance in real-world IoT environments is limited by high computational costs and reconstruction delays. To overcome these barriers, this work proposes a deep learning-assisted compressive sensing framework that integrates neural networks with classical CS methods for efficient signal recovery. The approach leverages a convolutional autoencoder to learn robust feature representations from sparse measurements, enabling faster and more accurate reconstruction of IoT signals. Experiments conducted on benchmark IoT datasets demonstrate significant improvements in both recovery accuracy and speed compared to conventional CS algorithms. The proposed framework achieves higher peak signal-to-noise ratio (PSNR) and reduced mean squared error (MSE), while also lowering reconstruction latency, making it well-suited for real-time IoT applications such as smart healthcare, environmental monitoring, and industrial automation. Thus, this study highlights the synergy between deep learning and compressive sensing, offering a scalable and practical solution to meet the growing demands of IoT signal processing.
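The core pipeline, random compressive measurements followed by a learned decoder, can be sketched as below. The signal length, measurement count, and network depth are illustrative assumptions, not the paper's exact autoencoder.

```python
# Sketch: compressive measurements y = Phi @ x, learned reconstruction.
import torch
import torch.nn as nn

N, M = 256, 64                       # signal length, number of measurements
phi = torch.randn(M, N) / M**0.5     # fixed random sensing matrix

class CSDecoder(nn.Module):
    """Maps sparse measurements back to the full-length signal."""
    def __init__(self):
        super().__init__()
        self.lift = nn.Linear(M, N)                  # coarse linear inverse
        self.refine = nn.Sequential(                 # convolutional refinement
            nn.Conv1d(1, 16, 5, padding=2), nn.ReLU(),
            nn.Conv1d(16, 1, 5, padding=2),
        )

    def forward(self, y):
        x0 = self.lift(y).unsqueeze(1)      # (batch, 1, N)
        return self.refine(x0).squeeze(1)   # (batch, N)

x = torch.randn(8, N)                       # a batch of IoT signals
y = x @ phi.T                               # compressive measurements
x_hat = CSDecoder()(y)
print(nn.functional.mse_loss(x_hat, x))     # training would minimize this
```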

  • Research Article
  • 10.21917/ijct.2025.0533
SIMULATION AND ANALYSIS OF A PASSIVE UPLINK-TRIGGERED LTE JAMMER USING PLL FREQUENCY CONTROL
  • Sep 1, 2025
  • ICTACT Journal on Communication Technology
  • Wilson Tchounna Tsabgou + 1 more

This study proposes the design and simulation of a low-power analog jammer that selectively targets LTE downlink signals based on real-time uplink detection. The system architecture integrates a field strength detection unit, a PLL-controlled frequency sweeper, and a jamming signal generator using Zener-based noise injection and RF mixing via SA612A ICs. Simulations conducted in Proteus and MATLAB/Simulink validated the functional blocks, demonstrating accurate uplink detection, stable frequency synthesis, and effective jamming performance. Key results include spectral spreading between 21–33 dBJE, severe signal distortion, and bit-error rates exceeding 80% under interference conditions. While manual tuning and regulatory limitations constrain immediate deployment, the proposed solution offers a scalable foundation for controlled civilian use. The findings support future development of digitally enhanced, multi-band jamming systems tailored for educational or security-sensitive settings.
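Purely as a numerical illustration (the paper's design is analog and simulated in Proteus and MATLAB/Simulink), the sketch below models a swept carrier with noise modulation, a digital stand-in for the PLL sweeper plus Zener-noise injection. The sample rate and sweep band are scaled-down placeholders, not LTE frequencies.

```python
# Illustrative numpy model only, not the analog design described above.
import numpy as np

fs = 1e6                                    # sample rate (Hz), assumption
t = np.arange(0, 0.01, 1 / fs)              # 10 ms simulation window
f0, f1 = 50e3, 150e3                        # sweep band (placeholder values)

# A linear chirp stands in for the PLL-controlled frequency sweep.
f_inst = f0 + (f1 - f0) * (t / t[-1])       # instantaneous frequency
phase = 2 * np.pi * np.cumsum(f_inst) / fs  # integrate frequency -> phase
carrier = np.cos(phase)

noise = np.random.randn(t.size)             # stand-in for the Zener noise source
jam = carrier * (1 + 0.5 * noise)           # noise-modulated jamming signal
print(jam[:5])
```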

  • Research Article
  • 10.21917/ijct.2025.0546
AN UNSUPERVISED APPROACH FOR DETECTION OF ENCRYPTED IOT ANOMALIES USING VARIATIONAL AUTOENCODER AND ISOLATION FOREST TECHNIQUES
  • Sep 1, 2025
  • ICTACT Journal on Communication Technology
  • Sukanya N + 1 more

Traditional network detection methods are no longer effective in detecting breaches, given the rapid growth of encrypted IoT traffic. This article proposes an unsupervised anomaly detection technique that uses flow-based data from encrypted network traffic and a hybrid model combining a Variational Autoencoder (VAE) and Isolation Forest. The proposed approach is thoroughly tested on the CICIoT2023 dataset, which provides a wide range of encrypted IoT traffic scenarios, and is trained only on benign traffic to simulate realistic novel-attack situations. Unlike previous research, which usually concentrates on detecting a particular attack type, our approach aims to generalize across many threats. Its breadth is demonstrated by its ability to accurately identify four main attack categories: DDoS HTTP Flood, Browser Hijacking, Backdoor Malware, and SQL Injection. With an F1-score of 0.55 and an AUC of 0.8947 for anomaly detection, the hybrid VAE + Isolation Forest model exceeds the standard models used in prior research. The approach is flexible, reliable, and fully unsupervised, making it suitable for real-time encrypted-traffic applications. Future research will extend it to session-based adaptive learning and multi-class attack classification.
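A hedged sketch of the hybrid detector follows: a small VAE is trained on benign flow features, an Isolation Forest is fitted on its latent codes, and the two scores are fused. The dimensions, training length, and fusion rule are assumptions, not the paper's configuration.

```python
# Sketch: VAE on benign flows + Isolation Forest on latent codes.
import torch
import torch.nn as nn
from sklearn.ensemble import IsolationForest

D, Z = 20, 4                                 # flow-feature dim, latent dim

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Linear(D, 2 * Z)       # outputs mean and log-variance
        self.dec = nn.Linear(Z, D)

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparam trick
        return self.dec(z), mu, logvar

vae = VAE()
benign = torch.randn(512, D)                 # stands in for benign flow features
opt = torch.optim.Adam(vae.parameters(), lr=1e-3)
for _ in range(200):                         # brief training loop on benign data
    xh, mu, logvar = vae(benign)
    kl = -0.5 * (1 + logvar - mu**2 - logvar.exp()).sum(-1).mean()
    loss = nn.functional.mse_loss(xh, benign) + 1e-3 * kl
    opt.zero_grad(); loss.backward(); opt.step()

with torch.no_grad():
    _, mu, _ = vae(benign)
forest = IsolationForest(random_state=0).fit(mu.numpy())

def anomaly_score(x):
    """Higher = more anomalous: reconstruction error minus isolation score."""
    with torch.no_grad():
        xh, mu, _ = vae(x)
        recon = ((xh - x) ** 2).mean(-1).numpy()
    return recon - forest.score_samples(mu.numpy())  # fusion rule is an assumption

print(anomaly_score(torch.randn(3, D)))
```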

  • Research Article
  • 10.21917/ijct.2025.0535
ENERGY-AWARE DATA AGGREGATION IN WIRELESS SENSOR NETWORKS THROUGH HYBRID DEEP REINFORCEMENT LEARNING
  • Sep 1, 2025
  • ICTACT Journal on Communication Technology
  • Ramdas D Gore + 1 more

Wireless Sensor Networks (WSNs) play a critical role in environmental monitoring, healthcare, disaster management, and smart infrastructure. However, the limited energy resources of sensor nodes remain a pressing challenge, particularly in data aggregation and transmission processes, where redundancy and inefficient routing can significantly shorten network lifetime. To address this problem, we propose a Hybrid Deep Reinforcement Learning (HDRL) framework that optimizes data aggregation while balancing energy consumption and communication overhead. The method integrates the decision-making capability of reinforcement learning with the representational power of deep neural networks, enabling adaptive node selection and dynamic routing based on real-time energy and network states. The proposed HDRL model employs a dual-agent mechanism: the first agent focuses on cluster head selection for balanced energy distribution, while the second agent optimizes multi-hop routing paths to minimize redundant transmissions. A reward function is designed to jointly consider residual energy, data latency, and transmission reliability. Simulation results show that the HDRL-based approach outperforms traditional clustering and reinforcement learning methods in terms of network lifetime extension, reduced packet loss, and improved throughput. Notably, the proposed method achieves up to 30% improvement in energy efficiency and 25% reduction in end-to-end delay, making it highly suitable for large-scale, real-time WSN applications.
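The joint reward the abstract mentions can be illustrated with a short function over residual energy, latency, and delivery ratio; the weights and normalization below are assumptions, not the paper's values.

```python
# Hedged sketch of a reward jointly weighing energy, latency, reliability.
def reward(residual_energy, latency_ms, delivery_ratio,
           w_e=0.4, w_l=0.3, w_r=0.3, max_latency_ms=200.0):
    """Return a scalar reward in roughly [0, 1] for one aggregation step."""
    energy_term = residual_energy                     # fraction of battery left, 0..1
    latency_term = 1.0 - min(latency_ms / max_latency_ms, 1.0)
    return w_e * energy_term + w_l * latency_term + w_r * delivery_ratio

# A cluster head with 60% battery, 40 ms latency, 98% delivery:
print(reward(0.60, 40.0, 0.98))  # 0.4*0.6 + 0.3*0.8 + 0.3*0.98 = 0.774
```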

  • Research Article
  • 10.21917/ijct.2025.0543
HYBRID HAAR CASCADE AND CNN+EDL FRAMEWORK FOR ROBUST FACIAL EXPRESSION RECOGNITION IN HUMAN–COMPUTER INTERACTION
  • Sep 1, 2025
  • ICTACT Journal on Communication Technology
  • Shanthakumari R + 3 more

Facial Expression Recognition (FER) has emerged as a crucial component in Human–Computer Interaction (HCI), enabling applications in healthcare, education, surveillance, and social robotics. Despite considerable progress, achieving robust FER in unconstrained environments remains challenging due to variations in illumination, pose, occlusion, and intra-class similarity. Conventional approaches relying solely on handcrafted features or deep learning often suffer from redundancy in extracted features, sensitivity to noise, and suboptimal performance on subtle emotions such as fear and disgust. These limitations hinder their deployment in real-world, dynamic HCI scenarios where reliability and generalization are essential. This work proposes a hybrid FER framework that integrates Haar Cascade-based feature localization with a Convolutional Neural Network augmented by Evidential Deep Learning (CNN+EDL). Preprocessing stages include image resizing, grayscale conversion, histogram equalization, Gaussian smoothing, face alignment, and normalization. Haar Cascade is employed to extract primary Regions of Interest (eyes, nose, mouth), reducing computational overhead and focusing learning on salient features. These features are then classified using CNN+EDL, which leverages uncertainty modeling and adaptive optimization to improve classification robustness. Experimental evaluations conducted on the FER2013 dataset demonstrate that the proposed model consistently outperforms conventional CNN, ResNet-34, MobileNet V1, EJH-CNN-BiLSTM, and DCNN-Autoencoder baselines. At 100 epochs, CNN+EDL achieves the highest accuracy (97.1%), precision (95.6%), recall (94.5%), and F1-score (94.9%), surpassing the closest baseline by 3–5%. Emotion-wise performance is also superior, with accuracy values of 96.2% (Happy), 94.1% (Sad), 91.3% (Disgust), 90.2% (Fear), 93.5% (Angry), 95.6% (Surprise), and 94.4% (Neutral). These results highlight the system’s generalization ability, particularly for complex emotions.
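The detection-and-preprocessing stage maps directly onto OpenCV's bundled Haar cascades, as in the sketch below; the 48x48 crop size (FER2013-style) and the detector parameters are assumptions, and the CNN+EDL classifier itself is omitted.

```python
# Sketch of Haar Cascade ROI extraction + the preprocessing steps listed above.
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def preprocess_faces(img_bgr):
    """Return normalized 48x48 face crops ready for a downstream classifier."""
    gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)   # grayscale conversion
    gray = cv2.equalizeHist(gray)                      # histogram equalization
    gray = cv2.GaussianBlur(gray, (3, 3), 0)           # Gaussian smoothing
    crops = []
    for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.1, 5):
        roi = cv2.resize(gray[y:y + h, x:x + w], (48, 48))  # resizing
        crops.append(roi.astype(np.float32) / 255.0)        # normalization
    return crops

# faces = preprocess_faces(cv2.imread("frame.jpg"))
```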

  • Research Article
  • 10.21917/ijct.2025.0539
AI POWER ALLOCATION AND USER FAIRNESS IN 6G NOMA NETWORKS USING MACHINE LEARNING
  • Sep 1, 2025
  • ICTACT Journal on Communication Technology
  • Vijayaraghavan N + 1 more

The evolution toward sixth-generation (6G) communication systems demands advanced multiple access techniques capable of meeting stringent requirements for massive connectivity, ultra-low latency, and high spectral efficiency. Non-Orthogonal Multiple Access (NOMA) has emerged as a promising candidate, enabling simultaneous access for multiple users by sharing the same frequency resources with different power levels. However, efficient power allocation and ensuring fairness among users remain critical challenges. Traditional optimization-based methods often face high computational complexity and limited adaptability to dynamic environments, making them less suitable for real-time applications. This study introduces an AI-driven framework for power allocation and fairness optimization in NOMA-enabled 6G networks. The proposed method employs machine learning models to predict optimal power allocation strategies by learning from dynamic user distributions, channel state information, and traffic demands. Unlike conventional schemes, the AI model adaptively balances system throughput and user fairness, reducing the risk of resource monopolization by users with favorable channel conditions. Experimental evaluations demonstrate that the proposed framework achieves up to 18% improvement in spectral efficiency and 22% better fairness index compared to conventional water-filling and heuristic-based allocation methods. Additionally, the machine learning approach reduces computation time by nearly 30%, making it viable for real-time deployment in ultra-dense 6G environments. These results highlight the potential of integrating AI with NOMA to enhance the robustness and intelligence of next-generation communication systems.
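Two quantities such an allocator trades off, per-user NOMA rates under successive interference cancellation and Jain's fairness index, can be computed as below for a two-user toy case; the channel gains and power split are illustrative assumptions.

```python
# Worked sketch: two-user downlink NOMA rates (SIC) and Jain's fairness index.
import numpy as np

def noma_rates(p_weak, p_strong, g_weak=0.2, g_strong=1.0, noise=0.01):
    """Weak user treats the strong user's signal as interference;
    the strong user cancels the weak user's signal first (SIC)."""
    r_weak = np.log2(1 + p_weak * g_weak / (p_strong * g_weak + noise))
    r_strong = np.log2(1 + p_strong * g_strong / noise)
    return r_weak, r_strong

def jain_index(rates):
    """1.0 = perfectly fair; 1/n = one user takes everything."""
    r = np.asarray(rates)
    return r.sum() ** 2 / (len(r) * (r ** 2).sum())

rates = noma_rates(p_weak=0.8, p_strong=0.2)   # more power to the weak user
print(rates, jain_index(rates))
```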

  • Research Article
  • 10.21917/ijct.2025.0537
POST-QUANTUM CRYPTOGRAPHY FOR SECURE 5G AND IOT: LATTICE-BASED ENCRYPTION SCHEMES
  • Sep 1, 2025
  • ICTACT Journal on Communication Technology
  • Poomani M + 1 more

The impending advent of cryptographically relevant quantum computers threatens classical public-key primitives that underpin 5G and IoT security, including key exchange, authentication, and device onboarding. Ultra-dense networks, constrained endpoints, and long device lifetimes heighten exposure to “harvest-now, decrypt-later” risks. Mobile operators and IoT platform providers need migration-ready cryptography that fits radio-access latency budgets, scales to billions of low-power nodes, and integrates cleanly with 3GPP and IETF protocols without degrading quality of service. Many post-quantum options impose prohibitive bandwidth and compute costs or lack deployment guidance tuned to network slices and massive machine-type communications. We propose a lattice-based encryption and key-encapsulation framework grounded in Module-LWE/LWR assumptions. The design pairs an IND-CCA-secure KEM for control-plane bootstrapping with lightweight AEAD for user-plane data, delivered through a hybrid handshake combining classical ECDH with a post-quantum KEM to ensure continuity during transition. Parameter tiers align with eMBB, URLLC, and mMTC device classes. Implementation emphasizes constant-time polynomial arithmetic, NTT-accelerated convolution, centered-binomial noise sampling, public-key compression, and stateless hash-based signatures for attestation. A gNB-assisted enrollment workflow and session-key rotation via 5G NAS/RRC are specified. Analytical modeling and prototype measurements indicate sub-millisecond encapsulation on ARM Cortex-M33 microcontrollers and ~1.5 ms on RAN baseband paths, while handshake message growth remains within existing NAS and RRC budgets. In ns-3 simulations of dense mMTC topologies, the hybrid handshake achieves >99.99% success under 1% packet loss, and energy profiling shows <5% battery impact for weekly rekeying. Security analysis demonstrates resistance to known lattice attacks at NIST Levels 3–5, forward secrecy via ephemeral KEMs, downgrade resistance through authenticated algorithm negotiation, and post-compromise security with frequent rekeying.
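The hybrid-handshake idea, deriving one session key from both a classical ECDH secret and a post-quantum KEM secret, can be sketched with the Python `cryptography` package; the PQ secret below is a random placeholder standing in for an ML-KEM shared secret, and the HKDF parameters are illustrative.

```python
# Sketch of hybrid key derivation: X25519 secret + PQ KEM secret -> HKDF.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Classical half: ephemeral ECDH between two parties.
a, b = X25519PrivateKey.generate(), X25519PrivateKey.generate()
ecdh_secret = a.exchange(b.public_key())

# Post-quantum half: placeholder bytes for an ML-KEM encapsulation result.
pq_secret = os.urandom(32)

# The session key depends on both halves, so breaking one is not enough.
session_key = HKDF(
    algorithm=hashes.SHA256(), length=32, salt=None,
    info=b"hybrid-handshake-demo",
).derive(ecdh_secret + pq_secret)
print(session_key.hex())
```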