  • Research Article
  • 10.21917/ijct.2025.0524
ADAPTIVE MULTI-FACTOR CO-EVOLUTIONARY ALGORITHM WITH LOCAL SEARCH FOR EFFICIENT CLUSTER HEAD SELECTION IN WIRELESS SENSOR NETWORKS
  • Jun 1, 2025
  • ICTACT Journal on Communication Technology
  • Minal Sudarshan Hardas + 1 more

Wireless sensor networks (WSNs) are central to many applications, including environmental monitoring, military surveillance, and smart-city development. A core challenge in WSNs is managing energy intelligently so that the network operates for as long as possible. Cluster-based topologies are an effective way to optimize energy use: a subset of nodes, known as Cluster Heads (CHs), handles communication with the base station and the other sensor nodes. CH selection remains difficult because WSNs evolve continually and many factors must be balanced, such as energy, coverage, and node density. Traditional CH-selection approaches rely on static algorithms or single-factor optimization and therefore perform poorly on heterogeneous, dynamic WSNs. Improving network stability, energy efficiency, and lifetime requires an algorithm that adapts quickly to network changes while weighing several criteria at once. This study presents the Adaptive Multi-Factor Co-Evolutionary Algorithm with Local Search (AMCE-LS). The approach uses a co-evolutionary framework that scores nodes against a set of adaptive fitness criteria: coverage, intra-cluster distance, node degree, and residual energy. A local-search refinement step further improves the CH selection, accelerating convergence and raising solution quality. Because the adaptive mechanism monitors the network in real time, it can adjust the weighting factors on the fly. In evaluations of network lifetime, energy efficiency, and packet delivery ratio, the proposed AMCE-LS approach outperforms earlier algorithms such as LEACH, PSO, and DEEC.
In dense node deployments, the adaptive multi-factor technique can extend network lifetime and stability by up to 30%.
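
The kind of multi-factor scoring the abstract describes can be illustrated with a small sketch. The factor definitions, normalizations, and weights below are assumptions for illustration, not the AMCE-LS formulation from the paper:

```python
import math

def ch_fitness(node, neighbors, bs_pos, weights):
    """Score a node's suitability as cluster head from several factors.

    Illustrative only: the factors and their normalizations are
    assumptions, not the paper's actual fitness function.
    """
    w_energy, w_degree, w_dist, w_cov = weights
    # Residual energy, normalized to the node's initial energy.
    energy = node["residual"] / node["initial"]
    # Node degree: neighbors in radio range, normalized (more is better).
    degree = len(neighbors) / max(1, node["max_neighbors"])
    # Distance to base station (closer is better, so invert).
    dist = 1.0 / (1.0 + math.dist(node["pos"], bs_pos))
    # Coverage proxy: mean inverse distance to neighbors.
    cov = (sum(1.0 / (1.0 + math.dist(node["pos"], n["pos"]))
               for n in neighbors) / len(neighbors)) if neighbors else 0.0
    return w_energy * energy + w_degree * degree + w_dist * dist + w_cov * cov
```

An adaptive scheme in this spirit would re-tune the four weights as network conditions change, e.g. shifting weight toward residual energy as the network ages.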

  • Research Article
  • 10.21917/ijct.2025.0522
PERFORMANCE ANALYSIS OF 5G-IOT MMWAVE NETWORK AT 38 GHZ FOR URBAN MICROCELL ENVIRONMENTS WITH ENHANCED SPECTRAL EFFICIENCY
  • Jun 1, 2025
  • ICTACT Journal on Communication Technology
  • Vithyalakshmi N + 3 more

The explosive growth of Internet of Things (IoT) devices necessitates high-throughput and low-latency communication infrastructure. The integration of 5G and millimeter-wave (mmWave) technologies at 38 GHz provides immense bandwidth potential but faces challenges in urban outdoor microcell environments due to signal attenuation, blockage, and interference. Despite its promise, 5G mmWave deployment in dense urban environments suffers from reduced spectral efficiency and connectivity loss due to non-line-of-sight (NLoS) conditions and environmental dynamics. This study investigates the performance of a 5G-IoT network at 38 GHz in a simulated urban microcell scenario using a ray-tracing-based channel model. A hybrid beamforming technique combined with spatial filtering is used to improve spectral efficiency. Simulations are conducted using MATLAB 5G Toolbox with parameters set to reflect realistic urban conditions, including user mobility and building obstructions. The proposed method demonstrates a 15-20% improvement in spectral efficiency compared to traditional beamforming methods, with throughput and SINR performance consistently outperforming existing schemes such as analog-only beamforming, conventional MIMO, and sectorized antenna models. The average spectral efficiency achieved is 9.2 bps/Hz under high user density.
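
As a back-of-envelope check on the reported figure (not the paper's model), the Shannon limit relates spectral efficiency to SINR; 9.2 bps/Hz corresponds to an effective SINR of roughly 27.7 dB:

```python
import math

def spectral_efficiency(sinr_db):
    """Shannon-limit spectral efficiency in bps/Hz for a given SINR (dB)."""
    sinr_linear = 10 ** (sinr_db / 10)
    return math.log2(1 + sinr_linear)

def sinr_for_efficiency(se_bps_hz):
    """SINR (dB) needed to reach a target spectral efficiency."""
    return 10 * math.log10(2 ** se_bps_hz - 1)
```

Beamforming gains enter this picture by raising the effective SINR, which is how the 15-20% efficiency improvement over analog-only beamforming would manifest.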

  • Research Article
  • 10.21917/ijct.2025.0528
DESIGN AND MODELLING OF CIRCULARLY POLARIZED SLOT ANTENNA FOR 5G COMMUNICATION SYSTEMS
  • Jun 1, 2025
  • ICTACT Journal on Communication Technology
  • Ayyada Khalifa Mahmoud Faraj + 1 more

In recent years, receivers and transmitters (mobile phones, laptops, satellites, and similar devices) have shrunk, prompting researchers to reduce antenna size to match. In this paper, two antennas were designed to operate at one of the fifth-generation frequencies, 28 GHz. The antennas were modelled in the CST simulation software and the results were compared against a reference slot antenna described below. Several antenna-miniaturization methods were reviewed; the techniques adopted here include changing the size and shape of the aperture and changing the thickness and type of the substrate. The methods examined also help improve antenna properties such as gain, directivity, and efficiency. In antenna one, a substrate with a higher permittivity than that of the reference antenna was used, and a U-shaped slot was cut in the copper layer; a gain of 5.227, a directivity of 6.020, and an efficiency of 85.82% were achieved. In antenna two, a square-shaped aperture was made, with a substrate of the same permittivity as the reference but a smaller thickness. Antenna two gave good results at a thickness of 0.127 mm and satisfactory results at a thickness of 0.873 mm: at 0.127 mm the gain, directivity, and efficiency were 5.975, 5.997, and 99.63%, respectively, while at 0.837 mm they were 1.77, 3.55, and 48.08%, respectively. The reference slot antenna operates at 28 GHz and measures 7 mm long, 12 mm wide, and 0.203 mm high, with Rogers RT/Duroid 4003 (εr = 3.55) as the substrate.
Antenna one, by comparison, is 8 mm long, 8 mm wide, and 0.127 mm thick; it was fed by microstrip technology, with FR-4 (εr = 4.3) as the substrate. Antenna two has exactly the same dimensions as the first, but the substrate was changed to Rogers RT6035HTC (εr = 3.60). Overall, the antenna size was reduced by more than 45% for both the first and second antennas.
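
The gain, directivity, and efficiency figures quoted above are tied together by the standard relation G = η·D (radiation efficiency times directivity), which holds when gain and directivity are expressed in linear units. The abstract does not state whether its figures are linear or in dB, so the helpers below are a generic sketch rather than a reproduction of the paper's numbers:

```python
import math

def gain_linear(directivity_linear, radiation_efficiency):
    """G = eta * D, with gain and directivity both in linear units."""
    return directivity_linear * radiation_efficiency

def to_dbi(linear_value):
    """Convert a linear gain or directivity value to dBi."""
    return 10 * math.log10(linear_value)
```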

  • Research Article
  • 10.21917/ijct.2025.0525
HYBRID ADAPTIVE FEATURE EXTRACTION FOR IOT-ENABLED ECG SIGNAL ANALYSIS IN SMART HEALTH MONITORING SYSTEMS
  • Jun 1, 2025
  • ICTACT Journal on Communication Technology
  • Karthiga S + 1 more

The Internet of Things (IoT) has transformed healthcare by letting clinicians monitor patients' health in real time, especially patients with cardiac problems. Electrocardiographic (ECG) data are particularly important for detecting cardiovascular issues early. However, ECG signals are sensitive to noise and distortion, which can make fast, accurate analysis difficult. Existing ECG-analysis tools either involve too many processing steps or fall short on accuracy, because their feature-extraction stages are fixed or too shallow. These limits hamper real-time monitoring, slowing diagnosis and making deployment on resource-constrained IoT devices harder. This study shows that a Hybrid Adaptive Feature Extraction (HAFE) method within an IoT architecture is a promising way to handle ECG inputs. HAFE combines statistical analysis for feature reduction, adaptive signal decomposition using empirical mode decomposition (EMD), and time-frequency localization with the discrete wavelet transform (DWT). A convolutional neural network (CNN) configured for edge deployment classifies the extracted features. Running on a Raspberry Pi 3 with cloud backup, the system performs analytics in real time, achieving 98.6% accuracy, 97.9% sensitivity, and a prediction time of 1.7 seconds.
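
The DWT stage of such a pipeline can be sketched with a single-level Haar transform, the simplest wavelet. This is a minimal stand-in: the paper's actual wavelet family and decomposition depth are not specified here.

```python
def haar_dwt(signal):
    """One-level Haar discrete wavelet transform.

    Returns (approximation, detail) coefficient lists. The approximation
    captures the low-frequency trend; the detail captures rapid changes,
    which for ECG includes QRS-complex-like transients and noise.
    """
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s
              for i in range(0, len(signal) - 1, 2)]
    return approx, detail
```

Deeper decompositions repeat this split on the approximation coefficients, giving the multi-resolution time-frequency localization the abstract refers to.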

  • Research Article
  • 10.21917/ijct.2025.0521
LATENCY-AWARE WIRELESS TRANSMISSION OF CLOUD DATA IN CLOUD AVOIDANCE NETWORKS USING A DUAL-HEAD ENSEMBLE TRANSFORMER
  • Jun 1, 2025
  • ICTACT Journal on Communication Technology
  • Jasmine J + 1 more

With the growing reliance on cloud-based services, latency in wireless transmission of cloud data remains a critical challenge in environments where cloud connectivity is intermittent or restricted, such as cloud avoidance networks (CANs). These networks operate under the paradigm of minimizing dependence on centralized cloud infrastructure, leveraging edge or fog computing for timely data processing and delivery. Traditional transmission frameworks often fail to effectively manage latency in dynamic and decentralized networks. In CANs, unpredictable node mobility, variable signal strength, and heterogeneous device capabilities exacerbate delays. There is a lack of robust machine learning-based solutions that adaptively optimize routing and data transmission strategies in real time to reduce latency. This study introduces a Dual-Head Ensemble Transformer (DHET) model tailored for latency-aware wireless transmission in CANs. The first head of the transformer predicts short-term transmission latency across multiple hops based on real-time network conditions, while the second head assesses the reliability of the path by evaluating historical trends and signal consistency. Ensemble learning is used to fuse predictions from diverse transformer sub-models trained on varied wireless scenarios, ensuring generalized performance. A latency-prioritized routing algorithm then utilizes these predictions to dynamically select optimal paths for cloud data transmission. Simulation results demonstrate that the DHET-based approach achieves an average 18–25% reduction in end-to-end latency compared to baseline protocols such as AODV and DSR. The dual-head design allows for a balance between latency minimization and transmission stability, making it well-suited for CAN deployments in smart cities, autonomous fleets, and remote monitoring systems.
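
The latency-prioritized routing step can be illustrated as a cost rule that fuses the two heads' outputs: a latency prediction and a reliability estimate per candidate path. The scoring rule and `alpha` blend below are assumptions for illustration, not the DHET routing algorithm itself:

```python
def select_path(paths, alpha=0.7):
    """Pick the path minimizing a latency/reliability trade-off.

    Each path dict carries 'latency_ms' (as predicted by the latency
    head) and 'reliability' in [0, 1] (as assessed by the reliability
    head). Hypothetical keys and weighting, for illustration only.
    """
    def cost(p):
        # Unreliable paths have their latency inflated, so a fast but
        # flaky path can lose to a slightly slower, stable one.
        penalty = p["latency_ms"] / max(p["reliability"], 1e-6)
        return alpha * p["latency_ms"] + (1 - alpha) * penalty
    return min(paths, key=cost)
```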

  • Research Article
  • 10.21917/ijct.2025.0517
ENHANCING EFFICIENCY IN WIRELESS SENSOR NETWORKS THROUGH CLUSTERING AND ROUTING OPTIMIZATION
  • Mar 1, 2025
  • ICTACT Journal on Communication Technology
  • Jaison L + 1 more

This research combines Wireless Sensor Networks (WSNs) with deep learning so that farmers receive early notifications about their produce. It presents a structured combined model that addresses energy consumption and communication reliability in WSNs. The components include a network model, an energy-consumption model, cluster-head selection using the k-medoids algorithm, and route optimization using the adaptive sailfish (AS) algorithm. To improve routing performance, the framework incorporates both a Deep Feedforward Neural Network (DFFNN) and the Buffalo Algorithm (BA). The WSN comprises a Base Station (BS), low-performance (LP) sensors, and high-performance (HP) sensors, with the HP sensors serving as Cluster Heads. Energy consumption is assumed proportional to data-transmission distance and path loss; k-medoids clustering optimizes communication efficiency and minimizes power utilization. Simulation results in MATLAB R2023b show that the proposed model improves energy efficiency, residual energy, and packet delivery ratio.
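
k-medoids suits cluster-head selection because, unlike k-means, each cluster center is an actual node. A compact PAM-style sketch (deterministic seeding and a generic distance callback are simplifications for illustration):

```python
def k_medoids(points, k, dist, iters=20):
    """Alternate assignment and medoid update until stable.

    Each medoid is a real data point, which is what makes the method
    usable for picking cluster-head nodes in a WSN.
    """
    medoids = points[:k]  # deterministic seed, for the sketch only
    for _ in range(iters):
        # Assign every point to its nearest medoid.
        clusters = {i: [] for i in range(k)}
        for p in points:
            clusters[min(range(k), key=lambda j: dist(p, medoids[j]))].append(p)
        # Re-pick each medoid as the member minimizing total in-cluster distance.
        new = []
        for i in range(k):
            members = clusters[i] or [medoids[i]]
            new.append(min(members, key=lambda c: sum(dist(c, m) for m in members)))
        if new == medoids:
            break
        medoids = new
    return medoids
```

In the WSN setting, `dist` would combine geometric distance with energy terms so that well-placed, high-energy HP sensors end up as the medoids.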

  • Research Article
  • 10.21917/ijct.2025.0513
ENHANCED INTRUSION DETECTION AND PREVENTION IN WIRELESS SENSOR NETWORKS USING HYBRID DEEP LEARNING
  • Mar 1, 2025
  • ICTACT Journal on Communication Technology
  • Balajishanmugam V + 2 more

Wireless Sensor Networks (WSNs) are highly vulnerable to security threats due to their decentralized nature, constrained resources, and open communication channels. Traditional intrusion detection and prevention systems (IDPS) often struggle to provide real-time protection while maintaining network efficiency. The increasing complexity of cyberattacks necessitates advanced techniques for threat mitigation. A major challenge in WSN security is the detection of sophisticated intrusions with high accuracy while minimizing false positives and computational overhead. Conventional rule-based and anomaly-based detection methods exhibit limitations in identifying emerging threats due to their reliance on predefined signatures and static models. Addressing these gaps, a hybrid deep learning-based IDPS is proposed, integrating Convolutional Neural Networks (CNNs) for feature extraction and Long Short-Term Memory (LSTM) networks for sequential pattern learning. The hybrid model is trained on a benchmark WSN intrusion dataset and optimized using the Adam optimizer to enhance detection performance. Experimental evaluation shows that the proposed model achieves an intrusion detection accuracy of 98.6%, significantly outperforming traditional machine learning approaches such as Support Vector Machines (SVM) (91.2%) and Random Forest (94.8%). The system also reduces false positive rates to 1.8%, ensuring reliable threat identification. Moreover, real-time implementation exhibits an average detection latency of 0.35 seconds, making it suitable for resource-constrained WSN environments. These results indicate that the hybrid CNN-LSTM model effectively enhances the security of WSNs, providing a robust defense against evolving cyber threats.
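
The accuracy and false-positive figures quoted above follow standard definitions from the confusion matrix; a small helper makes the relationships explicit (the example counts are illustrative, not the paper's data):

```python
def ids_metrics(tp, tn, fp, fn):
    """Accuracy, detection rate (recall), and false-positive rate
    from confusion-matrix counts of an intrusion detector."""
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    recall = tp / (tp + fn)          # fraction of attacks caught
    fpr = fp / (fp + tn)             # fraction of benign traffic flagged
    return accuracy, recall, fpr
```

Note that on traffic where attacks are rare, accuracy alone can look high even for a weak detector, which is why the abstract reports the false-positive rate separately.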

  • Research Article
  • 10.21917/ijct.2025.0509
QUANTUM CRYPTOGRAPHY WITH ESPRESSO CIPHERS AND GRAIN FOR ENHANCED SECURITY IN OPTICAL COMMUNICATION NETWORKS
  • Mar 1, 2025
  • ICTACT Journal on Communication Technology
  • Mohamed Musthafa M + 2 more

Securing optical communication networks against evolving cyber threats necessitates advanced cryptographic techniques. Quantum cryptography offers an unbreakable security framework, leveraging the principles of quantum mechanics. However, integrating quantum cryptography with lightweight and efficient stream ciphers remains a challenge in high-speed optical networks. Traditional encryption methods, such as AES and RSA, struggle to meet the real-time demands of optical communication due to computational overhead and vulnerability to quantum attacks. This study introduces a hybrid security framework incorporating Quantum Key Distribution (QKD) with Espresso ciphers and the Grain stream cipher to enhance security and efficiency in optical networks. The Espresso cipher, known for its ultra-lightweight design and energy efficiency, ensures minimal computational complexity, while the Grain stream cipher provides robust resistance against differential and linear cryptanalysis. The integration with QKD ensures unconditional security through quantum properties such as no-cloning and Heisenberg’s uncertainty principle. Performance evaluation was conducted using a 100 Gbps optical network, demonstrating a significant reduction in encryption latency by 37% compared to conventional AES-based encryption. The proposed framework also achieved an 89.6% improvement in key generation efficiency and reduced computational overhead by 41.3%. Furthermore, resistance against brute-force and side-channel attacks was significantly enhanced, providing a secure and efficient cryptographic solution for high-speed optical networks.
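
Both Grain and Espresso are built around shift registers that expand a short key into a long keystream. The toy Fibonacci LFSR below shows only that register core; the real ciphers combine linear and nonlinear registers with filter functions and are far stronger. Tap positions and state encoding here are illustrative choices:

```python
def lfsr_keystream(state, taps, nbits):
    """Generate nbits from a Fibonacci LFSR.

    'state' is a list of bits (index 0 = newest); each step outputs the
    oldest bit and shifts in the XOR of the tapped positions.
    """
    state = list(state)
    out = []
    for _ in range(nbits):
        out.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t]
        state = [fb] + state[:-1]
    return out
```

With a primitive feedback polynomial, a 4-bit register cycles through all 15 nonzero states before repeating; production ciphers use registers of 80+ bits so the period is astronomically long.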

  • Research Article
  • 10.21917/ijct.2025.0512
OPTIMIZING SATELLITE COMMUNICATIONS USING ADVANCED ALGORITHMS FOR IMPROVED SIGNAL PROCESSING AND DATA TRANSMISSION EFFICIENCY
  • Mar 1, 2025
  • ICTACT Journal on Communication Technology
  • Gugan I + 2 more

Efficient satellite communication is critical for ensuring seamless data transmission across various applications, including remote sensing, defense, and global connectivity. Traditional signal processing techniques face challenges such as signal degradation, interference, and bandwidth limitations, reducing overall transmission efficiency. Advanced optimization algorithms can enhance signal integrity, mitigate noise, and improve data throughput. This study proposes an adaptive hybrid optimization framework integrating Deep Learning-based Channel Estimation (DL-CE) with an Enhanced Error Correction Model (EECM). The DL-CE employs a Convolutional Neural Network (CNN) combined with a Recurrent Neural Network (RNN) to predict channel variations dynamically, reducing transmission errors by 32.5%. Meanwhile, the EECM incorporates Low-Density Parity-Check (LDPC) codes optimized using a Genetic Algorithm (GA) to enhance error correction efficiency, leading to a 27.8% reduction in bit error rate (BER) compared to conventional LDPC codes. Experimental evaluations on real-time satellite transmission datasets demonstrate a 21.3% improvement in spectral efficiency and a 36.4% enhancement in data throughput. Comparative analysis with traditional Orthogonal Frequency-Division Multiplexing (OFDM) and Turbo coding-based error correction confirms that the proposed method achieves a lower BER of 1.02 × 10⁻³, higher peak signal-to-noise ratio (PSNR) of 42.8 dB, and increased data transmission speed of 1.8 Gbps.
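
An LDPC decoder accepts a word only when it satisfies every parity check, i.e. when the syndrome H·cᵀ over GF(2) is all zeros. The helper below shows that check using the small Hamming(7,4) parity-check matrix as a dense stand-in for a real sparse LDPC matrix (which would have thousands of columns):

```python
def syndrome(H, codeword):
    """Syndrome H * c^T over GF(2).

    An all-zero syndrome means the codeword satisfies every parity
    check; any nonzero entry flags a violated check for the decoder.
    """
    return [sum(h * c for h, c in zip(row, codeword)) % 2 for row in H]
```

A GA-based optimization like the one described would search over the structure of H (degree distributions, placement of ones) while using decoding performance as the fitness signal.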

  • Research Article
  • 10.21917/ijct.2025.0510
MULTI-KEY HOMOMORPHIC ENCRYPTION FOR PRIVACY-PRESERVING SECURITY IN 5G AND BEYOND WIRELESS NETWORKS
  • Mar 1, 2025
  • ICTACT Journal on Communication Technology
  • Devakirubai N

The rapid expansion of 5G and beyond wireless networks has introduced new security challenges, particularly in preserving data privacy while enabling secure computations on encrypted data. Traditional encryption schemes fail to provide efficient computation without decryption, making them unsuitable for modern wireless environments with stringent privacy requirements. Multi-Key Homomorphic Encryption (MKHE) emerges as a viable solution, allowing multiple users to encrypt data with distinct keys while still enabling joint computation on the ciphertexts. This study proposes an optimized MKHE framework tailored for 5G and beyond wireless networks, addressing computational overhead and communication latency. The proposed method incorporates an adaptive key management mechanism and lightweight ciphertext aggregation to enhance efficiency. Experimental results demonstrate a 23.7% reduction in encryption time, a 19.4% improvement in computational efficiency, and a 15.8% decrease in communication overhead compared to conventional MKHE implementations. Additionally, the scheme maintains a high level of security, resisting key-recovery and chosen-ciphertext attacks.
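
The homomorphic property that MKHE generalizes to multiple keys can be seen in the single-key case: with Paillier encryption, multiplying two ciphertexts yields an encryption of the sum of the plaintexts. The toy implementation below uses tiny primes and a simplified λ = φ(n); it illustrates the mechanism only and is nothing like a production (let alone multi-key) scheme:

```python
def paillier_keygen(p=61, q=53):
    """Toy Paillier key pair from two small primes (illustration only;
    real deployments use primes of 1024+ bits)."""
    n = p * q
    lam = (p - 1) * (q - 1)          # simplified lambda = phi(n)
    mu = pow(lam, -1, n)             # modular inverse (Python 3.8+)
    return (n, n + 1), (lam, mu)     # public (n, g), secret (lam, mu)

def pai_encrypt(pk, m, r):
    """Encrypt m with randomizer r (r must be coprime to n)."""
    n, g = pk
    n2 = n * n
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def pai_decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    l = (pow(c, lam, n * n) - 1) // n
    return (l * mu) % n
```

In a multi-key setting, each party encrypts under its own key and the scheme provides a way to evaluate on the joint ciphertexts; the ciphertext-aggregation optimization described in the abstract targets exactly that step.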