- Research Article
- 10.21917/ijme.2025.0337
- Apr 1, 2025
- ICTACT Journal on Microelectronics
- Puvaneswari G
Assuring the efficiency and reliability of electronic systems relies heavily on detecting parametric faults in analog circuits, which is especially important in high-precision sectors. A typical problem with conventional fault detection algorithms on high-dimensional feature sets is the risk of overfitting, which leads to higher computing costs and reduced accuracy. In response to these difficulties, this research presents the High-Dimensional Parametric Optimized Approach (H-DPOA), which combines a Support Vector Machine model with feature selection based on a feature importance measure. By lowering the dataset’s dimensionality and retaining the features most significant for fault classification, H-DPOA improves the model’s performance and interpretability. The selected features are then used to train an improved support vector machine model, which aims to improve the detection accuracy of minor parametric faults while reducing the number of false positives. The proposed method is validated through extensive simulation on benchmark analog circuit datasets. The results show that H-DPOA greatly improves fault identification rates while requiring less computing time than conventional approaches. This approach finds key application in fault diagnosis in areas where accurate fault identification is vital for preserving system integrity, such as telecommunications, industrial automation, and automotive electronics. The findings suggest that H-DPOA is a viable and scalable approach to analog circuit diagnosis, which could inform further research into automated fault detection and predictive maintenance for electronic systems.
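The pipeline described above (importance-ranked feature selection feeding an SVM) can be sketched as follows. This is a minimal illustration, not the paper's actual H-DPOA implementation: the synthetic dataset, the random-forest importance estimator, and the top-10 cutoff are all assumptions for demonstration.

```python
# Hypothetical sketch: rank features by importance, keep the strongest
# ones, then train an SVM on the reduced set. Dataset and thresholds
# are illustrative stand-ins for the paper's analog-circuit features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_classification(n_samples=400, n_features=60, n_informative=8,
                           random_state=0)  # stand-in for circuit measurements
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Estimate feature importances and keep only the top-10 features.
imp = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
keep = np.argsort(imp.feature_importances_)[-10:]

# Train the classifier on the reduced, lower-dimensional feature set.
clf = SVC(kernel="rbf").fit(X_tr[:, keep], y_tr)
print("accuracy on reduced feature set:", clf.score(X_te[:, keep], y_te))
```

Dropping uninformative dimensions before the SVM is what reduces both the overfitting risk and the training cost that the abstract highlights.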
- Research Article
- 10.21917/ijme.2025.0339
- Apr 1, 2025
- ICTACT Journal on Microelectronics
- Chittaranjan Mohapatra + 1 more
Given a set of pins and obstacles in a Very-Large-Scale Integration (VLSI) chip layout, the goal is to develop an optimal routing path with minimal wire length. This work constructs an Obstacle-Avoiding Rectilinear Steiner Minimal Tree (OARSMT) using a deep Q-learning approach, a type of reinforcement learning. It employs a union-find data structure, a parallel Deep Q-Network (DQN), and the Adam optimizer to train an agent to determine the optimal connections between pins. The DQN approximates Q-values, which reflect the likelihood of selecting an edge. Connections with higher Q-values are those that are obstacle-free, have lower weights, and share common paths. The DQN then leverages Kruskal’s algorithm to construct a rectilinear Steiner tree under these connection constraints. The approach uses multi-threading during training to handle large datasets. The proposed model returns wire lengths that are 5% shorter on obstacle-based benchmark data. The model also achieves 9.8% less training time on average due to the parallelization of the DQN, and realizes an 85.3% higher reward gain than other approaches. The developed method achieved its objective and can attain superior performance not only in VLSI physical design but also in other obstacle-based routing problems.
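The union-find plus Kruskal stage can be sketched as below. The edge weights here are invented stand-ins for the DQN's Q-value-derived scores (lower weight standing in for a higher, preferred Q-value); the actual paper couples this to a trained network.

```python
# Minimal Kruskal's algorithm over scored edges with a union-find
# structure; edge weights are illustrative stand-ins for DQN outputs
# (lower weight ~ higher Q-value: obstacle-free, shorter connection).
parent = {}

def find(x):
    # Find with path compression.
    while parent.setdefault(x, x) != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

def union(a, b):
    # Merge two components; return False if the edge would form a cycle.
    ra, rb = find(a), find(b)
    if ra == rb:
        return False
    parent[ra] = rb
    return True

# (u, v, weight) candidate connections between pins.
edges = [("A", "B", 2), ("B", "C", 3), ("A", "C", 4), ("C", "D", 1)]
tree = [e for e in sorted(edges, key=lambda e: e[2]) if union(e[0], e[1])]
print(tree)  # edges accepted in ascending weight order, cycles rejected
```

Kruskal's greedy edge ordering is what lets the learned Q-values steer the tree toward obstacle-free, shared-path connections while the union-find check keeps the result a tree.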
- Research Article
- 10.21917/ijme.2025.0338
- Apr 1, 2025
- ICTACT Journal on Microelectronics
- Mahendrakan K + 3 more
The power consumption (PC) of VLSI circuits is a crucial concern that requires careful consideration, especially for applications that demand the lowest possible power. Power dissipation in flip-flops (FFs) and clock distribution networks must be minimized because contemporary portable digital circuits have a very constrained power budget. Additionally, because of the limited timing budgets at high-frequency operation, flip-flop latency must be minimized. Thus, it is crucial to design with low latency and low power consumption in mind in current VLSI technology. A processor’s clocking mechanism is mainly made up of clock supply networks and flip-flops (FFs). Because only one clock edge is used for data processing, the traditional single-phase-clock FF captures data on a single clock edge at a time, causing a redundant power overhead. Dual-edge-triggered (DET) FFs use both clock edges to process data, allowing them to cut the clock frequency in half while preserving throughput. To address these issues, an efficient dual-edge-triggered (DET) FF that eliminates redundant clock transitions (RTs) entirely and improves performance through sense amplification is proposed. The proposed DET FF is the first of its kind to totally eliminate redundant clock and internal switching. The FF design employs a zero-redundant-transition (RT) single-transistor-clocked (STC) structure. A sense-amplifier-based flip-flop (SAFF) that can operate dependably over a broad voltage and temperature range is included in this work. In addition to a differential sensing stage, sense-amplifier-based flip-flops have a slave latching stage. The purpose of the sensing stage is to capture data at the rising edge (RE) and falling edge (FE) of the clock, while the sense amplifier’s output is sustained for the duration of the clock’s active half cycle.
Consequently, the sizing restrictions associated with traditional pulse-triggered flip-flops are eliminated. The SAFF (Sense-Amplifier FF) offers various advantages, such as reduced clock load, shorter hold intervals, and a negative or almost-zero setup time. SAFFs outperform pulse-triggered and master-slave flip-flops in low-voltage operation. The proposed hybrid design is implemented in 22 nm CMOS technology using the MICROWIND tool. The power-delay product (PDP), power, and delay of the current DET designs and the proposed design are compared.
- Research Article
- 10.21917/ijme.2025.0342
- Apr 1, 2025
- ICTACT Journal on Microelectronics
- Joby Titus
Modern CMOS processors face a significant challenge in power consumption, primarily due to switching and leakage power. With rising demands for energy-efficient systems, especially in mobile and IoT devices, managing dynamic and static power has become essential. Conventional clock gating techniques lack adaptability to workload variability, leading to inefficient power savings. Fixed gating schemes either over-constrain performance or underperform in power savings. This work proposes a Low-Power Reconfigurable Match Table (RMT)-based Clock Controller that dynamically adjusts clock gating granularity based on real-time workload profiling. The system leverages a match table reconfiguration mechanism, enabling fine-grained control of clock signals to idle submodules. Implemented in a 45nm CMOS processor simulation environment, this approach combines workload prediction and table-driven reconfiguration for minimal leakage and switching overhead. Simulation results show a 38.6% reduction in switching power and a 29.3% reduction in leakage power compared to traditional fixed clock gating, with only 1.2% performance overhead. Power savings remain consistent across varied computational loads.
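The match-table-driven gating decision can be illustrated in software. The table entries, submodule names, and activity thresholds below are invented for demonstration and are not the paper's RMT design; the point is how a workload measurement indexes a reconfigurable table to produce per-submodule clock enables.

```python
# Illustrative match-table clock gating: a measured activity level is
# matched against table rows, each mapping a workload range to the set
# of submodules whose clocks are gated. All values are hypothetical.
match_table = [
    # (min_activity, max_activity, gated_submodules)
    (0.0, 0.2, {"alu", "fpu", "decoder"}),  # near-idle: gate aggressively
    (0.2, 0.6, {"fpu"}),                    # moderate load: gate FPU only
    (0.6, 1.0, set()),                      # busy: no gating
]

def clock_enables(activity, submodules=("alu", "fpu", "decoder")):
    """Return per-submodule clock-enable flags for a measured activity level."""
    for lo, hi, gated in match_table:
        if lo <= activity < hi or (hi == 1.0 and activity == 1.0):
            return {m: m not in gated for m in submodules}
    return {m: True for m in submodules}   # unmatched: fail safe, clocks on

print(clock_enables(0.1))  # {'alu': False, 'fpu': False, 'decoder': False}
print(clock_enables(0.9))  # {'alu': True, 'fpu': True, 'decoder': True}
```

Because the table itself is data, a workload profiler can rewrite its rows at runtime, which is the reconfigurability the abstract describes.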
- Research Article
- 10.21917/ijme.2025.0348
- Apr 1, 2025
- ICTACT Journal on Microelectronics
- Anooja V.s + 3 more
The proliferation of the Internet of Things (IoT) has revolutionized the way devices interact, share data, and respond to real-time stimuli. Embedded IoT systems offer low-power, high-efficiency solutions for a variety of domains such as smart homes, healthcare, agriculture, and industrial automation. Despite advancements, achieving seamless connectivity and real-time intelligence in embedded IoT remains challenging due to limited computational power, energy constraints, and fragmented communication protocols. These limitations hinder performance, scalability, and responsiveness in real-world deployments. This research proposes a hybrid edge-cloud framework utilizing lightweight embedded devices integrated with optimized firmware for real-time processing, adaptive sensor data management, and low-latency communication. The method leverages MQTT protocol for lightweight messaging and integrates TinyML models on microcontrollers for localized intelligence, reducing reliance on centralized cloud services. The proposed system was tested using simulations in MATLAB and real-world deployments using Raspberry Pi 4 and ESP32 devices. Compared with existing models (CoAP-based IoT, MQTT without ML, Edge-Only, and Cloud-Only), the hybrid framework improved latency by 35%, energy efficiency by 27%, and inference speed by 42%, with minimal compromise on accuracy. The results validate the model’s scalability, responsiveness, and real-time intelligence for embedded IoT environments.
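The edge-first policy described above (run the TinyML model on the microcontroller, fall back to the cloud only when needed) can be sketched as follows. The confidence threshold and the stub inference function are assumptions for illustration; the real system runs a trained TinyML model and publishes the fallback cases over MQTT.

```python
# Hedged sketch of an edge-first inference policy: classify locally on
# the device and defer to the cloud only when local confidence is low.
# The threshold and stub model below are illustrative assumptions.
CONFIDENCE_THRESHOLD = 0.8

def tiny_ml_infer(sample):
    # Stand-in for an on-device TinyML model: returns (label, confidence).
    label = "anomaly" if sample > 0.7 else "normal"
    confidence = 0.9 if sample < 0.5 or sample > 0.8 else 0.6
    return label, confidence

def classify(sample):
    label, conf = tiny_ml_infer(sample)
    if conf >= CONFIDENCE_THRESHOLD:
        return label, "edge"   # handled locally: low latency, no radio use
    return label, "cloud"      # would be published via MQTT for cloud inference

print(classify(0.2))  # confident local result
print(classify(0.6))  # low confidence: defer to cloud
```

Keeping confident decisions on-device is what drives the latency and energy gains the abstract reports, since the radio and cloud round-trip are only paid for ambiguous samples.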
- Research Article
- 10.21917/ijme.2025.0346
- Apr 1, 2025
- ICTACT Journal on Microelectronics
- Karthick M + 1 more
The increasing demand for high-performance wireless communication systems necessitates advanced antenna technologies that can ensure maximum signal strength without breaching regulatory standards. Conventional antenna systems often suffer from limited directionality and signal loss due to environmental interference and suboptimal configurations, affecting signal quality and system reliability. This study proposes a novel multi-dimensional antenna system designed using a hybrid optimization algorithm integrating Particle Swarm Optimization (PSO) and Genetic Algorithm (GA). The antenna configuration dynamically adjusts its orientation and radiation pattern to enhance signal reception within allowable electromagnetic radiation limits. Simulations using CST Microwave Studio demonstrate a 27.3% increase in signal strength, 18.5% improvement in signal-to-noise ratio (SNR), and a 15.7% reduction in bit error rate (BER) compared to traditional planar and phased-array antennas. The gain achieved is 13.4 dBi with a beamwidth reduction of 22.6%.
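The swarm-update step that such a hybrid PSO-GA optimizer builds on can be sketched as below. The objective here is a toy sphere function standing in for the antenna cost function, and the coefficients are textbook defaults, not the paper's tuned hybrid; the GA crossover/mutation stage is omitted.

```python
# Toy particle swarm optimization on a stand-in objective; inertia and
# cognitive/social weights are illustrative, not the paper's settings.
import random

random.seed(1)
dim, n, iters = 2, 10, 100
w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive, social weights

def cost(x):
    # Stand-in objective (sphere function); the paper minimizes an
    # antenna cost built from simulated gain, SNR, and BER instead.
    return sum(v * v for v in x)

pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
vel = [[0.0] * dim for _ in range(n)]
pbest = [p[:] for p in pos]
gbest = min(pbest, key=cost)[:]

for _ in range(iters):
    for i in range(n):
        for d in range(dim):
            vel[i][d] = (w * vel[i][d]
                         + c1 * random.random() * (pbest[i][d] - pos[i][d])
                         + c2 * random.random() * (gbest[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        if cost(pos[i]) < cost(pbest[i]):
            pbest[i] = pos[i][:]
            if cost(pbest[i]) < cost(gbest):
                gbest = pbest[i][:]

print("best cost found:", cost(gbest))
```

In the hybrid scheme, GA operators are typically applied to the swarm between PSO iterations to maintain diversity and escape local optima.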
- Research Article
- 10.21917/ijme.2025.0341
- Apr 1, 2025
- ICTACT Journal on Microelectronics
- Kayalvizhi K + 1 more
With the increasing demand for high-speed and reliable communication in 5G networks, efficient antenna design for the sub-6 GHz band has become a crucial research area. Slotted patch antennas offer significant advantages such as compact size and wide bandwidth, making them suitable for 5G-enabled devices. However, achieving high gain, broad bandwidth, and stable radiation characteristics within the compact form factor remains challenging, especially in the 3.5–4.5 GHz range used for sub-6 GHz 5G. This work presents a novel slotted hexagonal patch antenna structure designed to operate efficiently within the 3.3–4.2 GHz frequency range. The design introduces a unique hexagonal geometry with symmetrical slotting and ground plane optimization to enhance return loss, bandwidth, and gain. Simulation using HFSS yielded a peak gain of 5.3 dBi, a return loss of -32 dB at 3.8 GHz, and a bandwidth of 850 MHz. The design also achieved a radiation efficiency of 92.4%, and a VSWR of 1.1.
- Research Article
- 10.21917/ijme.2025.0347
- Apr 1, 2025
- ICTACT Journal on Microelectronics
- Amudha R + 3 more
Precision agriculture increasingly depends on real-time monitoring and automation to enhance crop yield and reduce resource waste. Traditional irrigation systems either overuse or underutilize water due to lack of real-time sensing and decision-making, leading to poor water resource management. This research proposes a real-time embedded system using a NodeMCU ESP8266 microcontroller integrated with DHT11 (humidity and temperature), soil moisture sensors, and a solenoid valve to implement smart irrigation. The algorithm evaluates sensor data to trigger irrigation only when soil moisture falls below a defined threshold. Simulations in the Proteus environment and real-time tests demonstrated a 38.5% water savings and a 24.2% increase in yield efficiency compared to traditional systems. The proposed system also outperformed existing fuzzy logic and timer-based irrigation systems in energy efficiency and response time.
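The threshold rule at the heart of the controller can be sketched as below. The 40% moisture threshold, the humidity cutoff, and the function shape are illustrative assumptions, not the paper's firmware; on the NodeMCU the inputs would come from the DHT11 and soil moisture sensor and the output would drive the solenoid valve.

```python
# Minimal sketch of the threshold-based irrigation decision; the
# thresholds are hypothetical, not the deployed system's calibration.
MOISTURE_THRESHOLD = 40.0  # percent; below this, irrigation is triggered

def irrigation_decision(soil_moisture, temperature, humidity):
    """Open the solenoid valve only when the soil is dry; skip watering
    when ambient humidity is already very high to avoid waste.
    (temperature is read from the DHT11 but unused in this simple rule)"""
    if soil_moisture < MOISTURE_THRESHOLD and humidity < 90.0:
        return "VALVE_OPEN"
    return "VALVE_CLOSED"

print(irrigation_decision(soil_moisture=25.0, temperature=31.0, humidity=55.0))
print(irrigation_decision(soil_moisture=62.0, temperature=31.0, humidity=55.0))
```

Triggering only on a measured moisture deficit, rather than on a fixed timer, is what produces the water savings reported over timer-based systems.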
- Research Article
- 10.21917/ijme.2025.0326
- Jan 1, 2025
- ICTACT Journal on Microelectronics
- Jagadeeswari N + 2 more
The rapid evolution of electronic embedded systems (EES) has brought significant challenges in optimizing their performance in real-time environments. These systems are often deployed in critical applications, such as automotive, medical, and IoT devices, where efficient resource management and adaptive decision-making are essential for optimal performance. Traditional optimization methods struggle to meet the dynamic and complex demands of modern embedded systems. As the complexity of electronic embedded systems increases, ensuring real-time performance while minimizing energy consumption, latency, and operational costs becomes more difficult. Static configurations or conventional algorithms cannot adapt quickly to changing conditions, leading to suboptimal performance. This problem is further exacerbated by the need for fast decision-making within limited computational resources. This study proposes using Deep Reinforcement Learning (DRL) algorithms to optimize the real-time performance of electronic embedded systems. DRL leverages an agent-based approach to autonomously learn optimal strategies through trial and error in dynamic environments. The proposed method involves training a DRL model to intelligently manage system resources, adjust parameters, and enhance decision-making in real-time based on feedback from the system’s environment. Key DRL techniques, such as Deep Q-Networks (DQN) and Proximal Policy Optimization (PPO), are utilized to train agents in various system scenarios. The results show that DRL-based optimization significantly improves system efficiency, leading to reduced latency, enhanced throughput, and optimized power consumption without compromising the system’s responsiveness. The proposed method outperforms traditional optimization approaches, particularly in highly dynamic and resource-constrained environments, by enabling continuous adaptation to changing operational conditions.
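The trial-and-error Q-learning update that DQN generalizes with a neural network can be shown in tabular form. The states, actions, and reward function below are invented for demonstration (e.g. load levels mapped to power modes) and are not the paper's environment.

```python
# Toy tabular Q-learning illustrating the update rule that DQN scales up
# with a neural network; states, actions, and rewards are hypothetical.
import random

random.seed(0)
states, actions = range(3), range(2)   # e.g. load levels x power modes
Q = {(s, a): 0.0 for s in states for a in actions}
alpha, gamma = 0.5, 0.9                # learning rate, discount factor

def reward(s, a):
    # Hypothetical: low-power mode (a=1) pays off except at high load (s=2).
    return 1.0 if (a == 1 and s < 2) or (a == 0 and s == 2) else -1.0

for _ in range(500):
    s = random.choice(states)
    a = random.choice(actions)         # pure exploration for simplicity
    s_next = random.choice(states)
    best_next = max(Q[(s_next, b)] for b in actions)
    # Temporal-difference update toward reward + discounted future value.
    Q[(s, a)] += alpha * (reward(s, a) + gamma * best_next - Q[(s, a)])

# The greedy policy should pick low-power mode at low load only.
policy = {s: max(actions, key=lambda a: Q[(s, a)]) for s in states}
print(policy)
```

A DQN replaces the Q table with a network so the same update generalizes across the large, continuous state spaces that real embedded workloads produce; PPO instead optimizes the policy directly.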
- Research Article
- 10.21917/ijme.2025.0334
- Jan 1, 2025
- ICTACT Journal on Microelectronics
- Rajalakshmi J + 3 more
Microstrip circuits form the backbone of modern high-frequency communication systems, offering compact and efficient solutions for signal processing and transmission. However, the design of these circuits is challenging due to the intricate interplay of electromagnetic (EM) parameters, material properties, and circuit dimensions. Traditional EM simulation methods, while accurate, are computationally intensive and time-consuming, limiting their applicability for rapid prototyping and optimization. To address these challenges, this study integrates deep learning techniques with electromagnetic simulations to enhance microstrip circuit design efficiency. A Recurrent Neural Network (RNN)-based framework is proposed to predict the frequency-dependent behavior of microstrip circuits, leveraging temporal data from iterative EM simulations. The RNN model is trained on a diverse dataset of simulated circuit configurations, capturing the relationships between physical parameters, design constraints, and performance metrics. The proposed approach significantly reduces computational overhead by approximating the results of full-wave EM simulations while maintaining high accuracy. Validation against benchmark EM simulation tools shows that the RNN model achieves over 95% prediction accuracy with a 70% reduction in simulation time. Additionally, this framework enables real-time optimization of circuit designs, accelerating the iterative design process without compromising performance.
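A minimal Elman-style recurrent cell conveys the kind of sequence model involved. The dimensions, random weights, and the interpretation of inputs as a frequency sweep are illustrative assumptions; the paper's trained RNN maps real circuit parameters to simulated performance metrics.

```python
# Minimal Elman RNN forward pass in NumPy, sketching a surrogate that
# could map frequency-sweep inputs to a predicted response; weights and
# dimensions are arbitrary demonstrations, not the trained model.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 8, 1   # e.g. (width, gap, freq) -> |S21|
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))

def rnn_predict(sequence):
    """Run one forward pass over a frequency sweep, one output per step."""
    h = np.zeros(n_hidden)
    outputs = []
    for x in sequence:
        h = np.tanh(W_xh @ x + W_hh @ h)   # recurrent state update
        outputs.append(W_hy @ h)
    return np.array(outputs)

sweep = rng.normal(size=(5, n_in))   # 5 frequency points of input features
print(rnn_predict(sweep).shape)      # one prediction per frequency point
```

Once trained on full-wave simulation data, a forward pass like this costs microseconds, which is the source of the reported 70% reduction in simulation time during design iteration.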