Articles published on Built-in self-test
1350 Search results
Sort by Recency
- Research Article
- 10.3390/jimaging12010031
- Jan 7, 2026
- Journal of Imaging
- Haiying Liu + 5 more
To address the unreliable autofocus problem of drone-mounted visible-light aerial cameras in low-contrast maritime environments, this paper proposes an autofocus system that combines deep-learning-based coarse focusing with traditional search-based fine adjustment. The system uses a built-in high-contrast resolution test chart as the signal source. Images captured by the imaging sensor are fed into a lightweight convolutional neural network to regress the defocus distance, enabling fast focus positioning. This avoids the weak signal and inaccurate focusing often encountered when adjusting focus directly on low-contrast sea surfaces. In the fine-focusing stage, a hybrid strategy integrating hill-climbing search and inverse correction is adopted. By evaluating the image sharpness function, the system accurately locks onto the optimal focal plane, forming intelligent closed-loop control. Experiments show that this method, which combines imaging of the built-in calibration target with deep-learning-based coarse focusing, significantly improves focusing efficiency. Compared with traditional full-range search strategies, the focusing speed is increased by approximately 60%. While ensuring high accuracy and strong adaptability, the proposed approach effectively enhances the overall imaging performance of aerial cameras in low-contrast maritime conditions.
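The hill-climbing fine-focus stage lends itself to a compact sketch: climb the image-sharpness function in one direction, and on an overshoot reverse direction and halve the step. This is an illustrative model only; the paper's exact hybrid of hill-climbing and inverse correction is not specified in the abstract, and `sharpness` stands in for whatever sharpness metric the system evaluates.

```python
def hill_climb_focus(sharpness, pos, step, min_step, max_iter=100):
    """Hill-climbing fine-focus search over an image-sharpness metric.

    Illustrative sketch: sharpness(p) evaluates the focus-quality
    function at lens position p; pos is the coarse-focus starting point.
    """
    direction = 1
    best = sharpness(pos)
    for _ in range(max_iter):
        if step < min_step:
            break                      # step fully refined: focus locked
        cand = pos + direction * step
        score = sharpness(cand)
        if score > best:
            pos, best = cand, score    # keep climbing in this direction
        else:
            direction = -direction     # overshot the peak: reverse...
            step //= 2                 # ...and halve the step (fine search)
    return pos
```

With a unimodal sharpness curve, the search converges to the peak in a handful of evaluations rather than a full-range sweep, which is the source of the reported speedup over full-range search.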
- Research Article
- 10.59277/rrst-ee.2025.4.15
- Nov 17, 2025
- REVUE ROUMAINE DES SCIENCES TECHNIQUES — SÉRIE ÉLECTROTECHNIQUE ET ÉNERGÉTIQUE
- Devi Poonguzhali Singaravelu + 3 more
This paper proposes a new method for verifying arithmetic circuit operations based on the Vedic mathematics sutra (formula) “Gunita Samuccaya”. According to this sutra, our proposed method verifies arithmetic operations, e.g., c = a + b, by checking whether the sum of the digits of 'a' and 'b' equals the sum of the digits of 'c' for a correct computation. In contrast to built-in self-test (BIST) schemes, our approach is simpler, eliminating traditional test pattern generators and output analyzers while achieving 100% fault coverage for simple arithmetic operations. Our system, designed in Verilog hardware description language (HDL), is real-time, memoryless, and scalable. This proposed testing method revolutionizes arithmetic circuit verification, guaranteeing the integrity of intricate digital systems where mathematical precision is vital.
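The digit-sum rule is the classical casting-out-nines congruence, which can be sketched as follows (an illustrative software model, not the authors' Verilog design):

```python
def digit_sum(n: int) -> int:
    """Repeatedly sum decimal digits until a single digit remains."""
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

def check_addition(a: int, b: int, c: int) -> bool:
    """Gunita-Samuccaya-style check for c = a + b: the combined digit sum
    of the operands must equal the digit sum of the result (a mod-9 test)."""
    return digit_sum(digit_sum(a) + digit_sum(b)) == digit_sum(c)
```

For example, `check_addition(123, 456, 579)` passes while `check_addition(123, 456, 580)` fails. Note that a bare mod-9 check cannot distinguish results that differ by a multiple of nine (e.g. c = 18 also passes for a = 1, b = 8), which bounds what any pure digit-sum congruence can detect on its own.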
- Research Article
- 10.64751/ijdim.2025.v4.n4.pp351-355
- Nov 5, 2025
- International Journal of Data Science and IoT Management System
- Dr. Ravi Bolimera + 4 more
With the rapid growth of Integrated Circuit (IC) technology, circuit complexity has also increased, creating a demand for self-testability in hardware to mitigate product failures. Built-in self-test (BIST) is a technique that meets this demand and offers a cost-effective alternative to expensive external test equipment. This paper presents the design and implementation of a Universal Asynchronous Receiver Transmitter (UART) with self-testing ability. To attain compact, stable, and reliable data transmission, the UART is designed in Verilog HDL and synthesized on a Spartan-2 FPGA. The UART operates at a baud rate of 4 Mbps and uses the RS-422 standard.
- Research Article
- 10.52783/cana.v32.6105
- Oct 4, 2025
- Communications on Applied Nonlinear Analysis
- Sandeep Santosh
In today's VLSI SoC designs, embedded memories dominate die area and often limit manufacturing yield. Design-for-Testability (DFT) is crucial for detecting manufacturing defects early and ensuring robust design validation. Among DFT techniques, Memory Built-In Self-Test (MBIST) plays a pivotal role by embedding test and repair logic within memory subsystems. MBIST facilitates detection of memory faults such as stuck-at, transition, coupling, and retention faults, using March algorithms and fault models.
- Research Article
- 10.38124/ijisrt/25aug1309
- Sep 4, 2025
- International Journal of Innovative Science and Research Technology
- Ahmed Salahuddin Suhaib + 1 more
The Memory Built-In Self-Test (MBIST) is the standard for testing dense embedded memories that dominate modern SoCs; however, a critical trade-off exists between the test time and fault coverage. While comprehensive algorithms such as March C- (10n) are slow, faster algorithms such as MATS++ (6n) are often preferred, although both aim to detect critical Address Decoder Faults (AFs). This study presents an MBIST controller employing a novel March (5n) algorithm that bridges this gap, offering robust fault coverage with superior efficiency. The core innovation of the algorithm is the "address-as-data" paradigm, which uses the memory address (a) and its bitwise complement (~a) as test patterns to efficiently detect Stuck-at (SAF), Transition (TF), and Address Decoder (AF) faults. The proposed FSM-based controller has been designed in Verilog and validated on a Xilinx Zynq-7000 series FPGA platform. Experimental evaluation demonstrates that the March (5n) algorithm achieves significant reductions in test time compared to established approaches, with minimal resource overhead. These findings highlight the effectiveness of the March (5n) algorithm in achieving a balanced trade-off between speed and fault coverage, positioning it as a practical candidate for deployment in high-volume, cost-sensitive applications.
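The address-as-data paradigm can be modeled in software as a March-style walk over the memory. The element ordering below is a hypothetical reconstruction for illustration (the abstract does not spell out the exact 5n sequence), and `mem_write`/`mem_read` stand in for the memory interface:

```python
def march_5n_address_as_data(mem_write, mem_read, n_addr, width):
    """Illustrative 5n 'address-as-data' memory test: each address a is
    exercised with a itself and its bitwise complement ~a as data.
    Returns True if every read-back matches, False on the first mismatch."""
    mask = (1 << width) - 1
    # Element 1 (1n): ascending, write the address as data
    for a in range(n_addr):
        mem_write(a, a & mask)
    # Element 2 (2n): ascending, read back a, then write the complement
    for a in range(n_addr):
        if mem_read(a) != (a & mask):
            return False
        mem_write(a, ~a & mask)
    # Element 3 (2n): descending, read back the complement, restore a
    for a in reversed(range(n_addr)):
        if mem_read(a) != (~a & mask):
            return False
        mem_write(a, a & mask)
    return True
```

One write pass plus two read-and-rewrite passes gives the 5n operation count; because every address carries a unique data word and its complement, stuck-at, transition, and many address-decoder faults change at least one read-back value.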
- Research Article
- 10.48175/ijarsct-28675
- Aug 21, 2025
- International Journal of Advanced Research in Science, Communication and Technology
- Sandeep Gupta
Low power consumption is a critical demand in the design and testing of modern VLSI architectures and Systems-on-Chip (SoCs). In test mode, uncontrolled switching activity can push a device's power requirements far above its functional levels, creating thermal stress, potentially damaging the circuit, and making tests expensive. Several power-conscious test methods have been developed to counter these issues; they target low switching activity and regulation of peak power without sacrificing fault coverage. These methods encompass state-of-the-art test vector optimization, including vector reordering, compression, and X-filling, as well as scan chain reordering and clock gating schemes. Energy efficiency is further increased with low-power Built-In Self-Test (BIST) design and with adaptive testing guided by real-time power monitoring. Hierarchical and modular Design-for-Test (DFT) methodologies make scalable, power-efficient testing of large and complex systems possible. Real-time power adaptation can be achieved through techniques such as Dynamic Voltage and Frequency Scaling (DVFS), while AI-based algorithms and metaheuristic methods can be used to plan and optimize tests. This review expounds on these options in detail, presenting power management as a major enabling factor in robust, scalable, and energy-efficient testing of next-generation digital systems.
- Research Article
- 10.3390/jlpea15030047
- Aug 15, 2025
- Journal of Low Power Electronics and Applications
- Geethu Remadevi Somanathan + 2 more
This paper presents a power-aware Reconfigurable Parameterizable Pseudorandom Pattern Generator (RP-PRPG) for applications including built-in self-test (BIST) and cryptography. Linear Feedback Shift Registers (LFSRs) are widely used for pattern generation because of their efficiency and simplicity, but circuit modifications can improve both the diversity of the generated patterns and their power consumption. This work explores enhancements to LFSR structures that achieve a broader range of patterns with reduced power consumption for BIST-based applications. The proposed circuit, built on the LFSR platform, can be programmed to generate patterns corresponding to different LFSR configurations. A diverse set of patterns for any circuit arrangement can be created using any characteristic polynomial and the circuit's reseeding capability. The circuit combines a double-tier linear feedback circuit with zero-forcing methods, resulting in more than 70% transition reduction and thus significantly lower power dissipation. The behaviour of the proposed circuit is assessed for characteristic polynomials with degrees ranging from 4 to 128 using various LFSR topologies. For reconfigurable HDL and ASIC synthesis, the power-aware RP-PRPG can be used to generate an efficient set of stream ciphers as well as for applications involving the scan-for-test protocol.
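The base mechanism, a Fibonacci-style LFSR with a programmable characteristic polynomial and reseeding, can be modeled behaviorally as below. This is a plain software sketch for illustration, not the authors' RP-PRPG circuit; tap positions are 1-indexed bit positions of the chosen polynomial:

```python
def lfsr_patterns(poly_taps, degree, seed, count):
    """Fibonacci LFSR pattern generator with a programmable characteristic
    polynomial (tap positions) and an arbitrary reseed value.
    Returns `count` successive register states."""
    mask = (1 << degree) - 1
    state = seed & mask
    assert state != 0, "an all-zero seed locks up a standard LFSR"
    out = []
    for _ in range(count):
        out.append(state)
        # XOR the tapped bits together to form the feedback bit
        fb = 0
        for t in poly_taps:
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & mask
    return out
```

With the maximal-length degree-4 polynomial x^4 + x^3 + 1 (taps 4 and 3) and any nonzero seed, the register cycles through all 15 nonzero states before repeating; reseeding mid-run jumps to a different point of the sequence, which is how a reseedable generator diversifies its pattern set.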
- Research Article
- 10.1109/tcad.2025.3536384
- Aug 1, 2025
- IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
- Irith Pomeranz
In-field testing is important for detecting defects that escaped manufacturing tests or occurred during the lifetime of a chip. When in-field testing is performed periodically, some of the test periods may be shorter than others. Short test periods should focus on the faults that are the most likely to occur with aging, whereas long test periods can apply a more comprehensive test set. This article studies this scenario in the context of a logic built-in self-test (LBIST) approach that partitions compressed tests into subvectors for on-chip storage, and combines subvectors into compressed tests on-chip using counters. This approach has low storage requirements, allows complete fault coverage to be achieved, and uses a moderate number of tests. The problem of applying a small number of tests during a short testing period is formulated as a static problem of rearranging the subvectors (with possible repetitions and modification) such that the first $n_1$ subvectors are sufficient for detecting a subset of faults $F_1$, and $n_1$ is as small as possible. Experimental results for benchmark circuits in an academic environment demonstrate the number of tests and overall storage requirements.
- Research Article
- 10.3390/electronics14142803
- Jul 11, 2025
- Electronics
- Yuseok Jeon + 1 more
In this paper, RF sub-modules with millimeter-wave functionality are considered and verified for designing an ultra-wideband receiver (18–40 GHz) required in the electronic support measure (ESM) field. The pre-design of an ultra-wideband super-heterodyne receiver (SHR) requires a front-end module (FEM) with four units in the system. Each FEM has four channels with the same path, while the quadrature millimeter down-converter (QMDC) needs a converting function that uses a broadband mixer. The FEM provides a built-in test (BIT) path to the antenna ports prior to system field installation. Each path of the QMDC requires the consideration of several factors, such as down-conversion, broadband gain flatness, and high isolation. As this is an RF module requiring high-frequency and wideband characteristics, it is necessary to identify risk factors in advance within a predictable range. Accordingly, the blind-mate A (BMA) connector connection method, the phase-alignment test method for the down-conversion structure, and the method for blocking LO-signal inflow into the IF path were analyzed and designed.
- Research Article
- 10.1007/s13198-025-02858-6
- Jul 1, 2025
- International Journal of System Assurance Engineering and Management
- Peter Söderholm + 1 more
The purpose of this paper is to describe a risk-based scenario analysis, using an Event Tree Analysis approach, to support testability considerations and test level integration in the design or dependability improvement phases of technical systems. The proposed scenario analysis includes fault recognition and fault localization efforts and their associated hazards of false alarms, unrecognized faults, and un-localized faults. The combination of these single hazards into the hazard of No Fault Found (NFF) events, and the consequences to safety, dependability, and cost, are also discussed. The paper focuses on Built-in Test (BIT), i.e., the capability of a test system to perform automatic fault recognition and fault localization; the hardware aspects, related to Built-in Test Equipment (BITE), are not specifically considered. An earlier version of this paper was presented at the IAI2023 Congress.
- Research Article
- 10.52953/zbpe9349
- Jun 25, 2025
- ITU Journal on Future and Evolving Technologies
- Indhuja Gudluru + 2 more
Vision Transformers (ViTs) have transformed the field of computer vision by shifting from traditional Convolutional Neural Networks (CNNs) to attention-based architectures that process input images as sequences of patches. ViTs achieve enhanced performance in many tasks, such as image classification and object detection, due to their ability to capture global dependencies within input data. While their software implementations are widely adopted, deploying ViTs on hardware introduces several challenges, including fault tolerance in the presence of hardware failures, real-time reliability, and high computational requirements. Permanent faults in processing elements, interconnections, or memory subsystems lead to incorrect computations and degrade system performance. This paper proposes a fault-tolerant hardware implementation of ViTs to overcome these challenges, integrating real-time fault detection and recovery mechanisms. The architecture includes four primary units: patch embedding, encoder, decoder, and Multi-Layer Perceptron (MLP). These are supported by fault-tolerant components such as lightweight recompute units, a centralized Built-In Self-Test (BIST), and a learning-based decision-making system using a decision-tree machine learning model. The units are interconnected through a centralized global buffer for efficient data transfer, ensuring seamless operation even under fault conditions.
- Research Article
- 10.37745/ejcsit.2013/vol13n514064
- Jun 15, 2025
- European Journal of Computer Science and Information Technology
- Jayesh Kumar Pandey
Automotive and AI platforms are placing unprecedented demands on semiconductor reliability, uptime, and fault tolerance in an era where even momentary malfunctions can lead to catastrophic consequences. Traditional Design-for-Test (DFT) and Built-In Self-Test (BIST) methods—once sufficient for manufacturing validation—have evolved into sophisticated predictive test architectures capable of anticipating and preempting failures before they manifest at the system level. These advanced frameworks represent a fundamental paradigm shift from reactive to proactive fault management, incorporating continuous monitoring capabilities that track subtle parametric shifts indicative of emerging reliability issues. Runtime diagnostics now operate transparently alongside functional workloads, leveraging idle computational resources to execute targeted validation sequences without disrupting critical operations. AI-enhanced test analytics process vast quantities of telemetry data to identify complex correlations between operational parameters and potential failure modes, often detecting precursors to hardware failures hours or days before functional manifestation. Safety-aware self-test mechanisms implement hierarchical validation strategies with graduated test intensity based on operational context, concentrating resources on high-risk scenarios while minimizing overhead during normal operation. With a focus on real-time fault detection, comprehensive health monitoring, and rigorous compliance with functional safety standards like ISO 26262, these predictive test architectures are reshaping semiconductor validation and maintenance practices across multiple industries. The integration of explainable AI techniques further enhances deployment viability by providing transparency into prediction rationales, addressing critical requirements for regulatory approval in safety-critical applications. 
Through sophisticated on-chip sensors, adaptive testing schedules, and intelligent fault recovery mechanisms, predictive test architectures enable mission-critical systems to maintain essential functionality even when significant hardware degradation occurs.
- Research Article
- 10.37394/232014.2025.21.9
- May 16, 2025
- WSEAS TRANSACTIONS ON SIGNAL PROCESSING
- Radoslav Vasilev + 2 more
The present work is part of the development process of a distributed platform for an intelligent mobile robot, with the proposed vision module intended for future integration into the overall architecture. Thanks to its built-in simulation testing capabilities, the functionality of the module can be validated even in the absence of a fully constructed physical robot. The vision module is part of the platform's intelligent core, which conceptually and functionally unifies algorithms, modules, and interfaces that collectively enable environmental perception, information processing, and decision-making by the robot. The vision module integrates algorithms for object recognition, color classification, and QR code decoding, executing them in a logical sequence to ensure efficient processing of visual information. Furthermore, it serves as an entry point to a broader logical model within the intelligent core. The paper presents the role of the vision module in the platform, its connection with other modules, and the conceptual guidelines for future development.
- Research Article
- 10.55640/ijvsli-05-01-03
- May 9, 2025
- International journal of signal processing, embedded systems and VLSI design
- Vikas Nagaraj
With the architectural complexity of silicon in high-performance computing (HPC) and graphics processing units (GPUs) growing, reliability, scalability, and first-time-right silicon cannot be achieved without advanced Design for Test (DFT) methodologies. This paper addresses how DFT must be adapted to the characteristics of HPC and GPU environments: massive parallelism, deep pipelining, multiple clock and power domains, and rising thermal and power density. It covers basic techniques, including scan-based testing, built-in self-test (BIST), logic BIST (LBIST), and a modular and hierarchical test planning framework. The paper also studies the related key infrastructure, such as test access mechanisms (IJTAG, IEEE 1500), remote debug orchestration, and centralized test control units. Additionally, emerging trends such as AI/ML-enabled ATPG, in-field telemetry, predictive maintenance, and DFT innovations for chiplet-based and 3D-integrated architectures alter the test requirements for multi-die systems. It provides best practices in early DFT planning, modular IP reuse, scan chain optimization, and power-aware test pattern generation to obtain high test coverage while maintaining silicon performance. This work presents actionable insights for high-yield silicon design and validation in the next-generation compute platform landscape, aimed at silicon architects, DFT engineers, and verification professionals.
- Research Article
- 10.52783/jisem.v10i45s.8903
- Apr 30, 2025
- Journal of Information Systems Engineering and Management
- Vikas Nagaraj
With rising energy demands on both the memory and compute sides of high-performance computing (HPC), graphics, and artificial intelligence (AI) accelerators, energy-efficient, low-latency, and scalable semiconductor designs have become essential. As these industries develop, well-thought-out conservation in semiconductor architecture is necessary to pursue economic and environmental goals. This paper studies the role of low-power design verification in semiconductor architectures through key methodologies such as Design for Test (DFT) and GPU hardware validation. The verification process checks the performance, power, and functional requirements compulsory in low-power applications, and shows how DVFS, clock gating, and power gating techniques reduce power consumption. Scan chain insertion, built-in self-test (BIST), and boundary scan are DFT methodologies that help identify power inefficiencies early in the design cycle, before a portion of the design reaches production, where they would otherwise require a costly redesign. The paper also discusses the role of GPU hardware validation in guaranteeing that AI accelerators work effectively within power limits. It points out that, with the integration of power-aware simulation tools and cooperation among multidisciplinary teams, low-power design verification can help deliver energy-efficient, high-performance semiconductor devices. The paper closes by looking at the future of low-power design verification, which will continue to enable energy-efficient semiconductor technology by integrating AI-driven tools and quantum computing, contributing to an understanding of how verification is fundamental to sustainable and optimized design for future semiconductors.
- Research Article
- 10.11113/elektrika.v24n1.636
- Apr 29, 2025
- ELEKTRIKA- Journal of Electrical Engineering
- Suhaila Isaak + 3 more
A 16 × 2 two-dimensional (2-D) array of single-photon avalanche diodes integrated in a complementary metal-oxide-semiconductor (CMOS) process is presented. Each pixel is made up of an avalanche photodiode biased in the so-called Geiger mode, a quenching resistor, and a basic comparator. To implement on-chip photon-counting verification, a built-in self-test (BIST) module is added. Full integration yields a total cell area of 10218.70 μm² and a power consumption of 1.7837 mW, about 38% less than the 2.8567 mW of the prior design. The circuit is capable of running at a 200 MHz counting rate. The array's internal signals and status can be monitored during test or operation thanks to the built-in logic block observation module demonstrated in this paper. Adding Kogge-Stone adder and BIST circuits to the array design increases the effectiveness and efficiency of testing as well as the performance of the readout electronics. The ability to quickly adapt the design to suit a particular application is another undeniable advantage of CMOS integration, which also paves the way for on-chip data processing.
- Research Article
- 10.1080/19393555.2025.2496329
- Apr 26, 2025
- Information Security Journal: A Global Perspective
- Kalamani Chinnappa Gounder + 3 more
Built-in self-test (BIST) is essential for guaranteeing the dependability and efficiency of digital circuits. Linear feedback shift registers (LFSRs) are a common tool used in traditional BIST procedures to generate test patterns. However, non-linear register updation algorithms are being investigated as a result of BIST breakthroughs, promising notable gains in fault coverage and security. Non-linear register updation involves updating shift register contents by employing non-linear functions or state machines instead of linear feedback mechanisms. Switching to non-linear approaches offers various benefits: by generating test patterns that explore diverse state spaces of the circuit, they can detect flaws overlooked by standard linear patterns. Improved fault coverage is vital for detecting minute circuit abnormalities affecting reliability. Non-linear register updation also adds complexity, enhancing the security of BIST schemes: it creates test patterns that are harder to manipulate or predict, which lessens predictability and vulnerability to malicious attacks. This study examines modified BIST with non-linear register updation, focusing on applications, challenges, and key factors for efficient digital circuit testing. Engineers can ensure strong performance in crucial electronic systems by utilizing non-linear register updation to increase fault coverage, improve security, and boost reliability in digital circuit testing.
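The difference from an LFSR can be seen in a behavioral sketch where the feedback mixes XOR taps with an AND term, so the next state is a non-linear function of the current one. The particular feedback function below is an illustrative assumption, not one taken from the study:

```python
def nlfsr_patterns(degree, seed, count):
    """Illustrative non-linear feedback shift register (NLFSR): the
    feedback bit combines XOR taps with an AND of two state bits, so
    the state sequence is not a linear function of the seed.
    Returns `count` successive register states."""
    mask = (1 << degree) - 1
    state = seed & mask
    out = []
    for _ in range(count):
        out.append(state)
        b = lambda i: (state >> i) & 1          # bit i of the current state
        fb = b(degree - 1) ^ b(0) ^ (b(1) & b(2))  # AND term = non-linearity
        state = ((state << 1) | fb) & mask
    return out
```

Because of the AND term, successive states cannot be predicted by solving a linear recurrence over GF(2), which is what makes the resulting patterns harder to manipulate or predict than those of a plain LFSR.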
- Research Article
- 10.37547/tajet/volume07issue03-22
- Mar 27, 2025
- The American Journal of Engineering and Technology
- Vijayaprabhuvel Rajavel
This article addresses the issue of improving energy efficiency in the testing of system-on-chip (SoC) semiconductor systems, including heterogeneous computing cores and AI accelerators. An analysis of SoC architecture and existing Design for Testability (DFT) methodologies is presented, considering energy-saving techniques such as clock gating, power gating, and dynamic voltage and frequency scaling (DVFS). A literature review highlights the insufficient development of a comprehensive approach to reducing power consumption specifically during testing procedures. Practical examples, including Qualcomm Snapdragon, Apple A-Series, and Tesla FSD, demonstrate that integrating low-power techniques into DFT can significantly reduce energy consumption (by an average of 20–35%) without compromising test coverage quality. The proposed analysis confirms the effectiveness of combining traditional scan chains, built-in self-test (BIST), and boundary scan with power management mechanisms, contributing to reduced thermal loads and increased reliability of modern SoCs in mass production. The findings presented in this article will be of interest to leading researchers and practicing engineers in the fields of microelectronics, materials science, and energy optimization, aiming to integrate advanced testing methodologies with innovative energy-saving solutions to develop reliable, high-performance, and environmentally sustainable semiconductor systems.
- Research Article
- 10.36548/jei.2025.1.001
- Mar 1, 2025
- Journal of Electronics and Informatics
- Anjali R + 3 more
In modern digital systems, ensuring both high performance and reliability is essential, especially in fault-sensitive environments. This research introduces the design and implementation of a fault-tolerant Brent-Kung adder, integrated with an advanced Built-In Self-Test (BIST) framework. The Brent-Kung adder, known for its efficient carry propagation and speed optimization, is augmented with BIST techniques to enhance its reliability and testability in digital systems. A Linear Feedback Shift Register (LFSR) is used to produce pseudo-random test patterns, while a Multiple Input Signature Register (MISR) compresses the adder’s output into a compact signature for fault detection. The design is carried out in Verilog and synthesized using Xilinx Vivado 2019.1 to evaluate performance metrics, including area utilization, speed, and fault coverage. By combining the Brent-Kung adder's high-speed characteristics with a robust BIST framework, the research achieves an effective balance between performance and fault detection. This approach provides a promising solution for applications that require both computational efficiency and increased reliability in fault-sensitive environments.
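The MISR side of this framework can be modeled behaviorally: each output word of the circuit under test is XOR-folded into an LFSR-style register before the shift, compressing the whole response stream into one signature. This is a software sketch under assumed tap positions, not the authors' Verilog design:

```python
def misr_signature(responses, taps, width):
    """Multiple Input Signature Register: compact a stream of response
    words into a single signature. `taps` are 1-indexed feedback taps."""
    mask = (1 << width) - 1
    state = 0
    for r in responses:
        state ^= r & mask              # fold the response word into the state
        fb = 0
        for t in taps:                 # XOR of tapped bits = feedback bit
            fb ^= (state >> (t - 1)) & 1
        state = ((state << 1) | fb) & mask
    return state
```

For feedback polynomials whose transition map is invertible, a single corrupted response word always changes the final signature, so fault detection reduces to comparing one word against a golden signature; aliasing requires multiple errors that cancel each other out.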
- Research Article
- 10.1016/j.engappai.2024.109876
- Mar 1, 2025
- Engineering Applications of Artificial Intelligence
- Zhe Yang + 3 more
Optimization of Built-In Self-Test test chain configuration in 2.5D Integrated Circuits Using Constrained Multi-Objective Evolutionary Algorithm