Abstract

This paper focuses on the design of time-to-digital converters based on successive approximation (SA-TDCs) using binary-scaled delay lines in a feedforward architecture. The aim of the paper is, on the one hand, to provide a tutorial on SA-TDCs and, on the other, to contribute to the optimization of SA-TDC design. The proposed design optimization consists essentially of reducing circuit complexity and die area, as well as improving converter performance. The main contribution of the paper is the concept of reducing SA-TDC complexity by removing one of the two sets of delay lines in the feedforward architecture at the price of simple output decoding. For 12 bits of resolution, the complexity reduction is close to 50%. Furthermore, the paper presents the implementation of an 8-bit SA-TDC in 180 nm CMOS technology with a quantization step of 25 ps, obtained by the asymmetrical design of a pair of inverters and symmetrized multiplexer control.
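The successive-approximation principle described above can be illustrated with a short behavioral model: the input time interval is tested bit by bit against binary-scaled delays, from the largest (2^(N-1) steps) down to the smallest (1 step), exactly as a SAR ADC resolves amplitude. This is only an idealized software sketch, not the paper's circuit; the function name and parameters are illustrative.

```python
def sa_tdc(t_in_ps, n_bits, t_lsb_ps):
    """Idealized successive-approximation TDC model.

    t_in_ps:  input time interval in picoseconds
    n_bits:   converter resolution
    t_lsb_ps: quantization step (delay of the shortest line)
    """
    code = 0
    residue = t_in_ps
    # Test delays from the MSB line (2^(n-1) * LSB) down to the LSB line.
    for i in range(n_bits - 1, -1, -1):
        trial_delay = (1 << i) * t_lsb_ps
        if residue >= trial_delay:   # comparator (arbiter) decision
            residue -= trial_delay   # propagate through this delay line
            code |= 1 << i           # keep the bit set
    return code

# 8-bit converter with a 25 ps quantization step, as in the paper's implementation:
print(sa_tdc(1000, 8, 25))  # 1000 ps / 25 ps -> code 40
```

The model returns floor(t_in / t_lsb) clipped to the code range, which is the ideal transfer characteristic of an N-bit SA-TDC; real implementations deviate from it through delay-line mismatch and jitter.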

Highlights

  • Design of modern integrated circuits is driven mainly by the downscaling of complementary metal oxide semiconductor (CMOS) technology

  • This paper focuses on time-to-digital converters based on successive approximation (SA-TDCs) [22–37]

  • The reduction of SA-TDC complexity between the basic feedforward architecture with two sets of delay lines and the feedforward architecture with a single set of delay lines and output decoding can be evaluated by comparing the number of transistors used to build both versions of the converter


Summary

Introduction

The design of modern integrated circuits is driven mainly by the downscaling of complementary metal oxide semiconductor (CMOS) technology. The design of analog and mixed-signal circuits becomes more and more challenging because a reduction of transistor dimensions implies a decrease of the supply voltage. While older CMOS technologies used high supply voltages (from 15 V down to 2.5 V), below the 100 nm technology feature size the maximum operating voltage is near or below 1 V. This makes fine quantization of the amplitude increasingly difficult. According to the fundamental laws of MOS physics, the intrinsic gain of a single MOS transistor (gm/gds) decreases with lowering of the supply voltage [1,2].


