Abstract

A time-interleaved analog-to-digital converter (TI-ADC) uses several sub-analog-to-digital converters (sub-ADCs) to achieve a high sampling rate. Its applications include communication systems, oscilloscopes, and healthcare instruments. However, sub-ADC channel mismatches such as offset, gain, and sample-time mismatches can significantly degrade the performance of TI-ADCs. This paper proposes a fully digital foreground calibration technique for the TI-ADC, comprising mismatch correction and estimation blocks. Unlike existing techniques, the proposed technique requires low computational resources. Because complex computations in a field-programmable gate array (FPGA) are mainly carried out by digital signal processing (DSP) blocks, the mismatch correction block maps each computing unit in the FPGA onto a multiply-adder to keep DSP consumption low. To further reduce DSP usage, the selection of an optimal number of taps for the finite impulse response (FIR) filter used for mismatch correction is discussed. The optimal-tap filter, which uses the fewest DSP blocks while satisfying the required signal-to-noise and distortion ratio flat-area bandwidth, is presented. A real-time hardware correction block was implemented for an 8 GS/s TI-ADC system with two sub-ADC channels, and the measured results verify the low computing-resource consumption of the proposed calibration technique.
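The abstract describes per-channel offset, gain, and sample-time (skew) correction realized with an FIR filter whose taps map onto FPGA multiply-add (DSP) resources. As a rough illustration only, and not the paper's implementation, the sketch below corrects one channel of a two-channel TI-ADC in Python using a short windowed-sinc fractional-delay FIR; the function names, tap count, and mismatch values are all hypothetical.

```python
import numpy as np

def fractional_delay_fir(delay, n_taps=15):
    """Windowed-sinc FIR approximating a delay of `delay` sub-ADC samples.

    Odd `n_taps` keeps the filter centred; normalization fixes unity DC gain.
    """
    center = (n_taps - 1) / 2
    n = np.arange(n_taps)
    h = np.sinc(n - center - delay) * np.hamming(n_taps)
    return h / np.sum(h)

def correct_channel(samples, offset_est, gain_est, skew_est, n_taps=15):
    """Remove estimated offset/gain, then FIR-correct the timing skew.

    Each output sample is a sum of products, the kind of multiply-add
    structure that maps onto FPGA DSP blocks.
    """
    x = (samples - offset_est) / gain_est
    h = fractional_delay_fir(skew_est, n_taps)   # skew in sub-ADC sample periods
    return np.convolve(x, h, mode="same")

# Toy two-channel TI-ADC: channel 1 carries offset, gain, and timing skew.
fs, f_in = 8e9, 997e6                        # aggregate rate and test tone (assumed)
t = np.arange(4096) / fs
ch0 = np.sin(2 * np.pi * f_in * t[0::2])
ch1 = 1.02 * np.sin(2 * np.pi * f_in * (t[1::2] + 3e-12)) + 0.01

ch1_corr = correct_channel(ch1, offset_est=0.01, gain_est=1.02,
                           skew_est=3e-12 * fs / 2)   # skew / sub-ADC period
out = np.empty(len(ch0) + len(ch1_corr))
out[0::2], out[1::2] = ch0, ch1_corr         # re-interleave the corrected stream
```

The tap count here is arbitrary; as the abstract notes, choosing the smallest tap count that still meets the required flat-area bandwidth is what minimizes DSP usage.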
