This paper presents a novel background calibration method for pipelined analog-to-digital converters (ADCs) based on a time-delay neural network (TDNN) optimized with a genetic algorithm (GA). The proposed technique uses the TDNN to construct enriched feature sets, significantly improving the calibration of nonlinear errors that exhibit memory effects, and exploits the GA's global optimization capability for feature selection, reducing the feature dimension and thereby alleviating the neural network's computational burden. A parallel pipeline architecture is devised for the calibration circuit and implemented on an FPGA to perform forward inference. The inference circuit is synthesized in a TSMC 90 nm CMOS process, consuming 40.11 mW and occupying 0.45 mm². MATLAB simulations of a 14-bit pipelined ADC show that the proposed calibration improves the SFDR from 59.77 dB to 165.52 dB and the ENOB from 8.79 bits to 19.23 bits, exceeding the target ADC's specifications. Moreover, the feature dimensionality is reduced by up to 34% without compromising calibration performance.
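To make the described flow concrete, the sketch below illustrates the two ingredients named in the abstract: building delayed-tap features so a network can model errors with memory effects, and a GA searching over binary feature masks to shrink the input dimension. It is a minimal illustration, not the authors' implementation; the tap count, network width, GA settings, fitness surrogate, and all function names are hypothetical assumptions introduced here for exposition.

```python
import numpy as np

# Illustrative (hypothetical) parameters; the paper's actual TDNN/GA settings are not given here.
N_TAPS = 8          # past ADC output samples used as candidate input features
N_HIDDEN = 16       # hidden-layer width of a small stand-in network
POP_SIZE = 20       # GA population size
N_GEN = 30          # GA generations

rng = np.random.default_rng(0)

def build_tap_features(raw_codes, n_taps=N_TAPS):
    """Stack the current sample with its n_taps-1 predecessors so the
    network can capture nonlinear errors that exhibit memory effects."""
    x = np.asarray(raw_codes, dtype=float)
    feats = np.stack([np.roll(x, k) for k in range(n_taps)], axis=1)
    return feats[n_taps - 1:]          # drop rows containing wrapped samples

def fitness(feats, target, mask):
    """Fitness of a binary feature mask: calibration MSE of a tiny
    random-feature network fit by least squares (a cheap stand-in for
    full TDNN training inside the GA loop), plus a size penalty."""
    sel = feats[:, mask.astype(bool)]
    if sel.shape[1] == 0:
        return np.inf
    W = rng.standard_normal((sel.shape[1], N_HIDDEN))
    H = np.tanh(sel @ W)                          # hidden activations
    w_out, *_ = np.linalg.lstsq(H, target, rcond=None)
    mse = np.mean((H @ w_out - target) ** 2)
    return mse + 1e-4 * mask.sum()                # favor fewer features

def genetic_feature_selection(feats, target):
    """Simple GA over binary masks: elitism, tournament selection,
    uniform crossover, and bit-flip mutation."""
    pop = rng.integers(0, 2, size=(POP_SIZE, feats.shape[1]))
    for _ in range(N_GEN):
        fit = np.array([fitness(feats, target, m) for m in pop])
        new_pop = [pop[np.argmin(fit)].copy()]    # keep the best mask
        while len(new_pop) < POP_SIZE:
            a, b = rng.choice(POP_SIZE, 2, replace=False)
            p1 = pop[a] if fit[a] < fit[b] else pop[b]
            c, d = rng.choice(POP_SIZE, 2, replace=False)
            p2 = pop[c] if fit[c] < fit[d] else pop[d]
            cross = rng.random(feats.shape[1]) < 0.5
            child = np.where(cross, p1, p2)       # uniform crossover
            flip = rng.random(feats.shape[1]) < 0.05
            child = np.where(flip, 1 - child, child)  # bit-flip mutation
            new_pop.append(child)
        pop = np.array(new_pop)
    fit = np.array([fitness(feats, target, m) for m in pop])
    return pop[np.argmin(fit)]

# Toy demonstration with synthetic data standing in for raw ADC codes.
raw = rng.standard_normal(4096)
feats = build_tap_features(raw)
ideal = feats[:, 0] - 0.01 * feats[:, 1] ** 2     # fabricated "ideal" output
best_mask = genetic_feature_selection(feats, ideal)
print("selected taps:", np.flatnonzero(best_mask))
```

In this sketch the GA prunes delayed taps that do not help predict the ideal output, which mirrors how dimensionality reduction can cut the forward-inference cost of the calibration network without degrading accuracy.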