Calibration of models and data structures recurs in a large number of cross-cutting applications, from finance to engineering. Even though numerous well-established calibration techniques exist for each application sector, Neural Networks (NNs) can improve performance. For instance, Tapped Delay-Line Time-to-Digital Converters (TDL-TDCs) implemented in Field Programmable Gate Arrays (FPGAs) are increasingly used in a variety of research applications, such as time-resolved spectroscopy and medical imaging, mainly for their high precision and flexibility. Decoding of the information sampled from the TDL, together with calibration to compensate for non-idealities (i.e., Bubble Errors, BEs, and Process–Voltage–Temperature fluctuations, PVTs), is carried out to convert digital codes into time units. The impact of Machine Learning (ML) on this fundamental process has not yet been investigated. In this paper, focusing on advanced FPGA devices (i.e., 28-nm, 20-nm, and 16-nm), we propose an approach based on NNs running in Python on a standalone PC to identify the optimal conversion from digital codes to timestamps, and we compare it with the classical fully FPGA-based solution from the literature. Experimental validations are performed on Artix-7 (XC7A100TFG256-2) and Kintex UltraScale (XCKU040-FFVA1156-2-E) devices, in 28-nm and 20-nm technology nodes, achieving precisions of 12.9 ps r.m.s. and 4.85 ps r.m.s., respectively. These results are in line with the state of the art, demonstrating that in 28-nm technology the bubble compression algorithm is sufficient to achieve high precision, while a reordering mechanism is crucial to compensate for BEs in the 16/20-nm technology nodes.
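To make the code-to-timestamp conversion mentioned above concrete, the sketch below illustrates the classical code-density (statistical) calibration of a TDL-TDC on synthetic data, followed by a small neural regressor that learns the same code-to-picosecond mapping. It is only a minimal illustration: the bin count, clock period, synthetic bin-width model, and the scikit-learn MLPRegressor are assumptions for demonstration, not the architecture or training procedure used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# --- Classical code-density (statistical) calibration -----------------------
# Hypothetical setup: hits uncorrelated with the system clock spread uniformly
# over one clock period, so the hit count per TDL bin is proportional to the
# bin width. All numbers below are illustrative assumptions.
rng = np.random.default_rng(0)
N_BINS = 128            # taps after decoding (assumed)
T_CLK_PS = 2500.0       # clock period in picoseconds (assumed)

# Simulated acquisition: non-uniform bin widths mimic TDL non-idealities.
bin_widths = rng.gamma(shape=4.0, scale=1.0, size=N_BINS)
bin_widths *= T_CLK_PS / bin_widths.sum()
p = bin_widths / bin_widths.sum()
raw_codes = rng.choice(N_BINS, size=200_000, p=p)

# Code-density LUT: each raw code maps to the centre of its measured bin.
hist = np.bincount(raw_codes, minlength=N_BINS).astype(float)
widths_ps = hist / hist.sum() * T_CLK_PS
edges_ps = np.concatenate(([0.0], np.cumsum(widths_ps)))
lut_ps = edges_ps[:-1] + widths_ps / 2.0     # code -> timestamp [ps]

# --- NN-based mapping (illustrative only) ------------------------------------
# A small regressor learns the same code -> timestamp relation offline in
# Python; the paper's actual network and training details may differ.
X = np.arange(N_BINS, dtype=float).reshape(-1, 1) / N_BINS
nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0)
nn.fit(X, lut_ps)
print("max |NN - LUT| deviation [ps]:", np.max(np.abs(nn.predict(X) - lut_ps)))
```

In practice, the learned (or tabulated) mapping replaces a linear code-to-time conversion so that bin-width non-uniformity from PVT fluctuations no longer degrades the timestamp precision; bubble-error handling (compression or reordering) is applied to the thermometer code before this conversion.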