Abstract

The GigaBit Transceiver (GBT) [1] system has been developed to replace the Timing, Trigger and Control (TTC) system [2], currently used at the LHC, as well as to provide data transmission between on-detector and off-detector components in future sLHC detectors. A VHDL version of the GBT-SERDES, designed for FPGAs, was released in March 2010 as a GBT-FPGA Starter Kit for future GBT users and for off-detector GBT implementation [3]. This code was optimized for resource utilization [4], as the GBT protocol is very demanding. It was not, however, optimized for latency, which will be a critical parameter when the link is used in the trigger path. The GBT-FPGA Starter Kit firmware was first analyzed in terms of latency by examining the separate components of the VHDL version. Once the parts contributing most to the latency had been identified and modified, two possible optimizations were chosen, reducing the latency by a factor of three. The modifications were also analyzed in terms of logic utilization. The latency optimization results were compared with measurements on a Virtex 6 ML605 development board [5] equipped with an XC6VLX240T (speed grade -1, package FF1156). Bit error rate tests were also performed to ensure error-free operation. The two final optimizations were analyzed for utilization and compared with the original code distributed in the Starter Kit.

Highlights

  • The GigaBit Transceiver (GBT) [1] system has been developed to replace the Timing, Trigger and Control (TTC) system [2], currently used at the LHC, as well as to provide data transmission between on-detector and off-detector components in future sLHC detectors

  • The test system consisted of a Pseudo Random Number Generator (PRNG) to generate a test pattern, a GBT-transmitter, a GBT-receiver, one Gigabit Transceiver and a comparator, all contained in a Virtex 6 FPGA (a sketch of this pattern generator and checker follows this list)

  • One of these was situated on the ML605 board, which was used for the test system, while the other was situated on a Xilinx ML507 evaluation board [7]
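
The Starter Kit defines the actual test logic; purely as an illustration of the PRNG-plus-comparator idea in the highlight above, the VHDL sketch below generates a pseudo-random word on every transmit frame clock and counts mismatching words on the receive side. The entity name, the 32-bit width, the LFSR polynomial and all port names are assumptions for illustration, not the GBT-FPGA code.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

-- Illustrative pattern generator and checker (assumed names and widths).
entity prbs_gen_check is
  port (
    tx_clk40  : in  std_logic;                       -- transmit-side frame clock
    rx_clk40  : in  std_logic;                       -- recovered receive-side frame clock
    reset     : in  std_logic;
    rx_locked : in  std_logic;                       -- checker runs only once the link is up
    tx_data   : out std_logic_vector(31 downto 0);   -- pattern fed to the GBT transmitter
    rx_data   : in  std_logic_vector(31 downto 0);   -- pattern returned by the GBT receiver
    error_cnt : out unsigned(31 downto 0)            -- number of mismatching words seen
  );
end entity prbs_gen_check;

architecture rtl of prbs_gen_check is
  -- Next state of a 32-bit Fibonacci LFSR (taps 32, 22, 2, 1).
  function lfsr_next(s : std_logic_vector(31 downto 0)) return std_logic_vector is
    variable fb : std_logic;
  begin
    fb := s(31) xor s(21) xor s(1) xor s(0);
    return s(30 downto 0) & fb;
  end function;

  signal tx_lfsr : std_logic_vector(31 downto 0) := x"00000001";
  signal rx_lfsr : std_logic_vector(31 downto 0) := x"00000001";
  signal errors  : unsigned(31 downto 0)          := (others => '0');
begin
  -- Transmit side: free-running pseudo-random word per frame clock.
  process (tx_clk40)
  begin
    if rising_edge(tx_clk40) then
      if reset = '1' then
        tx_lfsr <= x"00000001";
      else
        tx_lfsr <= lfsr_next(tx_lfsr);
      end if;
    end if;
  end process;
  tx_data <= tx_lfsr;

  -- Receive side: reseed from the incoming word until the link is locked,
  -- then predict every following word and count mismatches.
  process (rx_clk40)
  begin
    if rising_edge(rx_clk40) then
      if reset = '1' or rx_locked = '0' then
        rx_lfsr <= rx_data;
        errors  <= (others => '0');
      else
        rx_lfsr <= lfsr_next(rx_lfsr);
        if rx_data /= lfsr_next(rx_lfsr) then
          errors <= errors + 1;
        end if;
      end if;
    end if;
  end process;
  error_cnt <= errors;
end architecture rtl;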


Summary

Latency optimization

Our aim was to minimize the latency contribution from every single component within the chain. The largest latency (seven 40 MHz cycles) comes from multiplexing the data from 40 MHz to 120 MHz (Mux). As described previously, this is realized with a dual-port memory module, where optimizing the corresponding control unit can improve the timing performance significantly, leading to two 40 MHz cycles (Opt. 1). To decrease this latency to one 40 MHz cycle, the Mux was replaced with a register controlled by a finite state machine working at 120 MHz (Opt. 2). Three 40 MHz cycles are contributed by the modified GBT transmitter code and three 40 MHz cycles plus one 120 MHz cycle by the modified GBT receiver code. This optimization does not give the shortest latency but should save logic resources, since fewer registers are used.
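
As a rough picture of what Opt. 2 replaces the memory-based Mux with, the sketch below registers one 120-bit frame and lets a small state machine running at 120 MHz select which 40-bit slice goes to the transceiver each cycle. The frame and word widths, the MSB-first slice order, and all entity, signal and port names are assumptions for illustration and are not taken from the GBT-FPGA code.

library ieee;
use ieee.std_logic_1164.all;

-- Illustrative 40 MHz to 120 MHz multiplexer built from a single frame
-- register and a small FSM, instead of a dual-port memory (assumed names).
entity gbt_tx_mux_reg is
  port (
    clk120       : in  std_logic;                       -- 120 MHz word clock
    reset        : in  std_logic;
    frame_strobe : in  std_logic;                       -- one 120 MHz pulse per 40 MHz frame
    frame_in     : in  std_logic_vector(119 downto 0);  -- scrambled and encoded GBT frame
    word_out     : out std_logic_vector(39 downto 0)    -- 40-bit word towards the transceiver
  );
end entity gbt_tx_mux_reg;

architecture rtl of gbt_tx_mux_reg is
  type slice_t is (S0, S1, S2);
  signal slice     : slice_t := S0;
  signal frame_reg : std_logic_vector(119 downto 0) := (others => '0');
begin
  process (clk120)
  begin
    if rising_edge(clk120) then
      if reset = '1' then
        slice    <= S0;
        word_out <= (others => '0');
      elsif frame_strobe = '1' then
        -- Capture the whole frame once; the first slice is registered on the
        -- same edge, so no intermediate memory is needed.
        frame_reg <= frame_in;
        word_out  <= frame_in(119 downto 80);
        slice     <= S1;
      else
        case slice is
          when S1 =>
            word_out <= frame_reg(79 downto 40);
            slice    <= S2;
          when S2 =>
            word_out <= frame_reg(39 downto 0);
            slice    <= S0;
          when others =>
            null;  -- idle until the next frame strobe
        end case;
      end if;
    end if;
  end process;
end architecture rtl;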

Utilization studies
Test design and measurements
Conclusion