Abstract

In this paper, we analyze the impact of characteristic impedance variations among standards on the accuracy of the thru-reflect-line (TRL) calibration technique. The impedance transformer method is adopted to derive expressions for the errors in the calibration coefficients caused by manufacturing tolerances. We find that three factors affect the magnitude of the errors in the calibration coefficients (the c/a and b terms), which are crucial for obtaining the final calibrated results. The first factor is related to the original parameters of the error networks (test fixtures): the larger the insertion loss, the smaller the error in b; the error in c/a may show the opposite trend if the error network is lossy rather than lossless. The second factor is the phase contribution (one of the three multipliers in the derived error expression): the magnitude of this error-contributing term is approximately equal to the ratio of two hyperbolic sine functions whose arguments are the length of the Line and the length difference between the Line and the Thru, respectively. The third factor comes from the impedance difference between the Thru and the Line: the smaller the impedance variation, the smaller the error. The error analysis presented here can help engineers evaluate calibration accuracy by examining the error-contributing terms. It can also be used to guide test fixture design so as to maximize TRL's immunity to transmission-line characteristic impedance variations.
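The abstract describes the phase-contribution term only qualitatively, as a ratio of two hyperbolic sines of the Line length and the Thru-Line length difference. The sketch below illustrates how such a term could be evaluated over frequency; the exact expression appears in the paper, so the assumed form |sinh(γ·L_line) / sinh(γ·(L_line − L_thru))|, the function name `phase_contribution`, and all parameter values (attenuation constant, effective permittivity, standard lengths) are hypothetical placeholders, not the authors' derivation.

```python
# Illustrative sketch only. The exact error expression is given in the paper;
# here we ASSUME the phase-contribution magnitude has the form
#   |sinh(gamma * L_line) / sinh(gamma * (L_line - L_thru))|,
# where gamma = alpha + j*beta is the complex propagation constant.
# All numeric values below are hypothetical.
import numpy as np

def phase_contribution(freq_hz, L_line, L_thru, alpha_np_per_m=0.5, eps_eff=3.0):
    """Magnitude of the assumed phase-contribution term versus frequency."""
    c0 = 299_792_458.0                                    # speed of light, m/s
    beta = 2 * np.pi * freq_hz * np.sqrt(eps_eff) / c0    # phase constant, rad/m
    gamma = alpha_np_per_m + 1j * beta                    # propagation constant, 1/m
    return np.abs(np.sinh(gamma * L_line) / np.sinh(gamma * (L_line - L_thru)))

# Sweep 1-20 GHz for a 20 mm Line and a 10 mm Thru (hypothetical lengths).
f = np.linspace(1e9, 20e9, 500)
mag = phase_contribution(f, L_line=0.020, L_thru=0.010)
```

Under this assumed form, the term grows large wherever the electrical length of the Thru-Line difference approaches a multiple of 180 degrees (the denominator nearly vanishes for low-loss fixtures), which is consistent with the standard TRL guideline of keeping the Line's excess phase away from 0 and 180 degrees.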
