Abstract

High-frequency characterization of active and passive devices is carried out by extracting the scattering parameters of the component (often in a two-port configuration) using a vector network analyzer (VNA). This class of instruments makes it possible to characterize the response of the device under test (DUT) over a broad frequency range (exceeding 1 THz [Dio17]) at a user-defined reference plane. In order to define such reference planes and remove the imperfections of the measurement setup (i.e., cable and receiver conversion losses, amplitude and phase tracking errors, and other systematic errors), a calibration procedure [Ryt01] needs to be carried out prior to the measurement. The calibration procedure uses knowledge of the devices employed (i.e., the standards) to solve for the unknowns representing the response of the measurement setup (often referred to as error terms). The derived error terms then allow the imperfections of the setup to be removed during the measurement procedure. The accuracy of the calibration is therefore directly dependent on the accuracy with which the standards are known [Stu09]. In the literature, different calibration techniques have been presented, often trading (more) knowledge of the standard response for (lower) space occupancy (e.g., SOLR/LRM calibrations [Fer92; Dav90] versus TRL-type ones [Eng79]). Traditionally, calibration techniques requiring little knowledge of the standards (e.g., TRL, LRL) have been considered the most accurate, with TRL reaching metrology-institute precision while requiring only knowledge of the characteristic impedance of the line [Eng79]. In this chapter the focus is placed on TRL calibration techniques, due to their superior compatibility with millimeter- and sub-millimeter-wave characterization. For a more extensive discussion of the various calibration techniques the reader is referred to [Tep13]. Calibration techniques for on-wafer measurements typically consist of a probe-level (first-tier) calibration performed on a low-loss substrate (e.g., alumina or fused silica) [Eng79; Eul88; Dav90; Mar91a]. This probe-level calibration is then transferred to the environment in which the DUT is embedded and, to increase the measurement accuracy, is often augmented with a second-tier on-wafer calibration/de-embedding step. This allows the reference plane to be moved as close as possible to the DUT by removing the parasitics associated with the contact pads, the device-access lines, and the vias [Tie05]. In this chapter we first review the challenges and potential solutions associated with first-tier calibrations performed on low-loss substrates; we then present the approach used to design calibration kits integrated in the back-end-of-line of silicon-based technologies; finally, we describe a direct de-embedding/calibration strategy capable of setting the reference plane at the lower metal layer of a technology stack.
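As an illustration of the error-correction step described above, the following minimal Python/numpy sketch applies two-port error-box correction under the common 8-term error model: the raw measurement is modeled as the cascade of error box A, the DUT, and error box B, so the DUT response is recovered by multiplying the measured wave-cascading (T) matrix by the inverses of the error-box T-matrices. The error-box values `err_A` and `err_B` below are hypothetical placeholders standing in for the terms a TRL (or similar) calibration would provide; they are assumptions for illustration only.

```python
import numpy as np

def s2t(S):
    """Convert a 2x2 S-matrix to a wave-cascading (T) matrix."""
    S11, S12, S21, S22 = S[0, 0], S[0, 1], S[1, 0], S[1, 1]
    return (1.0 / S21) * np.array([[1.0, -S22],
                                   [S11, S12 * S21 - S11 * S22]])

def t2s(T):
    """Convert a wave-cascading (T) matrix back to a 2x2 S-matrix."""
    T11, T12, T21, T22 = T[0, 0], T[0, 1], T[1, 0], T[1, 1]
    detT = T11 * T22 - T12 * T21
    return np.array([[T21 / T11, detT / T11],
                     [1.0 / T11, -T12 / T11]])

def correct(S_meas, S_errA, S_errB):
    """Remove the error boxes A and B (e.g., obtained from a TRL solution)
    from a raw two-port measurement:
    T_meas = T_A @ T_dut @ T_B  =>  T_dut = inv(T_A) @ T_meas @ inv(T_B)."""
    T_dut = (np.linalg.inv(s2t(S_errA)) @ s2t(S_meas)
             @ np.linalg.inv(s2t(S_errB)))
    return t2s(T_dut)

# Hypothetical error boxes (placeholders for the terms a calibration yields)
err_A = np.array([[0.05 + 0.02j, 0.90 * np.exp(-1j * 0.30)],
                  [0.90 * np.exp(-1j * 0.30), 0.04 - 0.01j]])
err_B = np.array([[0.03 - 0.02j, 0.92 * np.exp(-1j * 0.25)],
                  [0.92 * np.exp(-1j * 0.25), 0.06 + 0.01j]])

# Synthetic "true" DUT and the raw measurement it would produce
S_dut_true = np.array([[0.1, 0.8], [0.8, 0.1]], dtype=complex)
S_meas = t2s(s2t(err_A) @ s2t(S_dut_true) @ s2t(err_B))

print(np.allclose(correct(S_meas, err_A, err_B), S_dut_true))  # True
```

In practice the error boxes are frequency dependent, so this correction is applied per frequency point; the second-tier de-embedding step mentioned above follows the same cascade logic, with the pad/access-line parasitics taking the place of the error boxes.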
