The invention relates to electronic test equipment. In particular, the present invention relates to calibration of electronic test equipment systems such as vector network analyzers.
Test systems are critical to the manufacture and maintenance of modern electronic devices and systems. A variety of test systems are routinely employed such as scalar and vector network analyzers, spectrum analyzers, and power meters. Most of these systems provide for calibrating the test system. The calibration process attempts to mitigate or remove the effects of the test system imperfections from the measurements of a device under test (DUT). Typically, calibration involves using the test system to measure the performance of so-called calibration standards having known performance characteristics. The results of these measurements are then used to extract and remove measurement errors associated with the imperfections of the test system. To better understand the concept of calibration with respect to test systems, consider a network analyzer, its error sources, and the calibration process used to remove the effects of the error sources.
A network analyzer characterizes the performance of RF and microwave devices under test (DUTs) in terms of network scattering parameters. Scattering parameters, more commonly called ‘S-parameters’, are reflection and transmission coefficients computed from measurements of voltage waves incident upon and reflected from a port or ports of a DUT. In general, S-parameters are given either in terms of a magnitude and phase or in an equivalent form as a complex number having a real part and an imaginary part. A network analyzer capable of measuring both the phase and magnitude of the S-parameters of the DUT is called a vector network analyzer.
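The equivalence of the two S-parameter representations mentioned above can be illustrated with a short sketch. The example value below is arbitrary and purely illustrative:

```python
import cmath

# An S-parameter may be given as a complex number (real + imaginary)
# or, equivalently, as a magnitude and phase. Example value is arbitrary.
s11 = complex(-0.45, 0.30)

magnitude = abs(s11)                      # linear magnitude
phase_rad = cmath.phase(s11)              # phase in radians
phase_deg = phase_rad * 180.0 / cmath.pi  # phase in degrees

# Converting back from magnitude and phase recovers the complex value.
s11_roundtrip = cmath.rect(magnitude, phase_rad)
```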
A vector network analyzer exhibits random errors and systematic errors during the measurement of a DUT. Random errors are primarily due to system noise sources including phase and amplitude noise of the stimulus source, receiver noise, and sampler noise. Random errors vary randomly as a function of time and, in most cases, cannot be removed by calibration, but may be minimized by averaging repeated measurements. Systematic errors are repeatable, non-random errors associated with imperfections in or non-ideal performance of the network analyzer and test setup being used. Moreover, systematic errors either do not vary with time or vary only slowly with time. Therefore, the effect of systematic errors on measured S-parameter data for a DUT can be minimized or eliminated through the use of network analyzer calibration. Essentially, network analyzer calibration involves determining correction factors or coefficients associated with an error model for the measurement system. Once determined, the correction factors are used to mathematically remove the effect of the systematic errors from the measured S-parameters for the DUT.
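The distinction drawn above can be illustrated numerically: averaging repeated readings suppresses random error by roughly the square root of the number of repeats, while a systematic offset survives averaging and must instead be removed by calibration. The following sketch uses synthetic, illustrative values only:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

true_value = 0.50         # quantity being measured (illustrative)
systematic_offset = 0.10  # repeatable error; unaffected by averaging
noise_sigma = 0.05        # random error per individual reading

# 256 repeated measurements of the same quantity
readings = (true_value + systematic_offset
            + noise_sigma * rng.standard_normal(256))

averaged = readings.mean()
# Averaging drives the random component toward zero, but the
# systematic offset remains and must be calibrated out.
residual_systematic = averaged - true_value
```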
Six types of errors account for the major systematic error terms associated with a vector network analyzer measurement of the S-parameters. The six systematic errors are directivity and crosstalk related to signal leakage, source and load impedance mismatches related to reflections, and frequency response errors caused by reflection and transmission tracking within test receivers of the network analyzer. For a general two-port DUT, there are six forward-error terms and six reverse-error terms (six terms for each of the two ports of the DUT), for a total of twelve error terms. Therefore, a full measurement calibration for a general two-port DUT is often referred to as a twelve-term error correction or calibration.
Calibration standards are typically used in a measurement calibration to measure and quantify the error terms. Once determined, the error terms are used to compute correction factors or correction terms for use in the network analyzer calibration. A calibration standard is a precision device for which the S-parameters are known with sufficiently high accuracy to accomplish the calibration. That is, the accuracy of the calibration is directly related to the accuracy of the knowledge of the S-parameters of the calibration standard.
The known S-parameters of the calibration standard are used to compute a set of calibration coefficients that are incorporated into the network analyzer error model. Then, by making measurements of several different known calibration standards with the network analyzer it is possible to develop and solve a set of linear equations for a set of correction factors. The correction factors in conjunction with the calibration coefficients allow corrected S-parameter data to be reported by the ‘calibrated’ network analyzer. In general, as long as there are at least as many equations (i.e. measurements of known calibration standards) as there are unknown error terms in the error model, the correction factors associated with the error terms can be determined uniquely. For example, in the case of a twelve-term correction for a two-port DUT, four calibration standards consisting of a short circuit (‘short’), an open circuit (‘open’), a load, and a through (‘thru’) can be used to completely and uniquely determine the correction factors associated with each of the terms of the twelve-term error model. The use of short, open, load and thru standards is referred to as a SOLT calibration set. Another example of a popular calibration model used to develop a twelve-term correction is a thru-reflect-line (TRL) calibration.
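The linear-equation solution described above can be sketched for the simpler one-port case, a three-term analogue of the twelve-term model. The error-term values in the test are hypothetical; the model form (directivity, source match, reflection tracking) is the standard one-port error model:

```python
import numpy as np

# One-port three-term error model: Gm = Ed + Er*Ga / (1 - Es*Ga),
# where Ed is directivity, Es is source match, and Er is reflection
# tracking. Multiplying through and rearranging gives
#   Gm = Ed + (Ga*Gm)*Es + Ga*(Er - Ed*Es),
# which is linear in the unknowns a = Ed, b = Es, c = Er - Ed*Es.
def solve_one_port_terms(gamma_actual, gamma_measured):
    """Solve for (Ed, Es, Er) from three known standards, e.g. an
    open (Ga = +1), a short (Ga = -1), and a load (Ga = 0)."""
    A = np.array([[1.0, ga * gm, ga]
                  for ga, gm in zip(gamma_actual, gamma_measured)],
                 dtype=complex)
    a, b, c = np.linalg.solve(A, np.asarray(gamma_measured, dtype=complex))
    Ed, Es = a, b
    Er = c + Ed * Es
    return Ed, Es, Er

def correct_measurement(gm, Ed, Es, Er):
    """Mathematically remove the error terms from a raw measurement."""
    return (gm - Ed) / (Er + Es * (gm - Ed))
```

Once the three terms are known, `correct_measurement` inverts the error model to recover the actual reflection coefficient of a DUT from its raw measured value.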
Unfortunately, it is not always convenient or even possible to construct a set of calibration standards, the S-parameters of which are known with sufficient accuracy over a frequency range of interest for calibration purposes. An example of such a situation, where constructing known calibration standards is difficult, is testing a DUT that must be mounted in a test fixture as opposed to being connected directly to a coaxial cable or cables attached to the network analyzer. In addition to the problem of constructing and measuring calibration standards for these so-called ‘in-fixture’ measurements, repeatability of the calibration can also be a concern. It may not be possible to insert calibration standards into the fixture in a sufficiently repeatable manner, leading to unaccounted-for and thus uncalibrated errors in the measurements.
Accordingly, it would be advantageous to calibrate a test system without relying on known calibration standards. Furthermore, it would be desirable for such a calibration to enable the testing of a DUT in a test fixture without concern for the repeatability of calibration standard insertion. Such a calibration would solve a long-standing need in the area of calibrated test systems using calibration standards.
The present invention is a method and system for calibrating test equipment using a standards-based calibration that facilitates accurate measurements, and a vector network analyzer employing such a standards-based calibration. The present invention works well even when a test fixture is used with the test equipment, where the test fixture facilitates ‘in-fixture’ measurements on a device under test (DUT). The method of calibrating is a ‘standards-based’ calibration method in which a set of calibration standards is used, wherein at least one calibration standard of the set has an unknown performance. The system and vector network analyzer of the present invention also are based on this standards-based calibration. The present invention utilizes measurements of calibration standards to correct imperfections in measurements of the DUT performance due to the test system. The present invention further can correct for the effects of the test fixture in DUT measurements. According to the present invention, simulation models of the unknown calibration standards are selected and actual measurements of the unknown calibration standards are used to extract parameter values for constituent elements of the models. Once the element parameter values are extracted, the parameterized models provide an accurate characterization of the performance of the calibration standards over a broad frequency range. The present invention is applicable to any standards-based calibration of test systems.
In one aspect of the present invention, a method of calibrating a test system for testing a device under test is provided. The method comprises the step of selecting respective models for a set of calibration standards. Each of the models defines a constituent element for which element values must be specified to fully characterize the associated calibration standard. A performance characteristic of at least one of the calibration standards is initially unknown, making it an ‘unknown’ standard. The performance characteristics of the unknown calibration standards may be either completely unknown or just poorly known. Additionally, at least one of the element values for the calibration standard models is initially unknown. The method further comprises the step of performing measurements using each of the calibration standards so that the measurements collectively are sufficient to determine the initially unknown element value. The method still further comprises the step of calibrating the test system as a function of the measurements. Each calibration standard is attached in turn to the test system for the measurements.
In one form, the test system includes a test fixture used in the step of performing measurements. In this embodiment, each calibration standard is attached in turn to the test system via the test fixture for the measurements. The test fixture has an associated fixture model having an associated constituent element for which fixture element values must be specified to fully characterize the test fixture. At least one of the fixture element values is initially unknown. The measurements obtained in the performing step are sufficient to determine the initially unknown standard element value as well as the initially unknown fixture element value.
In another aspect of the invention, a method of calibrating a test system using a standards-based calibration is provided. The method of calibrating comprises the step of measuring performance data of a set of calibration standards using the test system over a broad frequency range, wherein at least one of the calibration standards in the set is an unknown standard. The method further comprises the step of selecting a computer model for the unknown calibration standard, wherein the model has a constituent element that has at least one unknown element value. The method of calibrating further comprises optimizing the computer model by adjusting the unknown element value of the constituent element. The adjustment is a function of simulated performance data for the model and the actual measured performance data for the unknown calibration standard. The adjustment is such that the simulated performance data of the unknown calibration standard model agrees with the respective actual measured performance data. The optimized model is used along with the measured performance data of the set of calibration standards to calibrate the test system.
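The optimizing step described above can be sketched as a one-parameter least-squares fit. The model below, an ‘open’ standard represented by a single fringing capacitance, and all element values are hypothetical illustrations rather than the invention's actual models:

```python
import numpy as np

Z0 = 50.0  # reference impedance in ohms (assumed)

def open_model_gamma(freq_hz, c_farads):
    """Simulated reflection coefficient of an 'open' standard modeled
    as a single fringing capacitance (a hypothetical example model)."""
    w = 2.0 * np.pi * np.asarray(freq_hz)
    z = 1.0 / (1j * w * c_farads)   # impedance of the capacitance
    return (z - Z0) / (z + Z0)

def fit_capacitance(freq_hz, gamma_measured, lo_fF=1e-3, hi_fF=1e3):
    """Adjust the unknown element value (here in femtofarads) until the
    simulated data agrees with the measured data in a least-squares
    sense, using a simple golden-section search over one parameter."""
    def cost(c_fF):
        sim = open_model_gamma(freq_hz, c_fF * 1e-15)
        return np.sum(np.abs(sim - gamma_measured) ** 2)
    phi = (np.sqrt(5.0) - 1.0) / 2.0
    a, b = lo_fF, hi_fF
    for _ in range(120):
        c = b - phi * (b - a)      # interior probe points
        d = a + phi * (b - a)
        if cost(c) < cost(d):
            b = d                  # minimum lies in [a, d]
        else:
            a = c                  # minimum lies in [c, b]
    return 0.5 * (a + b) * 1e-15   # fitted value back in farads
```

A full implementation would fit several element values simultaneously with a general-purpose optimizer; the single-parameter search above is the same idea reduced to its simplest form.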
In one embodiment, a test fixture is used by the test equipment for testing a device under test (DUT). The calibration standards of the set are each connected in turn to the test fixture, so that the step of measuring performs measurements for different combinations of the test fixture and each calibration standard of the set. In this embodiment, the step of selecting a computer model comprises further selecting a computer model for the test fixture, where the test fixture model has an associated constituent element with at least one unknown element value. Moreover, the step of optimizing further comprises optimizing the test fixture model by adjusting the element value of the associated constituent element of the test fixture model such that simulated data for the different combinations of the test fixture model and each calibration standard of the set agrees with the actual measured data.
Once optimized, the models can be used directly to generate an error array for correcting the measurement of a DUT. Alternatively, the test fixture may be de-embedded from the measurement and a calibrated measurement of the DUT performed. Moreover, the models of the test fixture and calibration standards can be separated from one another and used independently. In particular, once the optimized models of the calibration standards are available, the calibration standards can be used in the calibration of other test fixtures as ‘known’ standards using conventional calibration methods.
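De-embedding, as mentioned above, can be sketched with chain (ABCD) matrices, assuming the raw measurement cascades an input fixture half, the DUT, and an output fixture half. The matrices in the sketch are hypothetical 2x2 ABCD representations:

```python
import numpy as np

# If the measured cascade is M = F_in @ D @ F_out (ABCD matrices of
# the input fixture half, the DUT, and the output fixture half), the
# DUT alone is recovered by inverting the optimized fixture models
# out of the measurement.
def deembed(M, F_in, F_out):
    """Remove the fixture halves from a cascaded ABCD measurement."""
    return np.linalg.inv(F_in) @ M @ np.linalg.inv(F_out)
```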
In another aspect of the invention, a method of determining a calibration coefficient for an unknown calibration standard from a set of calibration standards used for standards-based calibration of a test system is provided. The set of calibration standards used has at least one unknown calibration standard. Advantageously, the method of determining a calibration coefficient of the present invention may use unknown calibration standards that are adapted for use in a test fixture. The method of determining does not rely on the use of precision calibration standards with accurately known performance characteristics.
The method of determining calibration coefficients comprises the step of selecting a computer model for the unknown calibration standard, where the calibration standard model has a constituent element with at least one unknown element value. The method of determining further comprises the step of measuring performance data of each calibration standard of the set over a broad frequency range. The method of determining further comprises the step of optimizing the computer model of the unknown calibration standard by adjusting the unknown element value of the constituent element until simulated data for the unknown calibration standard agrees with respective measured performance data. The method of determining further comprises the step of extracting a calibration coefficient for the unknown standard from the optimized model of the unknown standard.
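The extracting step can be sketched as evaluating the optimized model at each measurement frequency to produce one coefficient per frequency point. The single-capacitance ‘open’ model below is a hypothetical illustration:

```python
import numpy as np

Z0 = 50.0  # reference impedance in ohms (assumed)

def extract_coefficients(freq_hz, c_fitted):
    """Evaluate a hypothetical optimized 'open' model (a fitted
    fringing capacitance) to obtain one complex reflection
    coefficient per measurement frequency."""
    w = 2.0 * np.pi * np.asarray(freq_hz)
    z = 1.0 / (1j * w * c_fitted)   # impedance of the fitted element
    return (z - Z0) / (z + Z0)
```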
Preferably, prior to the application of the method of calibrating and the method of determining calibration coefficients of the present invention, a calibration of the test system is performed using conventional known calibration standards and according to procedures provided by the manufacturer of the test system. This conventional calibration, while not necessary, improves the results of the method of calibrating and the method of determining calibration coefficients of the present invention by mitigating the effects of measurement errors associated with the test system and cabling between the test system and any test fixture.
In another aspect of the invention, a system for calibrating test equipment using a standards-based calibration is provided. The system comprises a set of calibration standards, where at least one standard of the set is an unknown calibration standard. Each calibration standard of the set is temporarily interfaced to the test equipment for respective measurements. The system further comprises a computer or other processor/controller interfaced with the test equipment. The computer implements a computer program that provides a modeling environment and a model optimization. The modeling environment comprises a computer model for each unknown calibration standard of the set and simulated data for each model. The model optimization comprises an optimized model for each of the computer models. Each of the optimized models is a function of the respective measurements and the simulated data for the respective model. The system may further comprise the test equipment, and the test equipment may further comprise a test fixture to which each calibration standard of the set in turn is attached for measurements.
The system for calibrating is applicable to any test equipment that utilizes a standards-based calibration. In particular, the system for calibrating is applicable to scalar network analyzers, vector network analyzers (VNAs), impedance analyzers, power meters, and spectrum analyzers. The system employs the method of calibrating of the present invention.
In yet another aspect of the present invention, a vector network analyzer having a standards-based calibration using a set of calibration standards where at least one standard is unknown is provided. The vector network analyzer comprises a vector network analyzer portion having an analyzer port, and a controller that controls the operation of the analyzer portion. The controller implements a calibration computer program that comprises a modeling environment and a model optimization. The modeling environment comprises a computer model for each unknown calibration standard of the set of calibration standards, and simulated data for each of the computer models. The model optimization comprises an optimized model for each of the computer models. Each optimized model comprises an adjustment that is a function of the simulated data for the respective model and actual measurement data of the respective unknown calibration standard taken by the analyzer portion at the analyzer port.
Unexpectedly and advantageously, the calibration method, system and vector network analyzer of the present invention do not require the use of precision calibration standards having accurately known characteristics or parameters, as do conventional standards-based calibrations. Instead, the method may use a set of so-called ‘unknown’ calibration standards, calibration standards having unknown or poorly known parameters. Thus, lower cost standards may be used for calibration according to the present invention. Moreover, since unknown standards may be used, the accuracy and ease of performing calibration for in-fixture measurements is greatly enhanced by the present invention. In addition, parameterized models of the test fixture and the unknown calibration standards, once developed, can be used independently of each other in subsequent calibrations according to the present invention. While the methods and system are described in detail with respect to ‘in-fixture’ measurements below, the test fixture may be an actual test fixture, a simple fixture such as a connector adaptor, or even a null fixture (i.e. a fixture with no loss, no parasitics, and no electrical length). As such, where the test fixture is essentially a null fixture, the methods can produce and the system and the analyzer each can use models of just the unknown calibration standards. These and other features and advantages of the invention are detailed below with reference to the following drawings.