Many determination apparatuses have imperfections, such that deviations arise between the values determined by these apparatuses and the actual values. Therefore, a determination apparatus is calibrated prior to determination of a device under test (hereinafter, DUT) so that the effects of systematic error can be removed from the determined values. Systematic errors are hereinafter referred to as errors, and the values representing them as error coefficients.
A conventional calibration method and its procedure will now be described using a network analyzer as an example of the determination apparatus.
FIG. 1 is a schematic drawing of a two-port network analyzer 100. Network analyzer 100 comprises a CPU 120, a memory 130, which is an example of a storage means, an input device 140, a determination device 150, and a display 160, which is an example of an output means. These structural elements are connected together by a bus 110. CPU 120 exchanges data with memory 130, input device 140, determination device 150, or display 160 and processes the data as needed. Memory 130 stores information on settings for network analyzer 100, determined values obtained by determination device 150, and the like. Input device 140 receives commands from outside network analyzer 100. Determination device 150 has a port A and a port B that serve as the determination terminals, and determines the incident signal power and reflected signal power. Here, the incident signals are output signals at the ports and the reflected signals are input signals at the ports.
A two-port device is connected between port A and port B of this type of network analyzer 100. When the forward and backward network properties of this device are determined, there are 12 systematic errors present related to signal leak, signal reflection, and frequency response.
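As a concrete illustration of how an error coefficient, once known, is removed from a determined value, the sketch below applies the simplest subset of such an error model, a one-port (3-term) reflection correction. The function name `correct_s11` and the numeric values are hypothetical and do not appear in the description above; `Ed` (directivity), `Es` (source match), and `Ert` (reflection tracking) are conventional error-model terms.

```python
def correct_s11(s11_measured, Ed, Es, Ert):
    """Remove directivity (Ed), source-match (Es) and reflection-tracking
    (Ert) errors from a measured reflection coefficient.

    This is the standard one-port correction formula; a full two-port
    determination as described above involves 12 error terms.
    """
    return (s11_measured - Ed) / (Es * (s11_measured - Ed) + Ert)

# Illustrative values only: a perfect match measured through an imperfect
# test set reads back as the directivity term, and correction recovers 0.
corrected = correct_s11(0.02 + 0.0j, Ed=0.02 + 0.0j, Es=0.01, Ert=0.98)
# corrected is 0 (the raw reading 0.02 was entirely directivity leakage)
```

The same idea extends to the two-port case: each raw determined value is mapped to a corrected value using the stored error coefficients.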
The TRL (Through-Reflect-Line) calibration method is one calibration method whereby the effect of these errors is removed from the determined values. The TRL calibration method is characterized in that three types of calibration reference standards are used: a through, a reflect, and a line standard. It is possible to remove the effects of ten errors with two-port calibration, and the method can be used in non-coaxial environments and for on-wafer determinations. Furthermore, the reflect reference standard is either an open reference standard or a short reference standard.
FIG. 2 is a flow chart showing the procedure of the TRL calibration method as conducted with network analyzer 100. In step P21, data from input device 140 are received and property values are set for the calibration reference standards that will be used in calibration. Incidentally, both port A and port B are used to determine the calibration reference standards. In step P22, determination device 150 determines the calibration reference standard connected to port A and port B, and CPU 120 receives these determined values from determination device 150 and stores them in memory 130.
With the TRL calibration method, it is necessary to determine 14 parameters of the calibration reference standards in the case of two-port calibration. Consequently, processing is performed in step P23 to decide whether or not all of these determined values have been obtained. Step P22 is repeated until all determined values are obtained.
Next, in step P24, CPU 120 references the determined values of the 14 parameters, finds the values of the 10 errors, and stores these error values in memory 130 as error coefficients. Finally, the determined values of the 14 parameters stored in memory 130 are erased. After calibration by the above-mentioned procedure, network analyzer 100 can output determined values of the DUT from which the effects of the errors found have been removed (i.e., the corrected determined values).
However, there are cases in which the errors cannot be found accurately, for example, due to poor connection of a calibration reference standard, or due to mis-connection of a calibration reference standard attributable to the number of determination terminals, and the calibration reference standards must therefore be re-determined after calibration has been performed. Whether or not the errors have been found accurately can be confirmed by observing the corrected determined values. However, according to the conventional procedure, the corrected determined values can only be observed after calibration. Because all of the determined values for the parameters of the calibration reference standards are erased once calibration has been performed, the specific parameter values must be re-determined in order to confirm their accuracy, resulting in poor calibration work efficiency. In recent years, the number of ports in determination devices and DUTs has increased, and the time required for re-calibration has increased accordingly, making this problem significant.