Standard RF power measuring systems used for testing RF equipment during manufacture, installation and maintenance comprise a meter unit coupled to a sensor unit by a cable. Typically, the sensor unit comprises a sensing element that converts RF power to an equivalent voltage or current signal. The sensing element is typically a diode or a thermocouple. The voltage or current signal can be amplified and conditioned, for example, so that the meter unit can provide an indication of power.
In order to ensure measurement traceability to national standards, the above type of RF power measurement system needs to be calibrated. In this respect, there are two types of calibration procedures commonly performed: factory calibration and user calibration.
Factory calibration is performed either during manufacture or at periodic calibration intervals (typically one year), when the meter unit, the sensor unit and the cable are returned to the factory for calibration. The sensor unit is calibrated against a transfer standard at a number of frequency points to determine calibration factors for the sensor unit. A calibration factor represents, at a particular frequency, the combination of the efficiency of the sensing element and mismatch loss (a measure of the coupling efficiency between a source of RF power and an input of the sensor unit), and is applied by the meter unit as a correction after a measurement has been made. The meter unit is also calibrated in order to maintain linearity of measurements made by the system, the sensing element being a non-linear device. Additionally, the meter unit is usually provided with a reference power source, the reference power source being set using a transfer standard.
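The application of a calibration factor can be sketched as follows; this is a minimal illustration only, in which the table of frequency points, the factor values and the function names are hypothetical rather than taken from any particular system:

```python
def calibration_factor(freq_hz, cal_table):
    """Linearly interpolate a calibration factor between the
    factory-measured frequency points (a simplified model)."""
    points = sorted(cal_table.items())
    if freq_hz <= points[0][0]:
        return points[0][1]
    if freq_hz >= points[-1][0]:
        return points[-1][1]
    for (f0, k0), (f1, k1) in zip(points, points[1:]):
        if f0 <= freq_hz <= f1:
            return k0 + (k1 - k0) * (freq_hz - f0) / (f1 - f0)

# Hypothetical factory calibration factors: the fraction of incident
# power actually registered, combining sensing-element efficiency
# and mismatch loss at each frequency.
CAL_TABLE = {1e9: 0.98, 2e9: 0.96, 4e9: 0.93}

def corrected_power(indicated_watts, freq_hz):
    """The meter unit divides the raw indication by the calibration
    factor for the measurement frequency."""
    return indicated_watts / calibration_factor(freq_hz, CAL_TABLE)
```

The division recovers the true incident power from the under-reading caused by element inefficiency and mismatch loss.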
Whilst the factory calibration is needed, it is not uncommon for a different meter unit/cable/sensor unit combination to be used throughout the working life of the meter unit. Consequently, a need exists to calibrate for a particular meter unit/cable/sensor unit combination and this is known to be done by a user of the meter unit.
During normal use, the sensor unit is attached to the meter unit by the cable. Offsets present in the readings given by the meter unit when no input RF signal is present are removed from the measurement system by an offset-removal procedure automated by the meter unit (normally referred to as zeroing or auto-zero). The gains of the meter unit and the sensor unit are treated as a single “path gain”, and so the system can measure absolute power by disconnecting the sensor unit from a device under test, attaching the sensor unit to the reference source and performing an automated calibration procedure provided with the meter unit. Since the meter unit knows the power of the reference source being applied, the meter unit can adjust a gain factor so that, taking the path gain into account, the correct value of the power of the reference source is displayed. The sensor unit is then reattached to the device under test, and the absolute power of a signal generated by the device under test can be measured and displayed by the meter unit. If the frequency of the measured signal differs from the frequency of the reference source, the calibration factor of the sensor unit for the frequency of the measured signal should be entered manually into the meter unit, from hard-copy information provided with the power measuring system, to ensure that the meter unit displays the correct power.
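The zeroing and path-gain calibration steps described above can be modelled in outline as follows; the class structure, method names and the 1 mW reference level are illustrative assumptions, not a description of any specific meter unit:

```python
class MeterUnit:
    """Toy model of the auto-zero and reference-source calibration
    steps performed by the meter unit."""

    REFERENCE_WATTS = 1e-3  # assumed 1 mW (0 dBm) reference source

    def __init__(self):
        self.offset = 0.0
        self.gain = 1.0

    def zero(self, reading_with_no_rf):
        # Auto-zero: record the residual reading with no RF input so
        # that it can be subtracted from subsequent measurements.
        self.offset = reading_with_no_rf

    def calibrate_path_gain(self, reading_on_reference):
        # The meter knows the reference power, so it scales its gain
        # until the reference reads correctly through the whole path.
        self.gain = self.REFERENCE_WATTS / (reading_on_reference - self.offset)

    def measure(self, raw_reading):
        # An offset-corrected, gain-corrected absolute power reading.
        return (raw_reading - self.offset) * self.gain
```

A reading taken on the reference source after this procedure necessarily displays the reference power, regardless of the actual path gain.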
Over time, the sophistication of power measurement systems has increased to the point that it is now possible to store the calibration factors in Electrically Erasable Programmable Read-Only Memories (EEPROMs) resident in the sensor unit and readable by the meter unit. Additionally, sensing elements of the diode type can now be operated beyond the inherent operating range within which power is converted to current or voltage in a linear manner. The ability to operate the diode in this way is made possible by the provision of a further stage in the factory calibration procedure of the sensor unit. The further stage requires the sensor unit to be subjected to a number of known RF power levels, typically at the same frequency as the reference source, and the response of the sensor unit is measured and stored in the EEPROM. The meter unit then uses this data, when the sensor unit is connected to a device under test, to produce a correction function that linearises the output of the sensor unit, and thus enables wider dynamic range measurements. The calibration data, including the above-mentioned calibration factors relating to the sensor unit, are stored in the EEPROM as data expressed in relative terms with respect to the measured output power of the reference source, so that linearity and frequency calibration of the sensor unit by the user can take place.
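One simple way to realise such a correction function is to invert the stored response curve by piecewise-linear interpolation; the sketch below assumes a hypothetical table of (applied power, diode response) pairs of the kind that might be stored in the EEPROM, and is not the linearisation scheme of any particular meter unit:

```python
# Hypothetical factory linearity data: (applied RF power in watts,
# raw diode-sensor response in volts). The diode response compresses
# at higher powers, hence the non-linearity to be corrected.
LINEARITY_POINTS = [
    (1e-6, 1e-4),
    (1e-5, 9.5e-4),
    (1e-4, 8.5e-3),
    (1e-3, 6.0e-2),
]

def linearised_power(response_volts):
    """Map a raw sensor reading back to RF power by inverting the
    stored response curve with piecewise-linear interpolation."""
    pts = LINEARITY_POINTS
    if response_volts <= pts[0][1]:
        # Below the first characterised point, assume the diode is
        # still in its ideal square-law (linear power) region.
        return response_volts * pts[0][0] / pts[0][1]
    for (p0, v0), (p1, v1) in zip(pts, pts[1:]):
        if v0 <= response_volts <= v1:
            return p0 + (p1 - p0) * (response_volts - v0) / (v1 - v0)
    raise ValueError("response outside characterised range")
```

At each stored point the function returns exactly the power that was applied during factory characterisation; between points it interpolates, extending the usable dynamic range beyond the diode's linear region.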
An additional advance in this field is the provision of a temperature sensing element within the sensor unit. Since the current flowing through the sensing element can be influenced by its temperature, measuring that temperature makes it possible to apply a correction for temperature variations to power measurements made using the sensor unit. Some factory calibration procedures for sensor units therefore include measurement of temperature variations with respect to the sensor unit, such data also being stored in the EEPROM.
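A temperature correction of this kind might, in its simplest form, apply a stored linear coefficient; the coefficient value and calibration temperature below are illustrative assumptions only:

```python
# Hypothetical temperature characterisation stored in the EEPROM:
# fractional change in sensor response per degree Celsius away from
# the temperature at which the sensor was factory-calibrated.
CAL_TEMP_C = 25.0
TEMP_COEFF_PER_C = -0.002  # assumed -0.2 % of reading per degree C

def temperature_corrected(power_watts, sensor_temp_c):
    """Remove the temperature-induced error from a power reading
    using the stored linear drift coefficient."""
    drift = 1.0 + TEMP_COEFF_PER_C * (sensor_temp_c - CAL_TEMP_C)
    return power_watts / drift
```

At the calibration temperature the correction is the identity; away from it, the reading is scaled to undo the modelled drift.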
With respect to the user calibration procedure, as mentioned above, the user must regularly disconnect the sensor unit from the device under test and connect the sensor unit to the reference source of the meter unit. After calibration, the user must then reconnect the sensor unit to the device under test. During this partly manual procedure, measurement errors can arise if the reference source is not efficiently mechanically connected to the sensor unit. Further, the user calibration procedure is inconvenient and relies on an RF source, with its associated mismatch uncertainties, to characterise what is predominantly a DC circuit (referred to here as path gain calibration). Additionally, the user calibration procedure assumes that the extensive characterisation implemented during the factory calibration procedure is still valid, and so it is, in effect, only testing the general integrity of the power measuring system and the path gain from the output of the sensor unit to the acquisition circuitry of the meter unit.
As a way of mitigating the above-described disadvantages, alternative designs for power measurement systems have been developed. These designs employ so-called ‘no-cal’ solutions that remove the need for the user to perform the disconnections and reconnections of the user calibration procedure. For example, one known type of power measurement system has the cable permanently coupled to a characterised sensor unit, thereby reducing the number of independent component combinations constituting the power measurement system.
It therefore follows that the above ‘no-cal’ solution for power measurement systems does not allow the user to change the cable, as may be necessary when a cable supplied at manufacture subsequently becomes damaged. Other no-cal solutions restrict, rather than remove, the ability of the user to change the cable. Also, such solutions do not compensate for internal components that experience wear or ageing, thereby compromising the accuracy of the power measurement. Thus, known power measurement systems and associated calibration techniques are not entirely satisfactory from the standpoint of accuracy or execution. A need therefore exists for a power measurement system that makes more accurate measurements than the traditional approach, yet without the intrusive requirement of disconnecting and reconnecting the sensor unit from and to the device under test.