An X-ray analyzer, such as an X-ray fluorescence (XRF) or X-ray diffraction (XRD) instrument, generally comprises an X-ray source, an X-ray detector, and associated electronics. The X-ray detector is usually energy dispersive: each incident X-ray produces an electronic signal whose charge is proportional to the energy of the X-ray. The detector electronics amplify each signal so that the charge corresponding to the X-ray energy can be measured accurately. The amplified signals are subsequently digitized, and the digital values are used to construct an X-ray spectrum. Provided the gain of the entire electronic amplification and digitization system remains constant, the digital value of each amplified pulse is proportional to the energy of the associated X-ray, so that with suitable calibration the X-ray energy can be determined. Knowing the energy of each X-ray, the signals from multiple X-rays striking the detector can be converted into a spectrum, which is a plot of X-ray energy versus the number of X-rays received at that energy. Such a spectrum exhibits peaks at energies corresponding to the characteristic X-ray energies of elements within the sample being measured. The position, magnitude, and width of the peaks are critical parameters enabling identification of the elements in the sample and determination of their concentrations.
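The spectrum-building step described above can be sketched as a simple histogram of digitized pulse heights, followed by a linear channel-to-energy mapping. The channel count, gain, and offset values below are assumed for illustration only and do not correspond to any particular instrument.

```python
# Hypothetical sketch: constructing an X-ray spectrum from digitized pulse
# amplitudes. Calibration constants are assumed example values.
N_CHANNELS = 2048             # assumed ADC channel count
GAIN_KEV_PER_CHANNEL = 0.01   # assumed calibration gain (keV/channel)
OFFSET_KEV = 0.0              # assumed calibration offset (keV)

def build_spectrum(pulse_channels, n_channels=N_CHANNELS):
    """Histogram digitized pulse heights into counts per channel."""
    spectrum = [0] * n_channels
    for ch in pulse_channels:
        if 0 <= ch < n_channels:   # discard out-of-range pulses
            spectrum[ch] += 1
    return spectrum

def channel_to_energy(channel):
    """Map a channel number to X-ray energy (keV) via linear calibration."""
    return OFFSET_KEV + GAIN_KEV_PER_CHANNEL * channel
```

Peaks in the resulting histogram then sit at channels whose calibrated energies match the characteristic X-ray lines of elements in the sample.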
In order to ensure that test results are accurate and repeatable, it is important to avoid electronic drift of signals from the detector. Signal drift causes X-rays of the same energy to be assigned different energies in the spectrum at different measurement times, which may lead to misidentification of elements and/or errors in the measurement of their concentrations.
Drift of the gain of the electronic amplification and digitization system is a major source of signal drift. The drift may be due to instability of any of the components of the electronic system. For example, it is well known that the properties of electronic components are sensitive to temperature, and this temperature sensitivity can be particularly important for a compact, hand-held XRF instrument whose temperature may rise significantly from a cold start during the course of a long measurement or series of measurements. The temperature change results in variable electronic gain, which causes drift in the energy scale of measured X-ray spectra. Energy scale drift includes drift during a single measurement, drift of the energy scale between different measurements on the same instrument, and drift that renders measurements of the same or similar sample inconsistent between different instruments.
One solution to the problem of energy scale drift in existing practice is to perform frequent manual calibrations. Energy scale calibration may be achieved by exposing the X-ray detector to X-rays of known energy, either using X-rays emitted from a radioactive source, or using secondary X-rays emitted from a known target material. In one example from existing practice, the energy scale is re-calibrated every few hours using Fe and Mo characteristic X-rays from a stainless steel sample containing both elements. However, irrespective of the calibration method used in existing practice, useful operation of the X-ray instrument must be interrupted, which is inconvenient and is therefore often neglected by operators. In the case of a handheld instrument, the instrument must usually be manually inserted into a docking station containing a known target material. The known energy of X-ray peaks from the target is compared with the measured energy in order to calibrate the gain. Since frequent manual calibration is inconvenient, the time between successive calibrations can be many hours, during which time significant temperature change and consequent energy drift may occur, causing degradation of the XRF measurement accuracy.
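The two-peak calibration described above can be sketched as a linear fit: the known Fe and Mo K-alpha energies (approximate published values) are compared with the measured peak positions to recover the system gain and offset. The peak channel values in the example are hypothetical.

```python
# Hedged sketch of a two-point energy-scale calibration using Fe and Mo
# characteristic X-rays, as in the stainless-steel example above.
FE_KALPHA_KEV = 6.40    # approximate Fe K-alpha energy
MO_KALPHA_KEV = 17.48   # approximate Mo K-alpha energy

def calibrate_two_point(ch_fe, ch_mo):
    """Return (gain in keV/channel, offset in keV) from two measured
    peak channel positions of known energy."""
    gain = (MO_KALPHA_KEV - FE_KALPHA_KEV) / (ch_mo - ch_fe)
    offset = FE_KALPHA_KEV - gain * ch_fe
    return gain, offset

# Example with hypothetical measured peak channels:
gain, offset = calibrate_two_point(640.0, 1748.0)
```

Any subsequent peak channel can then be converted to energy as `offset + gain * channel`; drift appears as a change in the recovered gain between calibrations.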
There is therefore a need in existing practice for a calibration method which is automatic and fast, causing minimal or no interruption to normal operation of the measuring device. The calibration method should be programmable to occur either after each measurement or continuously during the course of all measurements. In addition, the calibration method should encompass the entire electronic amplification and digitization system.
Another problem in existing practice is that the determination of X-ray energy from the amplified and digitized signal is subject to non-linearity in the amplification and digitization components. The primary effect of non-linearity is that the system gain varies with the amplitude of the signal. This problem is especially severe when, as is usually the case, a charge-sensitive pre-amplifier is used as part of the amplification of detector signals. A charge-sensitive pre-amplifier has the property that its output voltage rises approximately as a step function in response to input of the charge from an incident X-ray. The output voltage climbs to higher and higher levels in response to subsequent X-ray signals, with the height of each voltage step being proportional to the energy of the corresponding X-ray. The rise continues until an upper voltage threshold is reached and an external reset signal is applied to return the output voltage to zero or a lower voltage threshold. The problem with non-linearity arises because an X-ray of given energy may arrive when the pre-amplifier output voltage is at any level between the lower and upper thresholds, and non-linearity of the subsequent amplification and digitization system causes a different energy to be assigned to the X-ray depending on where the pre-amplifier voltage happened to be at the time of its arrival.
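The staircase-and-reset behaviour described above can be illustrated with a minimal simulation. The threshold and reset levels are assumed example values, and the reset is modeled as occurring immediately when the threshold is crossed, a simplification of the external reset signal.

```python
# Minimal sketch (assumed values) of charge-sensitive pre-amplifier output:
# each X-ray adds a voltage step proportional to its energy, and the output
# is reset once an upper threshold is reached.
UPPER_THRESHOLD_V = 4.0   # assumed reset threshold
LOWER_LEVEL_V = 0.0       # level restored by the reset

def preamp_output(step_heights_v, start_v=LOWER_LEVEL_V):
    """Return the output voltage after each X-ray step, applying resets."""
    v = start_v
    trace = []
    for step in step_heights_v:
        v += step                      # step height ~ X-ray energy
        if v >= UPPER_THRESHOLD_V:     # simplified external reset
            v = LOWER_LEVEL_V
        trace.append(v)
    return trace
```

Because an X-ray may land anywhere on this staircase, a non-linear downstream gain assigns slightly different energies to identical steps taken at different baseline voltages, which is the effect the passage above describes.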
The effects of non-linearity in detector amplification and digitization have not been addressed in existing practice, even though commercially available X-ray detectors often incorporate a charge-sensitive pre-amplifier within the detector enclosure to minimize signal noise. The non-linearity effects generally have a weak dependence on temperature, so there is no significant drift of the non-linear response, and a one-time calibration of a particular instrument may be sufficient to compensate for them. However, an efficient method of conducting such a calibration is lacking in existing practice.