1. Field of the Invention
This invention generally relates to nuclear meters, and particularly to an apparatus or a method for reducing the deleterious effects caused by pulse pileup in a neutron-capture-based elemental analyzer for on-line measurement of bulk substances.
2. Description of the Prior Art
The rising cost of fuels, coupled with the need to avoid atmospheric pollution when burning them, has led to the requirement that their composition be known at various points in the fuel-preparation cycle. For example, because of the scarcity of low-sulfur crude oils and the cost of sulfur removal, the value of fuel oil increases significantly as its sulfur content becomes lower, indicating that accurate fuel-oil blending to a fixed sulfur level consistent with allowable amounts of pollution is both cost effective and an efficient utilization of increasingly-scarce hydrocarbons. Furthermore, precise knowledge of the heat content of fuel oil allows furnaces and boilers to be operated in a more efficient manner. In addition, knowledge of the amount of sulfur and other contaminants such as vanadium and nickel in various hydrocarbon streams can help prevent the poisoning of catalysts used in oil refineries, avoiding costly shutdowns.
In the case of coal, sulfur content is generally higher than that of oil, making the pollution problem even more severe. As a result, expensive coal-cleaning plants, stack-gas scrubbers and precipitators are necessary, all of which can be operated more efficiently if the coal composition is known on a real-time on-line basis. Efficient boiler operation also benefits from this composition measurement, and knowing the composition of the ash in the coal can be used to avoid boiler slagging, which is a costly problem that is generally absent for fuel oil.
Particularly in the case of coal, but also for oil, these composition measurements have to be made on inhomogeneous substances with high mass flow rates and variable compositions. Thus, this measurement should continuously reflect the average composition of the bulk substance, and response times should be fast enough to permit effective process control. Generally the latter requirement implies a speed of response ranging from a few minutes up to an hour.
A technique which can satisfy these requirements can often be used in applications which do not involve fuels or their derivatives. For example, it could measure the nitrogen content of wheat in order to determine the amount of protein present, which in turn is related to food value. Thus, the measurement of fuels is illustrative only and is not essential to this invention, which applies to all measurements of bulk substances by the techniques to be described hereinafter.
Several methods for composition measurement are known in the prior art, the most obvious one being sampling followed by chemical analysis. This technique provides most present data on the composition of various bulk substances. Unfortunately sampling is inherently inaccurate because of the lack of homogeneity of bulk materials, and large continual expenditures for manpower, sampling devices and chemical-analysis equipment are required to provide response times which at best could approach one hour. These disadvantages lead to the consideration of other techniques which are faster, more amenable to automatic operation, and better suited to continuous, on-line measurement of bulk substances.
One technique often used in industrial environments for elemental analysis involves X-ray fluorescence. This technique relies on the fact that each atom emits X rays with distinct and well-known energies when external radiations disturb its orbital electrons. Unfortunately, sulfur, which is an interesting element from the standpoints of air pollution and catalyst poisoning, emits mostly 2-keV X rays, which can only traverse about 0.1 mm of a typical fuel. Iron, which is one of the elements generating the highest-energy X rays in coal, produces mostly a 6-keV X ray, which also cannot escape from any appreciable amount of coal or other nongaseous fuel. Thus, the use of X-ray fluorescence for other than gaseous materials requires either the preparation or the vaporization of a sample in an atmosphere which does not confuse the measurement. In either case, a difficult sampling and sample-preparation problem compounds the errors associated with X-ray fluorescence itself.
A second technique usually involving X rays which are more penetrating is X-ray absorption. In this case one measures the differences in the absorption or scattering of X rays caused by changes in the amounts of certain elements. In the case of relatively-pure hydrocarbons such as refined fuel oil, this technique can provide a useful measurement of sulfur content because sulfur at X-ray energies near 22 keV can have a predominant effect on the X-ray absorption. This predominance, however, is dependent on the lack of most of the metals which are present in coal and may also be present in oil. In addition, 22-keV X rays only penetrate about 2 mm in most non-gaseous fuels, making sampling still a requirement. Moreover, this technique is generally limited to measuring only one of several potentially interesting elements, and the measurement of the relative amounts of many different elements in a complex mixture such as coal becomes difficult.
Nonetheless, nuclear techniques in general remain attractive because they often can be automated and in principle do not require actual manipulation of the bulk material itself. The problems with X-ray fluorescence and absorption arise partly because the associated radiations are not sufficiently penetrating. However, because the energetic gamma rays produced by the capture of thermal neutrons will penetrate over 100 mm of most fuels, an analysis technique based on them can provide an accurate, continuous, on-line measurement of the elemental composition of bulk substances without sampling.
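The penetration argument above can be illustrated with the Beer-Lambert attenuation law. The attenuation lengths below are rough illustrative values inferred from the figures quoted in this description (about 0.1 mm for 2-keV X rays, about 2 mm for 22-keV X rays, over 100 mm for capture gamma rays), not measured coefficients:

```python
import math

def transmitted_fraction(depth_mm, attenuation_length_mm):
    """Beer-Lambert law: fraction of photons surviving a given depth."""
    return math.exp(-depth_mm / attenuation_length_mm)

# Assumed, order-of-magnitude attenuation lengths in a typical fuel:
for label, lam in [("2-keV X ray", 0.1),
                   ("22-keV X ray", 2.0),
                   ("capture gamma ray", 100.0)]:
    f = transmitted_fraction(10.0, lam)  # 10 mm of material
    print(f"{label}: {f:.2e} of photons survive 10 mm")
```

Only the capture gamma rays retain an appreciable fraction of their intensity after traversing bulk material, which is why they permit measurement without sampling.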
This technique is based on the fact that almost all elements when bombarded by slow neutrons capture these neutrons at least momentarily and form a compound nucleus in an excited state. Usually the prompt emission of one or more gamma rays with energies and intensities which are uniquely characteristic of the capturing nucleus dissipates most of this excitation energy. Because these prompt gamma rays often have energies in the 2- to 11-MeV range, they can penetrate substantial quantities of material to reach a gamma-ray detector and its associated electronics which provide a measurement of their energy spectrum. Thus, for those isotopes with significant capture cross sections and prominent gamma-ray lines, measurement of the number of prompt gamma rays present at various energies can be used to determine in an on-line, real-time basis the quantity of most of the elements present in bulk substances, which can be flowing through the analyzer.
Although these techniques have been used in the laboratory under controlled conditions, their implementation in an automatic, on-line instrument placed in an industrial environment presents unique problems. One of these problems results from the need to provide simultaneously a fast speed of response and good accuracy. Because counting capture gamma rays is a random process, it is subject to statistical variations. These variations produce fluctuations in the measured elemental compositions, which decrease as the number of detected events increases. Thus good accuracy requires large numbers of detected events, which in turn requires high counting rates and/or long counting times. As a result, a fast speed of response together with acceptable statistical fluctuations requires high counting rates. High counting rates, however, then lead to problems with pulse pileup as discussed hereinafter.
The minimization of systematic errors in the composition measurement requires that the energy of the detected capture gamma rays be measured with good resolution. For both semiconductor and scintillation gamma-ray detectors, good energy resolution implies that pulse amplifiers must convert the detector pulses into amplifier pulses with widths in the range from 2×10⁻⁷ s to 10⁻⁵ s. These relatively-long pulses are necessary to filter noise, which can originate either in the amplifiers themselves or in the gamma-ray detector, and to collect the majority of the charge produced by the detector in response to a gamma-ray interaction.
However, during this integration or filtering interval a subsequent event could produce an additional output from the gamma-ray detector, and this pulse could then add to the one already being processed in the pulse amplifier. The resultant combined pulse exemplifies "pulse pileup" in that it is the result of two or more pulses piling up on each other to generate a combined pulse which does not represent the energy of any of the individual detected events. Including such pileup events in the spectral measurement adds to errors in the composition measurement, and, as counting rates are increased to reduce statistical fluctuations and/or response time, pileup events rapidly cause excessive errors. Thus the desire for good accuracy and a fast speed of response inevitably leads to some form of pileup detection which permits the removal of most of the pileup events from the measured energy spectrum. As this pileup detection becomes more efficient, then counting rates can be increased accordingly, resulting in reduced statistical fluctuations and/or response times.
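The formation of a pileup event can be sketched numerically. The triangular pulse shape below is an assumed stand-in for a shaped amplifier pulse; the point is only that two overlapping unit-amplitude pulses sum to a peak that represents neither event's energy:

```python
def shaped_pulse(t, t0, amplitude, width):
    """Triangular stand-in (assumed shape) for a shaped amplifier pulse."""
    return amplitude * max(0.0, 1.0 - abs(t - t0) / width)

def sampled_peak(arrivals, width, dt=0.01):
    """Peak of the summed waveform for a list of (time, amplitude) events."""
    t_max = max(t0 for t0, _ in arrivals) + width
    n = int(t_max / dt) + 1
    return max(sum(shaped_pulse(i * dt, t0, a, width) for t0, a in arrivals)
               for i in range(n))

# Two unit-amplitude events arriving 0.3 pulse-widths apart pile up
# into one combined pulse whose peak (1.7) matches neither event.
print(sampled_peak([(1.0, 1.0), (1.3, 1.0)], width=1.0))
```

Recording such a combined pulse in the spectrum misplaces both events in energy, which is the error the pileup-rejection circuits are meant to prevent.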
Typically pileup-detection schemes involve two amplifier chains processing the same detector signal with different response times. The slow-amplifier chain produces pulses with widths which give good energy resolution, and its output is used for the measurement of the energy spectrum of the capture gamma rays. The fast-amplifier chain on the other hand produces a much-narrower pulse used primarily to define the time of arrival of an event. Thus, if circuits observing the output of the fast-amplifier chain determine that two or more events arrived during the interval lasting from the time of arrival of the first pulse until the time when the energy measurement is no longer sensitive to pulse pileup, then the output of the slow-amplifier chain can be ignored for such an event, reducing pileup-induced errors.
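The decision logic of such a two-chain scheme can be sketched as follows. This is a simplified model, not the circuit itself: it takes the event-arrival times reported by the fast chain and rejects any event with a neighbor inside the interval during which the slow chain's energy measurement remains pileup-sensitive:

```python
def accept_events(arrival_times, sensitive_interval):
    """Flag each event: accepted only if no other fast-chain arrival
    falls within the pileup-sensitive interval around it."""
    times = sorted(arrival_times)
    result = []
    for i, t in enumerate(times):
        prev_ok = i == 0 or t - times[i - 1] > sensitive_interval
        next_ok = i == len(times) - 1 or times[i + 1] - t > sensitive_interval
        result.append((t, prev_ok and next_ok))
    return result

# Events at 0, 5, 5.5 and 12 microseconds with a 2-microsecond
# sensitive interval: the pair at 5 and 5.5 pile up, so both are
# rejected and only the isolated events are analyzed.
print(accept_events([0.0, 5.0, 5.5, 12.0], 2.0))
```

The better the fast chain resolves closely-spaced arrivals, the fewer pileup events leak into the accepted set, and the higher the counting rate that can be tolerated.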
In the prior art the output pulse from the fast-amplifier chain was applied to a single discriminator with a fixed threshold. This circuit converted the fast pulse into a digital signal which was high for the entire interval during which the fast pulse exceeded the threshold. The transition at the leading edge of this digital signal approximately defined the event-arrival time. If two events producing fast pulses exceeding the discriminator threshold arrived at times separated by an interval which was larger than the time which the discriminator output remained high from the first event, they would be recognized as distinct events. If only one event was detected during an integration interval defined by noise and accuracy criteria, then for a selected range of pulse amplitudes a linear gate was opened, and the detector signal was integrated during that interval. Subsequent pulse-height-analysis circuits then determined the size of this integrated signal to provide the energy-spectrum measurement. The gated integrator and the pulse-height-analysis circuits constituted the slow-amplifier chain in this case.
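The behavior of the prior-art single fixed-threshold discriminator can be modeled on sampled data. The waveform values below are arbitrary illustrative samples; the sketch shows the key limitation discussed next, namely that two fast pulses whose sum never drops back below the threshold produce only one leading edge and are therefore counted as a single event:

```python
def discriminator(samples, threshold):
    """Single fixed-threshold discriminator: the digital output is high
    while the fast pulse exceeds the threshold; each low-to-high
    transition approximately marks an event-arrival time."""
    high = [s > threshold for s in samples]
    arrivals = [i for i in range(len(high))
                if high[i] and (i == 0 or not high[i - 1])]
    return high, arrivals

# Two overlapping fast pulses (humps at samples 2 and 6) whose sum
# stays above threshold between them yield one leading edge only.
waveform = [0, 2, 5, 4, 3, 5, 6, 3, 1, 0]  # illustrative samples
_, arrivals = discriminator(waveform, 1.5)
print(arrivals)  # a single detected event despite two true pulses
```

Well-separated pulses would each produce their own transition; it is only arrivals closer together than roughly the full pulse width that merge, as the following paragraph explains.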
Although this technique rejected many pileup events, it had several disadvantages leading to poorer accuracies and/or response times. One such disadvantage resulted from the fact that amplifier noise and/or fluctuations in the detector output signal limited the narrowness of the fast pulse used for event detection. For such a pulse with a non-zero width, a single discriminator as used in the prior art could not distinguish events arriving closer together in time than nearly the full width of this pulse. In addition, because those events with amplitudes below the discriminator threshold were not detected at all, they contributed to pileup-induced errors just as if no pileup rejection was present. As the discriminator threshold was reduced to decrease the number of these events, then generally the fast pulse had to be made wider to avoid false alarms caused by noise. As a result the inherent effectiveness of the pileup rejection in the prior art was overly limited, and tolerable counting rates were lower than desired.
The choice of the optimum threshold became even more complicated for scintillation gamma-ray detectors, where statistical variations in the photocathode current during the scintillation process were the principal noise source. As a result the optimum discriminator threshold depended on the energy of the detected event, and the prior-art systems had no provision for implementing this energy-dependent optimization.
An additional problem with prior-art systems arose from their use of linear gates followed by relatively-slow pulse-height analyzers. This technique had acceptable dead times only if a second discriminator observing the fast pulse permitted the linear gate to open for only a small fraction of the detected events. This restriction meant that only the portion of the energy spectrum with low counting rates could be analyzed if live time was to remain high enough to keep statistical fluctuations low. However, in general this restriction is undesirable because high-counting-rate portions of the energy spectrum also contain useful information.
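The live-time penalty of a slow analyzer can be estimated with a simplified non-paralyzable dead-time model. The rates and dead time below are assumed illustrative numbers, not values from the prior-art systems:

```python
def live_fraction(event_rate_hz, analysis_dead_time_s, gate_fraction):
    """Live-time fraction under a simplified non-paralyzable model:
    each gated event ties up the analyzer for a fixed dead time.
    gate_fraction is the share of detected events the gate admits."""
    gated_rate = event_rate_hz * gate_fraction
    return 1.0 / (1.0 + gated_rate * analysis_dead_time_s)

# Assumed numbers: 100 kHz of detected events, 50-microsecond analysis
# dead time. Admitting every event leaves the analyzer live only about
# 17% of the time; gating in just 5% of events keeps it about 80% live,
# which is why the prior art had to exclude high-rate spectral regions.
print(live_fraction(100_000, 50e-6, 1.0))
print(live_fraction(100_000, 50e-6, 0.05))
```

The model makes plain the trade-off described above: live time can be preserved only by discarding most of the spectrum, including high-counting-rate portions that carry useful information.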