The quantification of radioactivity is not an exact process. There is always an uncertainty in the quantity that has been determined.
One contribution to the total uncertainty is commonly called “counting statistics” and arises from the fact that the measurement process counts discrete events that occur in a random manner from the decay of the radioactive atoms. The uncertainty from this process is well understood and can be evaluated by established mathematical techniques.
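As a brief illustration (the function name and the numeric values below are hypothetical, not taken from this specification), the counting-statistics component follows Poisson statistics, under which the standard deviation of N recorded counts is the square root of N:

```python
import math

def counting_uncertainty(counts):
    """Poisson counting statistics: the standard deviation of N
    recorded counts is sqrt(N), so the relative uncertainty shrinks
    as more counts are accumulated."""
    sigma = math.sqrt(counts)
    return sigma, sigma / counts

# For 10,000 recorded counts, the absolute uncertainty is 100 counts
# and the relative uncertainty is 1%.
sigma, rel = counting_uncertainty(10000)
```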
Another contribution to the total uncertainty is the uncertainty in the calibration factor. Calibration factors are necessary to relate the measured quantity to the quantity emitted from the radioactive source; they are also referred to as interaction probabilities or detection efficiencies. These calibration factors can be determined by the measurement of well-characterized radioactive sources that have been prepared in a manner that closely mimics the unknown sample being measured. Alternatively, calibration factors can be determined by a mathematical process whereby the physical parameters of the sensor and the sample, together with the physics of radiation interaction with materials, are defined, and the probability of radiation from the sample interacting with the sensor is computed mathematically. One such mathematical computation method for efficiency calibration is described in U.S. Pat. No. 6,228,664, titled “Calibration Method for Radiation Spectroscopy,” issued May 8, 2001, to the inventors of the present invention and assigned to the assignee of the present application.
Then, either the source-based calibration factor or the mathematically computed calibration factor is used to convert the measurement instrument output into the quantity of radioactivity of the sample being measured. There is always some amount of imprecision or uncertainty associated with the calibration factor, even if the radioactive calibration source or the mathematical calibration model perfectly represents the sample being measured. This is due to the random factors involved in the radioactive decay and measurement process. The method of computation of this portion of the uncertainty in the calibration factor is also well known.
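A minimal sketch of this conversion step (the variable names and values are illustrative assumptions, not taken from this specification): the instrument output, here net counts accumulated over a live time, is divided by the calibration factor to yield the activity of the sample.

```python
def activity_from_counts(net_counts, live_time_s, efficiency):
    """Convert instrument output to activity (decays per second).

    efficiency is the calibration factor: the probability that a
    decay in the sample produces a recorded count."""
    count_rate = net_counts / live_time_s      # counts per second
    return count_rate / efficiency             # decays per second

# Example: 50,000 net counts in 1,000 s with a 5% detection
# efficiency corresponds to an activity of 1,000 decays per second.
activity = activity_from_counts(50000, 1000.0, 0.05)
```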
If the sample being measured is exactly like the radioactive source used for the source-based calibration, or the mathematical model used for the mathematical calibration, then the propagation of the counting statistics uncertainty and the calibration factor uncertainty is adequate to compute the total uncertainty of the measurement. But that frequently is not the case. There are many situations where the sample measurement conditions differ in a radiologically significant manner from those used or defined in the calibration process. Examples include, but are not limited to: sample density variations; sample composition variations; sample non-uniformity; source-to-detector distance variations; sample container variations; and sample size variations. Where these variations are known, they can be included in the calibration factor. But where they are unknown or unpredictable, they must be treated as an uncertainty and propagated into the total measurement uncertainty. It is the computation of this component of the total measurement uncertainty that is the subject of this invention.
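Under the matched-conditions assumption just described, the two well-known components combine in quadrature, per conventional uncertainty propagation (the percentages below are illustrative assumptions):

```python
import math

def total_relative_uncertainty(rel_counting, rel_calibration):
    """Combine independent relative uncertainties in quadrature,
    per conventional uncertainty propagation."""
    return math.sqrt(rel_counting**2 + rel_calibration**2)

# A 3% counting-statistics uncertainty combined with a 4% calibration
# factor uncertainty gives a 5% total relative uncertainty.
total = total_relative_uncertainty(0.03, 0.04)
```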
The traditional method of assigning uncertainty in these situations, where there are variations in the sample measurement conditions, is to consider one variable at a time, e.g., sample density variation, and evaluate its contribution to the uncertainty; then evaluate the next variable, and so on in turn; and finally combine the results according to conventional statistical methodology. The disadvantages of this method are that it is somewhat subjective and that it does not account for the combined effects of multiple variables acting simultaneously.
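The traditional procedure can be sketched as follows (the parameter list and percentages are hypothetical, chosen only for illustration): each variable's contribution is estimated in isolation, and the results are combined by root-sum-square.

```python
import math

# One-variable-at-a-time estimates of the relative uncertainty each
# varying condition contributes to the calibration factor
# (hypothetical values for illustration only).
individual = {
    "sample density": 0.05,
    "sample composition": 0.03,
    "source-detector distance": 0.02,
}

# Conventional statistical combination: root-sum-square, which
# implicitly treats the variables as independent and so cannot
# capture their combined, simultaneous effects.
combined = math.sqrt(sum(u**2 for u in individual.values()))
```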
An alternate method would be to construct a very large number of radioactive calibration sources, where each of the variable dimensions or values in each source is chosen in a random manner, but where that choice follows the expected variation of that individual parameter. A calibration is then performed with each of these sources, and the mean and standard deviation of the calibration factor are computed over the full set of measurements. The mean is used for the actual efficiency, and the standard deviation for the uncertainty. While this technique is quite correct, it is also technically challenging, very time consuming, and expensive.
Accordingly, it is a principal object of the invention to provide a method of probabilistic uncertainty estimation that produces results similar to those obtained by constructing a very large number of radioactive calibration sources, as described above, but does so using mathematical modeling and numerical calculations.
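The approach of the preceding paragraphs can be sketched numerically as a Monte Carlo calculation. Everything below is a hypothetical illustration: the efficiency model is a simple stand-in for a full mathematical efficiency computation, and the parameter distributions are assumed.

```python
import math
import random
import statistics

def model_efficiency(density, distance):
    """Hypothetical stand-in for a full mathematical efficiency
    calculation: efficiency falls off with sample density
    (self-attenuation) and with source-detector distance."""
    return 0.10 * math.exp(-0.5 * density) / distance**2

random.seed(12345)  # reproducible draws for this sketch

# Generate many virtual "calibration sources", each with its
# parameters varied randomly according to the expected variation
# of that individual parameter.
efficiencies = []
for _ in range(10000):
    density = random.gauss(1.0, 0.1)      # g/cm^3, assumed spread
    distance = random.gauss(10.0, 0.5)    # cm, assumed spread
    efficiencies.append(model_efficiency(density, distance))

# The mean plays the role of the calibration factor; the standard
# deviation is the uncertainty contributed by the varying conditions.
mean_eff = statistics.mean(efficiencies)
rel_uncertainty = statistics.stdev(efficiencies) / mean_eff
```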
Other objects of the invention, as well as particular features and advantages thereof, will be apparent or be elucidated in the following description and the accompanying drawing figures.