A process is non-stationary if its statistical moments change across space, time, or sampling interval. Characterizing non-stationary processes is a challenge in many engineering and scientific disciplines. Climate change modeling and projection, retrieval of drop-size information from Doppler measurements of hydrometeors, and modeling of calibration architectures and algorithms in microwave radiometers are example applications that can benefit from improvements in the analysis and modeling of non-stationary processes.
Calibration provides the conditions under which values can be assigned to measurements and the means of discriminating a signal from background noise. Calibration against standard references provides the means by which measurements can be compared across space and time. These properties make calibrated noise measurements applicable to detecting the varying statistical properties of non-stationary processes and to quantifying how those statistics change with time, location, sampling interval, etc.
The concept of Ensemble Detection originates from the mathematical modeling of radiometer systems and the measurement of calibrated noise. Radiometers must be frequently, and usually periodically, calibrated to correct for fluctuations in the receiver response. FIG. 1 illustrates a general model for a radiometer system. The signal 101 to be measured, denoted by TA, is received by the antenna 102 and input to the receiver 100. The receiver 100 comprises a representative gain 103, g(t), a pre-detection filter 104, a square-law detector 105, and a post-detection filter 106. An output voltage 107 is generated that can be used to estimate the noise power of the signal 101 being measured. A number of sources cause non-stationary fluctuations in the radiometer response. In practice, many of these sources can be described by a time-varying fluctuation in the receiver gain, g(t). Noise reference measurements are used within a calibration algorithm to remove instrumental effects from the brightness scene being measured.
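By way of a hypothetical numerical illustration (the temperatures, gain value, and sample counts below are assumed for the sketch and are not taken from the system of FIG. 1), the detection chain of FIG. 1 and a standard two-reference calibration that removes the receiver gain may be sketched as:

```python
import numpy as np

rng = np.random.default_rng(0)

def radiometer_output(T_sys, gain, n_samples=100_000, rng=rng):
    """Model the chain of FIG. 1: Gaussian noise whose power is
    proportional to the noise temperature T_sys, scaled by the
    receiver gain g(t), square-law detected, then averaged by the
    post-detection filter (modeled here as a block average)."""
    v = rng.normal(0.0, np.sqrt(T_sys), n_samples)  # pre-detection noise
    v *= np.sqrt(gain)                              # receiver gain g(t)
    return np.mean(v**2)                            # detector + post filter

# Hypothetical hot/cold noise references and scene temperature (kelvin)
T_hot, T_cold = 300.0, 80.0
g = 25.0            # receiver gain, held constant for this sketch
T_A = 150.0         # antenna temperature to be recovered

v_hot = radiometer_output(T_hot, g)
v_cold = radiometer_output(T_cold, g)
v_ant = radiometer_output(T_A, g)

# Linear interpolation between the two reference measurements removes
# the instrumental gain from the estimate of the scene temperature.
T_est = T_cold + (T_hot - T_cold) * (v_ant - v_cold) / (v_hot - v_cold)
```

With a sufficient number of detected samples, T_est approaches the scene temperature T_A regardless of the value of the gain, which is the sense in which reference measurements remove instrumental effects.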
Previous attempts to model receiver fluctuations within the context of a calibration algorithm relied on the spectral representation of the fluctuations. However, the spectral representation of non-stationary signals is mathematically inconsistent and limited in its practical application. Other techniques for modeling non-stationary fluctuations, such as those in the receiver response, exist; wavelet analysis, evolutionary spectra, and windowed processes are examples. Each approach has limitations in either the analytical modeling or the empirical analysis of non-stationary fluctuations. Empirical Mode Decomposition and the Hilbert-Huang Transform are promising and powerful tools for non-stationary data analysis. However, these tools are empirically based and lack a theoretical framework to describe, for example, the measurement of calibrated noise.
A technique has been developed that uses measurement uncertainty as a figure of merit to compare the performance of various radiometer calibration architectures and algorithms. The technique, derived from stochastic process theory, treats the output of a radiometer that samples multiple noise references as an ensemble collection of measurements of the receiver fluctuations. This formulation enables statistical analyses of measured data to be compared against theoretical calculations of the measurement uncertainty. The mathematics provides the foundation for Ensemble Detection and its application.
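The idea of treating multi-reference radiometer output as an ensemble can be sketched as follows; the gain drift model, reference temperatures, and noise level are assumed values chosen only to make the mechanism concrete:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical slowly varying (non-stationary) receiver gain g(t)
n_cycles = 2000
t = np.arange(n_cycles)
g = 20.0 * (1.0 + 0.05 * np.sin(2 * np.pi * t / 500.0))

# Each calibration cycle samples several known noise references;
# detected power is modeled with a small multiplicative noise term.
T_refs = np.array([80.0, 150.0, 300.0])   # reference temperatures, K
noise = 1.0 + 0.01 * rng.standard_normal((n_cycles, T_refs.size))
v = g[:, None] * T_refs[None, :] * noise  # detected reference powers

# Normalizing each reference channel by its known temperature turns
# the radiometer output into an ensemble of realizations of the same
# underlying gain-fluctuation process.
g_ensemble = v / T_refs[None, :]          # shape: (time, realization)

g_hat = g_ensemble.mean(axis=1)           # ensemble-mean gain estimate
```

Because the reference channels are realizations of one process, the ensemble mean tracks the gain fluctuation with reduced variance, and statistics computed across the ensemble can be compared with theoretical predictions of the measurement uncertainty.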
Analyses of measured signals have traditionally been limited to a single measurement series, i.e., a single realization. Fourier analysis, Auto-Regressive Moving Average modeling, and Empirical Mode Decomposition are example signal processing techniques that are applied to single realizations of a process. An ensemble set comprises multiple realizations of a process. There are many practical applications of statistical analysis using ensemble sets of data. As an example, the Intergovernmental Panel on Climate Change, in its Fourth Assessment Report, used an ensemble set derived from multi-model analyses (output from different climate models) to estimate the uncertainty in climate model projections. The value of statistical analysis of ensemble sets gives rise to a need for new means of producing ensemble sets of data.
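The distinction between single-realization and ensemble analysis can be sketched with a hypothetical non-stationary process (the ensemble size, record length, and variance model below are assumed): ensemble statistics are taken across realizations at each instant, so time-varying moments that a single realization cannot resolve pointwise become directly observable.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical non-stationary process: variance grows linearly in time.
n_realizations, n_time = 500, 100
t = np.arange(n_time)
sigma_t = np.sqrt(1.0 + 0.1 * t)   # time-varying standard deviation
ensemble = sigma_t * rng.standard_normal((n_realizations, n_time))

# Ensemble statistics: average across realizations at each time step,
# rather than along a single measurement series.
mean_t = ensemble.mean(axis=0)     # ensemble mean at each instant
var_t = ensemble.var(axis=0)       # ensemble variance at each instant
```

Here var_t recovers the growth of the process variance with time, whereas a time average over any one realization would mix the changing statistics together.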
It would be advantageous to alleviate the problems associated with the prior art and to produce ensemble data sets for which empirical analysis has a direct link to stochastic process theory. Accordingly, it would be desirable to provide a method and system that addresses at least some of the problems identified above.