In absorption spectroscopy, such as TDLAS and cavity-enhanced spectroscopy, an absorption spectrum measured by an instrument allows gas and/or isotope concentrations to be calculated by interpretation of the spectral data. Spectral data are collected from the observed response of a detector to an optical beam that has interacted with a test gas. The interpretation of the spectrum can be as simple as measuring the height of detector peaks, but is generally more complicated, requiring an algorithm that models the spectrum. TDLAS, including all types of cavity-enhanced spectroscopy, generally involves measuring an absorption spectrum and then numerically treating the measured spectrum to extract the relevant concentration and/or isotope data. (See, for example, Wolfgang Demtröder, Laser Spectroscopy, 2nd ed., 1996.)
In general, as spectral complexity increases, so does the complexity of the spectral treatment/fitting. Currently, the most common method for numerical processing of absorption spectra is fitting with one of a number of absorption line-shape models, including Lorentzian, Gaussian, Voigt, Galatry, speed-dependent Voigt, and Rautian. These line-shape models use a number of adjustable parameters to match the height and width of a theoretical spectral feature to those of the measured absorption feature using an iterated least-squares approach. Line-shape models also have the advantage of being easily adjusted for temperature and pressure variations in the sample. However, line-shape models sometimes fail for complicated absorption spectra because they are unable to robustly distinguish between closely spaced absorption features. Even if a fit can distinguish between strongly overlapping features, the possibility of non-unique least-squares minimizations exists. Furthermore, the fits are often computationally impractical or do not converge reliably. Conventional spectral models have difficulty fitting complicated spectra that involve multiple absorption lines and multiple absorbing species. This is particularly challenging when attempting to measure trace compounds in mixtures of strongly absorbing species, or absorption features for which it is difficult to measure a baseline (no-absorption) value. In addition, line-shape models can be challenging to implement when the optimal line shape is not known or when the absorption parameters are either not known or appear to be incorrect in the literature.
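The iterated least-squares line-shape fit described above can be sketched numerically. The following is a minimal illustration, not taken from the source: it assumes a Gaussian (Doppler-limited) profile with a linear baseline, and all parameter values and the synthetic "measured" spectrum are invented for demonstration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical single-line model: Gaussian profile with adjustable
# amplitude, center, and width, plus a linear baseline (b0 + b1*nu).
def gaussian_line(nu, amp, nu0, sigma, b0, b1):
    return amp * np.exp(-0.5 * ((nu - nu0) / sigma) ** 2) + b0 + b1 * nu

# Synthetic "measured" absorbance spectrum (illustrative values only)
rng = np.random.default_rng(0)
nu = np.linspace(-1.0, 1.0, 400)            # detuning, arbitrary units
true = gaussian_line(nu, 0.8, 0.05, 0.12, 0.02, 0.0)
meas = true + rng.normal(0.0, 0.005, nu.size)

# Iterated least-squares adjustment of the line-shape parameters
p0 = [0.5, 0.0, 0.1, 0.0, 0.0]              # initial guesses
popt, pcov = curve_fit(gaussian_line, nu, meas, p0=p0)
amp_fit, nu0_fit, sigma_fit = popt[:3]

# The fitted peak area is proportional to concentration via Beer's law
area = amp_fit * sigma_fit * np.sqrt(2.0 * np.pi)
```

In practice a Voigt or more elaborate profile replaces the Gaussian, and the temperature and pressure dependence of the width parameters is folded into the model; the fitting loop itself is unchanged.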
For more complicated absorption spectra, the current method of choice is a basis-set model, which uses measured spectral information (referred to as basis spectra) rather than simulating the spectral information with a line-shape model. Typically, a single-component mixture (e.g., a sample gas in a nonabsorbing background such as N2) is introduced into the absorption spectrometer and the spectrum of that single component is measured. A thorough basis-set treatment requires collection of a matrix of basis-set spectra covering the temperature and pressure ranges over which the absorption measurements will be made. This process is repeated for each of the absorbing gases that are expected to be present in the measured gas matrix. Alternatively, basis-set spectra can be collected for mixtures of gases that are expected to remain at constant concentration ratios within the test gas mixture. In this way, a library of basis spectra for each component or mixture of components is built. The measured spectrum from the analyzed gas is then processed with a least-squares fitting algorithm that generates a linear combination of the basis-set spectra. The multiplicative factor for each basis spectrum is then used to determine the concentration of each of the subcomponents, given the known concentration(s) at which the basis spectrum was collected.
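The linear-combination step above reduces to an ordinary least-squares solve. The following is a minimal sketch, not from the source: the two components, their reference concentrations, and the synthetic mixture spectrum are all illustrative assumptions.

```python
import numpy as np

# Hypothetical basis-set fit: each column of B is the measured spectrum
# of one pure component at a known reference concentration c_ref[j].
rng = np.random.default_rng(1)
nu = np.linspace(0.0, 1.0, 200)
basis_a = np.exp(-0.5 * ((nu - 0.3) / 0.05) ** 2)   # component A at 100 ppm reference
basis_b = np.exp(-0.5 * ((nu - 0.6) / 0.08) ** 2)   # component B at 50 ppm reference
B = np.column_stack([basis_a, basis_b])
c_ref = np.array([100.0, 50.0])                      # ppm

# "Measured" mixture spectrum: 0.4x basis A plus 1.5x basis B, plus noise
meas = 0.4 * basis_a + 1.5 * basis_b + rng.normal(0.0, 0.002, nu.size)

# Least-squares solve for the multiplicative factor of each basis spectrum
factors, *_ = np.linalg.lstsq(B, meas, rcond=None)

# Scale the reference concentrations by the fitted factors
conc = factors * c_ref   # approximately [40, 75] ppm
```

A full treatment would interpolate within the matrix of basis spectra collected over temperature and pressure before solving, but the core computation is this linear solve.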
The basis-set model has the disadvantage that, especially for a complicated mixture, many basis-set spectra must be individually measured. The basis-set method is also limited by the availability and purity of individual gases for measurement of the basis sets, and by the non-uniqueness of least-squares minimization solutions when a very large number of basis-set spectra are used. Furthermore, for measurements made in extreme environments such as cross-stack experiments, it is often difficult to measure basis-set spectra at the temperature and pressure conditions of interest. Finally, system effects and the presence of unknown absorbers can make basis-set methods impractical and/or imprecise.
Previous work by Haaland et al. has attempted to address the last of these issues by adding estimated basis spectra to the measured basis spectra (U.S. Pat. Nos. 6,415,233 and 6,687,620). In these techniques, an estimate of a source of spectral variation, often the residuals of a classical least-squares fit, is used to create an additional basis spectrum, which is then used alongside the measured basis spectra in a classical least-squares treatment of the spectral data. Kane et al. have used a comparable approach for treatment of background signals, in which a singular value decomposition is used to estimate basis spectra for the background (U.S. Pat. Nos. 7,003,436 and 7,092,852). In both of these approaches, estimated spectra are used only for those components of the mixture or other spectral features that the instruments are unable to measure.
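The SVD-based background estimation can be sketched as follows. This is an illustrative reconstruction of the general idea, not the patented implementations: the blank scans, the sinusoidal background shape, and all numerical values are invented for demonstration.

```python
import numpy as np

# Hypothetical augmentation of a measured basis set with an
# SVD-estimated background spectrum before a classical least-squares fit.
rng = np.random.default_rng(2)
nu = np.linspace(0.0, 1.0, 150)

# One measured basis spectrum (pure component at a reference concentration)
basis = np.exp(-0.5 * ((nu - 0.5) / 0.06) ** 2)

# Several blank (no-analyte) scans containing a structured background drift
blanks = np.array([(0.1 + 0.02 * k) * np.sin(2 * np.pi * nu) for k in range(5)])
blanks += rng.normal(0.0, 0.001, blanks.shape)

# SVD of the blank scans; the leading right singular vector estimates a
# background basis spectrum that the instrument cannot measure directly
_, s, vt = np.linalg.svd(blanks, full_matrices=False)
bg_basis = vt[0]                       # dominant background shape

# Classical least squares with the augmented basis set
meas = 0.7 * basis + 0.15 * np.sin(2 * np.pi * nu) + rng.normal(0.0, 0.001, nu.size)
A = np.column_stack([basis, bg_basis])
coef, *_ = np.linalg.lstsq(A, meas, rcond=None)
analyte_factor = coef[0]               # close to the true value of 0.7
```

Note the sign of a singular vector is arbitrary; the least-squares solve absorbs it into the background coefficient, leaving the analyte factor unaffected.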
A problem remains in that there is no way to interpret complicated absorption spectra other than a tedious basis-set spectral fit, with all of the associated drawbacks of a basis-set approach. An object of the invention was therefore to provide a faster, more easily implemented approach to absorption-spectral interpretation, based on the characteristics of the sample gas mixture and its absorption spectrum, for measuring test gas and/or isotope concentrations in gas mixtures with complicated spectra.