Light can convey information as data. When light interacts with matter, for example, it carries away information about the physical and chemical properties of the matter. A property of the light, for example its intensity, may be measured and interpreted to provide information about the matter with which it interacted. That is, the data carried by the light through its intensity may be measured to derive information about the matter. Similarly, in optical communications systems, light is modulated to convey information over an optical transmission medium, for example a fiber optic cable, and the signal is measured when it is received to derive the information.
In general, a simple measurement of light intensity is difficult to convert to information because it likely contains interfering data. That is, several factors may contribute to the intensity of light, even in a relatively restricted wavelength range. It is often impossible to adequately measure the data relating to one of these factors since the contribution of the other factors is unknown.
It is possible, however, to derive information from light. An estimate may be obtained, for example, by separating light from several samples into wavelength bands and performing a multiple linear regression of the intensity of these bands against the results of conventional measurements of the desired information for each sample. For example, a polymer sample may be illuminated so that light from the polymer carries information such as the sample's ethylene content. Light from each of several samples may be directed to a series of ten bandpass filters, which separate predetermined wavelength bands from the light. Light detectors following the bandpass filters measure the intensity of each light band. If the ethylene content of each polymer sample is measured using conventional means, a multiple linear regression of the ten measured bandpass intensities against the measured ethylene content for each sample may produce an equation such as:

y = a0 + a1w1 + a2w2 + . . . + a10w10  (Equation 1)

where y is ethylene content, an are constants determined by the regression analysis, and wn is the light intensity for the nth wavelength band.
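As a numerical sketch of Equation 1, the regression constants an can be obtained by ordinary least squares over measured band intensities and conventionally measured ethylene contents. All data below is invented purely for illustration:

```python
import numpy as np

# Hypothetical illustration of Equation 1: regress ten bandpass
# intensities against conventionally measured ethylene content.
rng = np.random.default_rng(0)

n_samples, n_bands = 25, 10
W = rng.uniform(0.1, 1.0, size=(n_samples, n_bands))   # band intensities w1..w10
true_coefs = rng.normal(size=n_bands)                  # invented "true" relationship
y = 2.0 + W @ true_coefs + rng.normal(scale=0.01, size=n_samples)  # ethylene content

# Least-squares fit of y = a0 + a1*w1 + ... + a10*w10
A = np.column_stack([np.ones(n_samples), W])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)
a0, a = coefs[0], coefs[1:]

# Estimate ethylene content of a new sample from its band intensities
w_new = rng.uniform(0.1, 1.0, size=n_bands)
y_est = a0 + a @ w_new
```

The fitted constants may then be applied to band intensities of subsequent samples, subject to the accuracy limitations discussed below.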
Equation 1 may be used to estimate ethylene content of subsequent samples of the same polymer type. Depending on the circumstances, however, the estimate may be unacceptably inaccurate since factors other than ethylene may affect the intensity of the wavelength bands. These other factors may not change from one sample to the next in a manner consistent with ethylene.
A more accurate estimate may be obtained by compressing the data carried by the light into principal components. To obtain the principal components, spectroscopic data is collected for a variety of samples of the same type of light, for example from illuminated samples of the same type of polymer. For example, the light samples may be spread into their wavelength spectra by a spectrograph so that the magnitude of each light sample at each wavelength may be measured. This data is then pooled and subjected to a linear-algebraic process known as singular value decomposition (SVD). SVD is at the heart of principal component analysis, which should be well understood in this art. Briefly, principal component analysis is a dimension-reduction technique that takes m spectra with n independent variables and constructs a new set of eigenvectors that are linear combinations of the original variables. The eigenvectors may be considered a new set of plotting axes. The primary axis, termed the first principal component, is the vector that describes the most data variability. Subsequent principal components describe successively less sample variability, until only noise is described by the higher-order principal components.
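The pooling and SVD step can be sketched as follows. The spectra, component shapes and noise level below are synthetic assumptions made only to illustrate the technique:

```python
import numpy as np

# Synthetic illustration: m spectra, each a mixture of two underlying
# spectral shapes plus a small amount of noise. All values are invented.
rng = np.random.default_rng(1)

wavelengths = np.linspace(400.0, 700.0, 200)             # n = 200 wavelengths
shape1 = np.exp(-(((wavelengths - 500.0) / 30.0) ** 2))  # hypothetical band shapes
shape2 = np.exp(-(((wavelengths - 600.0) / 20.0) ** 2))

m = 40
mags = rng.uniform(0.5, 2.0, size=(m, 2))                # varying amounts per sample
spectra = mags @ np.vstack([shape1, shape2])
spectra += rng.normal(scale=1e-3, size=spectra.shape)

# Singular value decomposition of the pooled data; the rows of Vt are
# the normalized principal component directions
U, s, Vt = np.linalg.svd(spectra, full_matrices=False)

# The singular values fall off sharply after the true number of
# components; the remainder describe only noise
pcs = Vt[:2]                                             # normalized z1, z2
```

Inspecting the singular values s is one common way to decide how many components carry real data rather than noise.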
Typically, the principal components are determined as normalized vectors. Thus, each component of a light sample may be expressed as xnzn, where xn is a scalar multiplier and zn is the normalized component vector for the nth component. That is, zn is a vector in a multi-dimensional space where each wavelength is a dimension. As should be well understood, normalization determines values for a component at each wavelength so that the component maintains its shape and so that the length of the principal component vector is equal to one. Thus, each normalized component vector has a shape and a magnitude, so that the components may be used as the basic building blocks of all light samples having those principal components. Accordingly, each light sample may be described by the combination of the normalized principal components multiplied by the appropriate scalar multipliers:

x1z1 + x2z2 + . . . + xnzn.
The scalar multipliers xn may be considered the “magnitudes” of the principal components in a given light sample when the principal components are understood to have a standardized magnitude as provided by normalization.
Because the principal components are orthogonal, they may be used in a relatively straightforward mathematical procedure to decompose a light sample into component magnitudes that accurately describe the data in the original sample. Since the original light sample may also be considered a vector in the multi-dimensional wavelength space, the dot product of the original signal vector with a principal component vector is the magnitude of the original signal in the direction of the normalized component vector. That is, it is the magnitude of the normalized principal component present in the original signal. This is analogous to breaking a vector in a three-dimensional Cartesian space into its X, Y and Z components. The dot product of the three-dimensional vector with each axis vector, assuming each axis vector has a magnitude of 1, gives the magnitude of the three-dimensional vector in each of the three directions. The dot product of the original signal with some other vector that is not orthogonal to the three axes provides redundant data, since this magnitude is already contributed by two or more of the orthogonal axes.
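A minimal sketch of this dot-product decomposition, using two synthetic orthonormal component vectors (built here with a QR factorization purely for illustration):

```python
import numpy as np

# Sketch: decompose a signal into magnitudes along orthonormal
# components by dot products, analogous to projecting onto X, Y, Z axes.
rng = np.random.default_rng(2)

n = 100
# Construct two orthonormal "principal component" vectors (synthetic)
Q, _ = np.linalg.qr(rng.normal(size=(n, 2)))
z1, z2 = Q[:, 0], Q[:, 1]

# Compose a signal with known component magnitudes
x1_true, x2_true = 3.0, -1.5
signal = x1_true * z1 + x2_true * z2

# Recover each magnitude as the dot product with the component;
# orthogonality guarantees the components do not interfere
x1 = signal @ z1
x2 = signal @ z2
```

Because z1 and z2 are orthonormal, each dot product recovers exactly one magnitude, unaffected by the other component.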
Because the principal components are orthogonal, or perpendicular, to each other, the dot, or direct, product of any principal component with any other principal component is zero. Physically, this means that the components do not interfere with each other. If data is altered to change the magnitude of one component in the original light signal, the other components remain unchanged. In the analogous Cartesian example, reduction of the X component of the three dimensional vector does not affect the magnitudes of the Y and Z components.
Principal component analysis provides the fewest orthogonal components that can accurately describe the data carried by the light samples. Thus, in a mathematical sense, the principal components are components of the original light that do not interfere with each other and that represent the most compact description of the entire data carried by the light. Physically, each principal component is a light signal that forms a part of the original light signal. Each has a shape over some wavelength range within the original wavelength range. Summing the principal components produces the original signal, provided each component has the proper magnitude.
The principal components comprise a compression of the data carried by the total light signal. In a physical sense, the shape and wavelength range of the principal components describe what data is in the total light signal while the magnitude of each component describes how much of that data is there. If several light samples contain the same types of data, but in differing amounts, then a single set of principal components may be used to exactly describe (except for noise) each light sample by applying appropriate magnitudes to the components.
The principal components may be used to accurately estimate information carried by the light. For example, suppose samples of a certain brand of gasoline, when illuminated, produce light having the same principal components. Spreading each light sample with a spectrograph may produce wavelength spectra having shapes that vary from one gasoline sample to another. The differences may be due to any of several factors, for example differences in octane rating or lead content.
The differences in the sample spectra may be described as differences in the magnitudes of the principal components. For example, the gasoline samples might have four principal components. The magnitudes xn of these components in one sample might be J, K, L and M, whereas in the next sample the magnitudes may be 0.94J, 1.07K, 1.13L and 0.86M. As noted above, once the principal components are determined, these magnitudes exactly describe their respective light samples.
Refineries desiring to periodically measure octane rating in their product may derive the octane information from the component magnitudes. Octane rating may be dependent upon data in more than one of the components. Octane rating may also be determined through conventional chemical analysis. Thus, if the component magnitudes and octane rating for each of several gasoline samples are measured, a multiple linear regression analysis may be performed for the component magnitudes against octane rating to provide an equation such as:

y = a0 + a1x1 + a2x2 + a3x3 + a4x4  (Equation 2)

where y is octane rating, an are constants determined by the regression analysis, and x1, x2, x3 and x4 are the first, second, third and fourth principal component magnitudes, respectively.
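Equation 2 can be sketched numerically in the same way as Equation 1. The component magnitudes, octane ratings and constants below are invented for illustration only:

```python
import numpy as np

# Hypothetical sketch of Equation 2: regress four principal component
# magnitudes against octane ratings from conventional chemical analysis.
rng = np.random.default_rng(3)

n_samples = 20
X = rng.uniform(0.5, 2.0, size=(n_samples, 4))    # magnitudes x1..x4 per sample
true_a = np.array([85.0, 1.2, -0.7, 2.5, 0.4])    # invented constants a0..a4
octane = true_a[0] + X @ true_a[1:] + rng.normal(scale=0.01, size=n_samples)

# Least-squares fit of y = a0 + a1*x1 + a2*x2 + a3*x3 + a4*x4
A = np.column_stack([np.ones(n_samples), X])
a, *_ = np.linalg.lstsq(A, octane, rcond=None)

# Predict octane for a new sample from its component magnitudes
x_new = np.array([1.0, 1.1, 0.9, 1.2])
y_pred = a[0] + a[1:] @ x_new
```

Because only four component magnitudes are regressed rather than many raw band intensities, the fit concentrates on the data that actually varies between samples.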
Using Equation 2, which may be referred to as a regression vector, refineries may accurately estimate octane rating of subsequent gasoline samples. Conventional systems perform regression vector calculations by computer, based on spectrograph measurements of the light sample by wavelength. The spectrograph system spreads the light sample into its spectrum and measures the intensity of the light at each wavelength over the spectrum wavelength range. If the regression vector in the Equation 2 form is used, the computer reads the intensity data and decomposes the light sample into the principal component magnitudes xn by determining the dot product of the total signal with each component. The component magnitudes are then applied to the regression equation to determine octane rating.
To simplify the procedure, however, the regression vector is typically converted to a form that is a function of wavelength so that only one dot product is performed. Each normalized principal component vector zn has a value over all or part of the total wavelength range. If each wavelength value of each component vector is multiplied by the regression constant an corresponding to the component vector, and if the resulting weighted principal components are summed by wavelength, the regression vector takes the following form:

y = a0 + b1u1 + b2u2 + . . . + bnun  (Equation 3)

where y is octane rating, a0 is the first regression constant from Equation 2, bn is the sum of the products of each regression constant an from Equation 2 and the value of its respective normalized principal component vector at wavelength n, and un is the intensity of the light sample at wavelength n. Thus, the new constants define a vector in wavelength space that directly describes octane rating. The regression vector in the form of Equation 3 represents the dot product of a light sample with this vector.
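The conversion from Equation 2 to Equation 3, and the equivalence of the two procedures, can be sketched as follows. The component vectors and constants are synthetic assumptions:

```python
import numpy as np

# Sketch: collapse Equation 2 into a single wavelength-space regression
# vector (Equation 3): b = sum over components of a_n * z_n, so one dot
# product with the spectrum replaces the per-component decomposition.
rng = np.random.default_rng(4)

n_wl = 150
Q, _ = np.linalg.qr(rng.normal(size=(n_wl, 4)))
Z = Q.T                                # four orthonormal components z1..z4 (rows)
a0 = 85.0
a = np.array([1.2, -0.7, 2.5, 0.4])    # regression constants a1..a4 (invented)

# Weight each component by its regression constant and sum by wavelength
b = a @ Z                              # the wavelength-space regression vector

# A sample spectrum with known component magnitudes
x = np.array([1.0, 1.1, 0.9, 1.2])
spectrum = x @ Z

# Two equivalent predictions
y_two_step = a0 + a @ (Z @ spectrum)   # decompose into magnitudes, then regress
y_one_dot = a0 + b @ spectrum          # single dot product, as in Equation 3
```

Both routes give the same number; the single dot product simply folds the decomposition and regression into one precomputed vector.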
Normalization of the principal components provides the components with an arbitrary value for use during the regression analysis. Accordingly, it is very unlikely that the dot product result produced by the regression vector will be equal to the actual octane rating. The number will, however, be proportional to the octane rating. The proportionality factor may be determined by measuring octane rating of one or more samples by conventional means and comparing the result to the number produced by the regression vector. Thereafter, the computer can simply scale the dot product of the regression vector and spectrum to produce a number approximately equal to the octane rating.
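The scaling step might be sketched as follows, with invented numbers standing in for the raw dot-product results and the conventionally measured octane rating:

```python
# Sketch: calibrate the proportionality factor between the raw
# dot-product result and true octane rating using one sample whose
# octane was measured by conventional means. Numbers are invented.
raw_reference = 17.42        # hypothetical dot product for a reference sample
octane_reference = 87.0      # octane measured by conventional analysis
scale = octane_reference / raw_reference

# Thereafter, scale the dot product for each new sample
raw_new = 18.10              # hypothetical dot product for a new sample
octane_estimate = scale * raw_new
```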
In a conventional spectroscopy analysis system, a laser directs light to a sample through a bandpass filter, a beam splitter, a lens and a fiber optic cable. Light is reflected back through the cable and the beam splitter to another lens and into a spectrograph. The spectrograph separates the light from the illuminated sample by wavelength so that a detection device, such as a charge-coupled device (CCD) detector, can measure the intensity of the light at each wavelength. The CCD detector is controlled by a controller and cooled by a cooler.
The detection device measures the intensity of the light from the spectrograph at each wavelength and outputs this data digitally to a computer, which stores the light intensity over the wavelength range. The computer also stores a previously derived regression vector for the desired sample property, for example octane, and sums the products of the light intensity and the regression vector value at each wavelength over the sampled wavelength range, thereby obtaining the dot product of the light from the substance and the regression vector. Since this number is proportional to octane rating, the octane rating of the sample is identified.
Since the spectrograph separates the sample light into its wavelengths, a detector is needed that can detect and distinguish the relatively small amounts of light at each wavelength. Charge-coupled devices provide high sensitivity throughout the visible spectral region and into the near infrared with extremely low noise. These devices also provide high quantum efficiency, long lifetime, imaging capability and solid-state characteristics. Unfortunately, however, charge-coupled devices and their required operational instrumentation are very expensive. Furthermore, the devices are sensitive to environmental conditions. In a refinery, for example, they must be protected from explosion, vibration and temperature fluctuations and are often placed in protective housings approximately the size of a refrigerator. The power requirements, cooling requirements, cost, complexity and maintenance requirements of these systems have made them impractical in many applications.
Multivariate optical computing (MOC) is a powerful predictive spectroscopic technique that incorporates a multi-wavelength spectral weighting directly into analytical instrumentation. This is in contrast to traditional data collection routines, in which digitized spectral data is post-processed with a computer to correlate the spectral signal with analyte concentration. Previous work has focused on performing such spectral weightings by employing interference filters called multivariate optical elements (MOEs). Other researchers have realized comparable results by controlling the staring or integration time for each wavelength during the data collection process. All-optical computing methods have been shown to produce similar multivariate calibration models, but the measurement precision of an optical computation is superior to that of a traditional digital regression.
MOC has been demonstrated to simplify the instrumentation and data analysis requirements of a traditional multivariate calibration. Specifically, the MOE utilizes a thin-film interference filter to sense the magnitude of a spectral pattern. A no-moving-parts spectrometer highly selective to a particular analyte may be constructed by designing simple calculations based on the filter transmission and reflection spectra. Other research groups have also performed optical computations through the use of weighted integration intervals, acousto-optical tunable filters, digital mirror arrays and holographic gratings.
The measurement precision of digital regression has been compared to that of various optical computing techniques, including MOEs, positive/negative interference filters and weighted-integration scanning optical computing. In a high-signal condition where the noise of the instrument is limited by photon counting, optical computing offers a higher measurement precision than its digital regression counterpart. The enhancement in measurement precision for scanning instruments is related to the fraction of the total experiment time spent on the most important wavelengths. While the detector integrates or co-adds measurements at these important wavelengths, the signal increases linearly while the noise increases as the square root of the signal. Another contribution to this measurement precision enhancement is a combination of the Fellgett and Jacquinot advantages, which is possessed by MOE optical computing.
While various methodologies have been developed to enhance measurement accuracy in optical analysis systems, the industry requires a system in which the spectral range of the illumination source can be controlled; in which light can be shone directly onto a sample with or without fiber optic probes; and in which the reflected or transmitted light can be analyzed in real time or near real time.