1. Field of the Invention
The present invention relates to a method of and an apparatus for measuring the concentration of an object component in a fluid (gas or liquid) sample by measuring the ratio of light absorbed by (or transmitted through) the sample.
2. Description of the Related Art
When, as shown in FIG. 5, incident light having a strength I0 travels a distance L in a sample gas or liquid containing a component (an object gas or liquid) of concentration C, the strength I of the light becomes, according to the Lambert-Beer law,

I = I0 · exp(−K · C · L)   (10)
where K is the absorption constant of the component.
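The law above can be sketched in a few lines of code. The following is a minimal illustration (not part of the claimed apparatus); the function names and the numeric values used are chosen here for demonstration only.

```python
import math

def transmitted_strength(i0, k, c, l):
    """Lambert-Beer law: I = I0 * exp(-K * C * L)."""
    return i0 * math.exp(-k * c * l)

def concentration_from_ratio(i, i0, k, l):
    """Invert the law: C = -ln(I / I0) / (K * L)."""
    return -math.log(i / i0) / (k * l)
```

For strictly monochromatic light the two functions are exact inverses, so a measured intensity ratio I/I0 determines the concentration directly.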
The above formula is applicable only when the light used is strictly monochromatic. When a non-dispersive monochromator such as an optical filter is used to obtain the monochromatic light, which is often the case in ordinary absorption measuring apparatus, the light is not purely monochromatic. Further, ordinary compact absorption measuring devices are designed to obtain a longer effective traveling distance L of the light, so that the light is not parallel in the sample cell but is reflected irregularly by the inner wall of the sample cell. Since, in this case, the traveling distance L of the light passing through the sample is not unique, the Lambert-Beer law does not apply exactly to actual absorption measuring devices.
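The deviation caused by non-monochromatic light can be illustrated numerically. In the sketch below (illustrative only, with arbitrarily chosen absorption constants), the detector sees the sum of two spectral components with different absorption constants; the resulting effective absorbance −ln(I/I0) is then no longer proportional to the concentration C, i.e. the Lambert-Beer law no longer holds for the combined light.

```python
import math

def detected(i0, c, l, k1=0.2, k2=1.0):
    # Two spectral components, equal weight, with different
    # absorption constants k1 and k2 (illustrative values).
    return 0.5 * i0 * (math.exp(-k1 * c * l) + math.exp(-k2 * c * l))

def absorbance(c):
    # Effective absorbance -ln(I/I0) for path length L = 1.
    return -math.log(detected(1.0, c, 1.0))
```

Doubling the concentration does not double this effective absorbance, whereas with a single absorption constant (k1 = k2) the linearity of the law is recovered.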
Conventionally, therefore, the light absorption (or transmissivity) measured for a sample gas is not used by itself but is compared with that of a reference gas having a known concentration of the object gas. In this case, two absorption (or transmissivity) measurements must be conducted, one for the sample gas and the other for the reference gas. A problem here is that various differences may arise in the measuring conditions of the two measurements: the strength of the source light, the light absorbing characteristics of the filters used in the optical paths, the quality of the photodetectors and amplifiers, contamination of the sample cells, temperature, pressure, and so on. Such differences in the measuring conditions cause a zero drift or a span drift in the measured data.
In practice, the calibration curve method is widely used, in which a calibration curve (or a working curve) showing the relationship between the concentration and the transmissivity is first established. The calibration curve is prepared as follows. First, a plurality of standard samples having different known concentrations within the measurement range (for example, 0%, 20%, 40%, 60%, 80%, and 100% of the full scale) are prepared. The light transmissivities of these standard samples are then measured, and the measured data are plotted against the known concentrations. Since the measured data do not generally form a straight line, a linearization device is required for determining corrected scales or performing a line approximation. Further, since the characteristics of the calibration curve change as the components of the device age, regular correction of the calibration curve, as often as every several (normally one through six) months, is recommended (see JIS (Japanese Industrial Standards) K0055 `GENERAL RULES OF GAS ANALYZER CORRECTION`, JIS K0115 `GENERAL RULES OF ABSORPTION SPECTROSCOPY`, and JIS K0151 `INFRARED GAS ANALYZERS`).
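The calibration curve method described above can be sketched as a simple piecewise-linear lookup. The standard-point values below are hypothetical, chosen only to illustrate the procedure; an actual analyzer would use its own measured transmissivities for the standard samples.

```python
# Hypothetical standard points: (measured transmissivity, known
# concentration in % of full scale), in order of decreasing
# transmissivity (i.e. increasing concentration).
STANDARDS = [(1.00, 0.0), (0.75, 20.0), (0.58, 40.0),
             (0.45, 60.0), (0.36, 80.0), (0.30, 100.0)]

def concentration(t):
    """Piecewise-linear interpolation of the calibration curve:
    given a measured transmissivity t, return the concentration."""
    if t >= STANDARDS[0][0]:
        return STANDARDS[0][1]      # clamp at zero concentration
    if t <= STANDARDS[-1][0]:
        return STANDARDS[-1][1]     # clamp at full scale
    for (t_hi, c_lo), (t_lo, c_hi) in zip(STANDARDS, STANDARDS[1:]):
        if t_lo <= t <= t_hi:
            frac = (t_hi - t) / (t_hi - t_lo)
            return c_lo + frac * (c_hi - c_lo)
```

The line approximation between adjacent standard points is one simple form of the linearization mentioned above; aging of the device components would be handled by periodically re-measuring the standard samples and replacing the table.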