Certain methods of medical treatment of patients may require the monitoring of one or more analyte concentrations in bodily fluids. Such analytes include gases such as oxygen (O2) and carbon dioxide (CO2), electrolytes such as sodium (Na+) and potassium (K+), metabolites such as glucose, and biomolecules such as proteins. In addition, such analytes may comprise entities suspended in the fluid, e.g. red blood cells, whose concentration is expressed by a parameter called haematocrit.
To facilitate such monitoring, the patient may be connected to a monitoring system which incorporates one or more sensor(s) for the detection of the analytes described above. This arrangement has a distinct advantage over a regime in which patient samples are drawn intermittently and analysed on a standalone device such as a blood gas analyser. First, because the sensor is constantly exposed to the patient sample, changes in analyte levels are monitored in real time. This enables the clinician to view trends in analyte concentration and to respond more quickly to critical changes, reducing the risk that potential complications in the medical treatment of the patient are detected late.
In addition, this method is less susceptible to artefacts that may arise from drawing the sample from the patient and transporting it to the standalone measuring device. Continuous monitoring therefore reduces the risk of inappropriate clinical decisions being made on the basis of inaccurate measurements arising from sample handling errors. For example, CO2 and O2 levels in the blood can change in a sampling syringe over time, giving rise to an inaccurate reading of the extracted patient sample. In addition, red blood cell lysis caused by these extra sample handling steps can lead to changes in the concentration of blood electrolytes, particularly K+.
Many different types of sensor are known. For example, WO99/17107 discloses a glucose sensor for continuous monitoring. Many other examples are known to those who are skilled in the art.
In order for a sensor to provide an accurate reading, its output must first be calibrated by exposing the sensor to one or more fluids which contain a known concentration of the analyte of interest. Using interpolation and knowledge of the response curve of the sensor signal versus analyte concentration, it is possible to construct a calibration curve over the required measurement range. This calibration curve can then be used to determine the concentration of analyte in a test sample from the measured response of the sensor when it is exposed to that test sample. The concept of calibration is well known to those who are skilled in the art.
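The calibration procedure described above can be sketched as follows. This is a minimal illustration only, assuming a sensor with a linear response (signal = a x concentration + b) calibrated against two fluids of known concentration; the function names and numeric values are hypothetical and not taken from the original text.

```python
def calibrate(c1, s1, c2, s2):
    """Derive the sensitivity (a) and offset (b) of a linear sensor
    from two calibration fluids with known analyte concentrations
    c1 and c2 and the corresponding measured signals s1 and s2."""
    a = (s2 - s1) / (c2 - c1)   # sensitivity: signal change per unit concentration
    b = s1 - a * c1             # offset: signal at zero concentration
    return a, b

def concentration(signal, a, b):
    """Invert the calibration curve to recover the analyte
    concentration of a test sample from its measured signal."""
    return (signal - b) / a

# Two-point calibration, e.g. for glucose in mmol/L (illustrative values):
a, b = calibrate(2.0, 10.0, 10.0, 42.0)
print(concentration(26.0, a, b))   # -> 6.0
```

A real sensor may have a non-linear response, in which case more calibration points and a suitable model (e.g. a polynomial or logarithmic fit) would replace the two-point linear fit shown here.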
A fundamental property of a sensor is that the calibration coefficients determined during calibration often change with time and temperature. A change in the calibration coefficients with time is often referred to as sensor drift. For example, pH sensors based on Ion-Sensitive Field-Effect Transistor (ISFET) technology have a significant baseline (offset) drift and can show changes in both offset and sensitivity with changes in temperature as for instance is disclosed in “ISFET, Theory and Practice” by P. Bergveld in the Proc. IEEE Sensor Conference Toronto, October 2003, Pages 1-26. Amperometric oxygen sensors often show changes in sensitivity with time. Other drift characteristics of typical sensors are well known to those who are skilled in the art.
Several examples of drift correction and temperature dependence correction exist in the prior art. For example, in order to overcome the reduction in measurement accuracy due to sensor drift, the sensor can be periodically recalibrated by exposing it to one or more solution(s) with known analyte concentrations. The new calibration coefficients can then be applied to remove the contribution of sensor drift to the sensor reading. In addition, to compensate for the temperature dependence of the sensor readings, it is necessary to know the temperature at which the calibration was performed (Tcal) and the temperature of the sample being measured (Tsample). If the temperature dependence of the sensor is well characterised, it is possible to apply a transformation function f(Tcal,Tsample) to the sensitivity (a) and/or offset (b) respectively to remove the effect of the temperature change on the sensor response since the last calibration.
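The temperature compensation step can be sketched as below. This assumes, purely for illustration, a sensor whose sensitivity varies linearly with temperature by a relative coefficient ALPHA and whose offset varies by an absolute coefficient BETA; both coefficients are hypothetical values standing in for the characterised temperature dependence mentioned above.

```python
ALPHA = 0.01   # relative sensitivity change per degree C (assumed value)
BETA = 0.05    # offset change in signal units per degree C (assumed value)

def compensate(a_cal, b_cal, t_cal, t_sample):
    """Transform the calibration coefficients determined at the
    calibration temperature Tcal to the sample temperature Tsample,
    i.e. apply f(Tcal, Tsample) to sensitivity and offset."""
    dt = t_sample - t_cal
    a = a_cal * (1.0 + ALPHA * dt)   # temperature-corrected sensitivity
    b = b_cal + BETA * dt            # temperature-corrected offset
    return a, b

def concentration(signal, a, b):
    return (signal - b) / a

# Calibration performed at 25 C, sample measured at body temperature (37 C):
a, b = compensate(4.0, 2.0, 25.0, 37.0)
print(concentration(26.0, a, b))
```

In practice the form of f(Tcal,Tsample) depends on the sensor type and must be determined experimentally during sensor characterisation.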
For sensors that exhibit a drift in temperature dependence with time, the situation is much more challenging. For these sensors, it is necessary to express the transformation function as a function of time, f(Tcal,Tsample,t). Obtaining an accurate model for this transformation is often extremely difficult, due to, for example, variation in sensor characteristics from batch to batch.
For sensors with poorly characterised drift and/or temperature dependence, the problems described above are usually addressed by making intermittent readings of the patient sample on a second, standalone analyser. The analyte level measurement from this second analyser can then be compared with the reading from the sensor in the continuous monitoring system at the time the sample was taken. Any differences between these readings are attributed to sensor drift and the calibration coefficients can be adjusted to ensure the concentration measured by the continuous monitoring system at the time of sampling and the concentration measured by the second analyser are the same.
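The realignment step described above can be sketched as follows. This is a simplified illustration assuming that the observed drift is absorbed entirely into the offset coefficient of a linear sensor; the original text does not specify which coefficient is adjusted, and in practice sensitivity may be realigned as well.

```python
def realign_offset(a, signal_at_draw, reference_concentration):
    """Compute a new offset b' such that the continuous-monitoring
    reading at the moment the sample was drawn equals the
    concentration reported by the standalone analyser, i.e.
    (signal_at_draw - b') / a == reference_concentration."""
    return signal_at_draw - a * reference_concentration

# Sensor with sensitivity a = 4.0; at the time of the blood draw the
# raw signal was 26.0 and the standalone analyser reported 5.5 mmol/L:
b_new = realign_offset(4.0, 26.0, 5.5)
print(b_new)   # -> 4.0
```

After realignment, readings between this and the next realignment still carry any drift accumulated since, which motivates the drawbacks discussed next.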
This process is known in the prior art as realignment. This realignment method has several significant drawbacks. First, the need to withdraw a sample from the system for analysis creates the potential for sample handling errors, e.g. gas levels in the sample may change in the interval between sampling and measurement. In addition, due to limitations on the sample volume that can be drawn from a patient, the frequency of realignment is limited. This can compromise the reliability of the measurement, as sensor drift between realignments is not accounted for. The realignment process is also time consuming and inconvenient for the user. Finally, the realignment process does not allow the temperature dependence of the sensor to be measured accurately, so significant changes in temperature after the realignment can lead to errors in the measurement results.