Breath is a unique bodily fluid. Unlike blood, urine, feces, saliva, sweat, and other bodily fluids, it is available on a breath-to-breath, and therefore continuous, basis. It can be sampled non-invasively, and because the lung receives the entire blood flow from the right side of the heart, measurements of analytes/compounds in breath correlate strongly and reproducibly with blood concentrations. Breath is also less likely than other bodily fluids to transfer serious infections, and collection of samples is straightforward and painless.
Further, exhaled breath is saturated with water vapor (100% relative humidity) at 37° C. (body temperature), and thus can be considered an aerosol. If the temperature of the collected sample is maintained at 37° C. or higher, it will remain in this state and can be treated as a gas for compounds that are insoluble in water or readily diffuse out of water. In this instance, sensors designed to work with gaseous media are preferable. For compounds that are highly water soluble and likely to remain in solution, the exhaled breath sample can be collected as a condensate when cooled. This liquid can then be analyzed with sensors designed for liquid-based analyses. Compounds likely to be detectable in the gas phase are typically lipophilic (hydrophobic), such as the intravenous anesthetic agent propofol, while compounds likely to be detected in the liquid phase are hydrophilic, such as glucose, lactic acid, and perhaps even electrolytes. Thus, an exhaled breath sample can be handled to produce a gaseous matrix for certain compounds and sensors, and a liquid matrix for others. In instances where it is desirable to detect more than one compound (e.g., detection of both hydrophilic and hydrophobic molecules in the breath), the sample can be split, with a portion maintained as a gas and a portion condensed as a liquid.
An example of the unique characteristics of breath is the correlation between blood concentrations of drugs, both licit and illicit, and their concentrations in the breath. The concentration of a drug in a patient's body is generally determined both by the amount of drug ingested by the patient over a given time period (the dosing regimen) and by the rate at which the drug is metabolized and eliminated by the body.
Historically, pharmaceutical compositions were delivered to patients according to standard doses based on the patient's weight. In the early 1970s, it was discovered in epileptic patients that pharmaceutical treatment with dosages adjusted according to the blood concentration of the drug was far more effective, demonstrating better seizure control and fewer side effects than treatment with dosages adjusted according to patient weight.
It is now generally accepted that for many medications, the concentration in the blood stream must be monitored in order to ensure an optimal, therapeutic drug effect (therapeutic drug monitoring [TDM]). Medications are ineffective if blood concentration levels are too low; moreover, certain medications are toxic to the body when concentration levels in the blood are too high. By monitoring blood serum drug levels, medication dosage can be individualized within a therapeutically effective range. A convenient means of monitoring blood drug concentrations would also be valuable for medications that do not require constant monitoring.
For example, patients prescribed tricyclic (or tetracyclic) antidepressants (TCAs) require frequent monitoring of blood levels. TCAs work by inhibiting the reuptake of serotonin and norepinephrine from the synaptic cleft. This group includes among its members the tricyclics amitriptyline, imipramine, nortriptyline, and clomipramine, and the tetracyclics maprotiline and amoxapine. Although highly effective for the treatment of depression, TCAs have a high incidence of side effects, some of which may be life-threatening, especially when blood concentrations are too high. Consequently, TCAs have been largely replaced by selective serotonin reuptake inhibitors (SSRIs) for the treatment of depression. In addition to the toxic effects of TCAs due to inhibition of sodium and potassium channels, which occur primarily in the heart and brain, TCAs can also cause side effects due to inhibition of norepinephrine reuptake and the resulting elevated norepinephrine levels. The latter can cause sedation, manic episodes, profuse sweating, palpitations, increased blood pressure, tachycardia, twitches and tremors of the tongue or upper extremities, and weight gain.
Although SSRIs are no more effective than TCAs, and may actually be slightly less so, TCAs are less attractive because they are more toxic than SSRIs and pose a greater threat of overdose. A TCA overdose produces central nervous system and cardiovascular toxicity, making the relative risk of death by overdose with a TCA 2.5 to 8.5 times that with the commercially available SSRI fluoxetine. A further danger with TCAs is that side effects, as well as constant blood sampling, will persuade the patient to discontinue treatment. Studies indicate that patients taking a classical antidepressant (TCA or MAOI) are three times as likely to drop out of treatment due to side effects and constant monitoring as patients taking SSRIs. Interestingly, recent studies have shown that some SSRIs (and a similar group of drugs, selective norepinephrine reuptake inhibitors [SNRIs]) have a “cut-off” below which the drugs are far less effective than at doses above it, but because of large inter-patient variability this cut-off can only be identified from blood concentrations, not from dosage. Thus, although drug manufacturers have tried to develop medications so that “one dose fits all”, TDM might be applied more readily, improving drug effectiveness while reducing side effects and overdose, if a simple and efficacious method of determining blood concentrations were available. Exhaled breath drug monitoring holds such promise.
Thus, many therapeutically effective medications that require TDM are less likely to be prescribed by physicians in view of the inconvenience of constant blood sampling and the resulting lack of patient compliance. Further, in the present era of cost-effective healthcare, prescription and testing costs have become a primary consideration in all aspects of laboratory operation. Individualization of drug therapy contributes to cost-effective patient management through detection and elimination of drug side effects; detection of unusual metabolism and adjustment of dosage based on individual metabolism; and adjustment of dosage based on the effects of disease.
Drug level testing is especially important for patients administered medications where the margin of safety between therapeutic effectiveness and toxicity is narrow (a low therapeutic index). In addition to TCAs, examples of such medications include procainamide and digoxin, which are used to treat arrhythmias and heart failure; Dilantin (phenytoin) and valproic acid, which are used to treat seizures; gentamicin and amikacin, which are antibiotics used to treat infections; and lithium, which is a mainstay of treatment for bipolar disorder.
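The narrow-margin concept above can be sketched as a simple check of a measured level against a therapeutic window. This is a minimal illustrative sketch only: the function name and the digoxin window used in the example (0.8-2.0 ng/mL, a commonly cited historical range) are assumptions for exposition, not clinical guidance.

```python
def classify_level(concentration, low, high):
    """Classify a measured drug concentration against a therapeutic window.

    For narrow-therapeutic-index drugs, levels below the window are
    ineffective and levels above it risk toxicity.
    """
    if concentration < low:
        return "subtherapeutic"
    if concentration > high:
        return "potentially toxic"
    return "therapeutic"


# Illustrative only: historical digoxin window of 0.8-2.0 ng/mL.
print(classify_level(1.2, 0.8, 2.0))  # therapeutic
print(classify_level(2.5, 0.8, 2.0))  # potentially toxic
```

A practical TDM workflow would apply such a check to each new measurement, flagging out-of-window results for dose adjustment.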
Currently available tests for TDM are invasive, difficult to administer, frequently require the patient to be in a health care setting (versus home), and/or require an extended period of time for analysis. Such tests are generally complex, requiring a laboratory to perform the analysis. Healthcare providers' offices rarely possess appropriate testing technology to analyze blood samples and must therefore send the samples to an off-site laboratory or refer the patient to the laboratory to have their blood drawn, which results in an extended time period for analysis. In the process of transfer to and from a laboratory, there is a greater likelihood that samples will be lost or mishandled, or that incorrect results will be provided to the healthcare provider, which could be detrimental to the patient's health and well-being. Further, the on-site test devices that are presently available for assessing drug concentration levels in blood are expensive. Reference laboratories typically conduct complex and expensive toxicological analyses, using sophisticated techniques such as gas chromatography-mass spectrometry, to determine the quantity of a medication.
A further problem with present methods of TDM is that the concentration in the blood may not correlate with the concentration at the “effect site”. It has been found that the concentration of drug in the blood may not directly reflect the concentrations at the cellular or receptor level, where drugs exert their biological effects. The pharmacodynamics and pharmacokinetics (PD/PK) of many drugs also exhibit wide inter- and intra-individual variation. The drug concentration at the site of action correlates best with clinical response; however, it is typically difficult or impossible to measure. Although plasma drug concentrations often provide an informative and feasible measurement for defining the pharmacodynamics of medications, they do not consistently provide an accurate report of drug disposition in a patient.
For medications appearing in breath, the concentration in breath appears to correlate best with the “free” drug in the body, that is, the drug available to produce the therapeutic effect. The concentration in exhaled breath is therefore an excellent measure of the drug fraction that is most important for the healthcare provider to know in order to make informed decisions about dose regimens. Although the fraction of drug bound to protein and whole blood is essentially constant over a wide range of plasma and blood concentrations for the vast majority of subjects (i.e., free drug concentrations can be deduced from plasma and whole blood concentrations under normal circumstances), various pathological circumstances can arise that make this correlation problematic in a patient (e.g., drug-drug interactions, massive blood loss and transfusion, protein-losing syndromes, etc.).
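The deduction described above, free concentration from total concentration, reduces to a single multiplication when the unbound fraction is assumed constant. The sketch below is illustrative only; the function name and the example unbound fraction of 0.02 (consistent with propofol being roughly 97%-99% bound, as discussed later in this document) are assumptions for exposition.

```python
def free_concentration(total_plasma, fraction_unbound):
    """Estimate free (unbound) drug concentration from total plasma
    concentration, assuming a constant unbound fraction.

    This assumption holds under normal circumstances but breaks down in
    pathological states (drug-drug interactions, massive transfusion,
    protein-losing syndromes), as noted in the text.
    """
    return total_plasma * fraction_unbound


# Illustrative: 3.0 ug/mL total drug with an unbound fraction of 0.02
# yields 0.06 ug/mL free drug, the fraction driving the clinical effect.
print(free_concentration(3.0, 0.02))
```

The appeal of breath measurement is that it tracks the free fraction directly, so this constant-fraction assumption is not needed.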
There are generally four processes by which drug disposition takes place: absorption, distribution, metabolism, and excretion. Absorption of a drug is generally dictated by the route of administration (e.g., intravenous (IV), intramuscular (IM), subcutaneous (SC), topical, inhalation, oral, rectal, or sublingual), drug factors (e.g., lipid solubility), and host factors (e.g., gastric emptying time). Alterations in drug absorption may affect the therapeutic effectiveness of the drug.
Factors related to drug distribution include body fat, protein binding, and membranes. Because lipid-soluble drugs tend to dissolve in fat, such drugs can build up to very high, potentially toxic, levels in a patient with a high percentage of body fat. Several available drugs have a high affinity for serum proteins, and protein binding limits the therapeutic effectiveness of a drug. Membranes such as the blood-brain barrier (BBB) can also make it difficult for a drug to be properly distributed.
All tissues in the body can contribute to the metabolism of a drug. For example, the liver, kidney, lungs, skin, brain, and gut can all be involved in metabolizing a drug, although in most cases metabolism in the liver predominates. Physiologically, metabolism can increase the activity, decrease the activity, or have no effect on the activity of a drug. Because metabolism of a drug differs from one patient to another, the dosage required for a drug can differ from patient to patient.
Routes of drug elimination include the kidney, liver, gastrointestinal tract, lungs, sweat, lacrimal fluid, and milk. All of these processes (absorption, distribution, metabolism, and excretion), which can occur at varying times after drug administration, affect the level of pharmacologically effective drug in a patient. Thus, current methods of analyzing a blood sample to assess plasma drug concentrations provide only a snapshot for defining the pharmacodynamics of a drug and do not consistently provide an accurate report of drug disposition in a patient.
An example of the value of continuous or frequent breath monitoring of drug concentrations is during anesthesia. Anesthesiologists use many sophisticated and expensive devices to monitor the vital signs of, and to provide respiratory and cardiovascular support for, patients undergoing surgical procedures. Such monitors provide the anesthesiologist with information about the patient's physiologic status and verify that the appropriate concentrations of delivered gases are administered.
Anesthesia can be achieved using either inhalational or intravenous (IV) anesthetics, or a combination of both. Inhalation anesthetics are substances that are brought into the body via the lungs and are distributed with the blood into the different tissues. The main target of inhalation anesthetics (or so-called volatile anesthetics) is the brain. Some commonly used inhalational anesthetics include enflurane, halothane, isoflurane, sevoflurane, desflurane, and nitrous oxide. Older volatile anesthetics include ether, chloroform, and methoxyflurane. Intravenous (IV) anesthetics frequently used clinically are barbiturates, opioids, benzodiazepines, ketamine, etomidate, and propofol. Currently, however, volatile anesthetics are seldom used alone. Rather, a combination of inhalation anesthetics and intravenous drugs is administered, in a process known as “balanced anesthesia.” During administration of balanced anesthesia, for example, opioids are administered for analgesia, along with neuromuscular blockers for relaxation, anesthetic vapors for unconsciousness, and benzodiazepines for amnesia.
Inhalational Anesthetics
With inhalation agents, the concentration of drug delivered is metered and the variation between patients in the depth of anesthesia resulting from known inhaled concentrations of agents is relatively narrow, permitting the anesthesiologist to confidently assume a particular level of anesthesia based on the concentration of anesthetic gas delivered.
Monitors used during the administration of inhalational anesthesia generally display inspired and exhaled gas concentrations. Most use side-stream monitoring wherein gas samples are aspirated from the breathing circuit through long tubing lines. A water trap, desiccant and/or filter may be used to remove water vapor and condensation from the sample. Gas samples are aspirated into the monitor at a low rate to minimize the amount of gas removed from the breathing circuit and, therefore, the patient's tidal volume. These gas monitors continuously sample and measure inspired and exhaled (end-tidal) concentrations of respiratory gases. The monitored gases are both the physiologic gases found in the exhaled breath of patients (oxygen, carbon dioxide, and nitrogen), as well as those administered to the patient by the anesthesiologist in order to induce and maintain analgesia and anesthesia.
There are a number of techniques to monitor respiratory gases, including mass spectroscopy, Raman spectroscopy, infrared (IR) light spectroscopy, IR photoacoustics, piezoelectric sensing (U.S. Pat. No. 4,399,686 to Kindlund), resonance, polarography, fuel cells, paramagnetic analysis, and magnetoacoustics. Infrared detector systems are the most commonly used systems for monitoring gas concentrations.
A major disadvantage of conventional gas monitors is that they only determine the concentrations of certain types of gases, or a limited number of gases, and most do not measure N2 or any medications delivered by other routes (e.g., intravenously). These monitors are also fragile and expensive, and they require frequent calibration and maintenance. For these reasons, not all purchasers of anesthesia machines buy anesthesia gas monitors, relying instead on anesthesia gas vaporizers to control anesthetic gas concentration. Unfortunately, these vaporizers frequently go out of calibration, and the anesthesiologist may administer too much or too little anesthesia.
Intravenous (IV) Anesthetics
Another method of providing anesthesia includes IV anesthetics. At present, a major impediment to the wider use of IV anesthetics, rather than inhaled anesthetics, has been the inability to precisely determine the quantity of drug required to provide a sufficient “depth of anesthesia” without accumulating an excessive amount.
Propofol, for example, is an agent that is widely used as a short-acting IV anesthetic. Its physicochemical properties are hydrophobic and volatile. It is usually administered as a constant IV infusion in order to deliver and maintain a specific plasma concentration. Although its metabolism is mainly hepatic and rapid, there is significant inter-patient variability in the plasma concentration achieved with a known dose. However, the depth of anesthesia for a known plasma concentration is far less variable, and it is therefore highly desirable to be able to evaluate plasma (or, ideally, free, unbound drug) concentrations in real time to accurately maintain anesthetic efficacy. [“A Simple Method for Detecting Plasma Propofol,” Akihiko Fujita, M D, et al., Feb. 25, 2000, International Anesthesia Research Society]. The authors describe a means of measuring plasma (free) rather than total propofol using headspace gas chromatography (GC) with solid-phase microextraction. This is preferable because plasma (free) propofol is responsible for the anesthetic effect. Prior methods of monitoring propofol concentration in blood include high-performance liquid chromatography (HPLC) and GC. It has been reported that 97%-99% of propofol is bound to albumin and red blood cells after IV injection, while the remainder exists in blood as free drug. HPLC and GC detect the total propofol concentration, which does not correlate as well with the anesthetic effect as the plasma (free) propofol level. Studies of exhaled breath propofol concentrations show an excellent correlation with plasma (free) concentrations and are therefore likely to better predict the effect of the drug.
Propofol may also be monitored in urine. Metabolic processes control the clearance of propofol from the body, with the liver being the principal eliminating organ. [“First-pass Uptake and Pulmonary Clearance of Propofol,” Jette Kuipers, et al., Anesthesiology, V91, No. 6, December 1999]. In a study, 88% of the dose of propofol was recovered in urine as hydroxylated and conjugated metabolites.
The aim of any dosage regimen in anesthesia is to titrate the delivery rate of a drug to achieve the desired pharmacologic effect in any individual patient while minimizing unwanted toxic side effects. Certain drugs, such as propofol, alfentanil, and remifentanil, have a close relationship between free blood concentration and effect; thus, the administration of the drug can be improved by basing the dosage regimen on the pharmacokinetics of the agent. [Kenny, Gavin, Target-Controlled Infusions—Pharmacokinetics and Pharmacodynamic Variations, http://www.anaesthesiologie.med.unierlangen.de/esctaic97/a_Kenny.htm]. Target-controlled infusion (TCI) is one means of administering an IV anesthetic agent using a computer to control the infusion pump. Using a computer with a pharmacokinetic program permits control of a desired plasma concentration of an agent, such as propofol. These systems do not sample the blood in real time, but use previously acquired population PD/PK parameters to provide a best estimate of the predicted blood concentration. However, even if TCI systems produced the exact target blood concentrations, it would not be possible to know whether that concentration was satisfactory for each individual patient and for different points during the surgical procedure.
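The TCI principle described above, predicting plasma concentration from the infusion rate using population parameters rather than blood samples, can be sketched with a one-compartment model. This is a deliberately simplified illustration: real TCI systems for agents such as propofol use three-compartment models with published population parameter sets, and the volume and clearance values below are illustrative assumptions, not any published parameter set.

```python
def simulate_plasma_concentration(infusion_mg_per_min, minutes,
                                  volume_l=30.0, clearance_l_per_min=1.5,
                                  dt=0.01):
    """Predict plasma concentration (mg/L) during a constant infusion.

    One-compartment model with first-order elimination, integrated with
    a simple Euler loop: dA/dt = infusion_rate - (CL/V) * A, Cp = A/V.
    """
    amount_mg = 0.0
    k_elim = clearance_l_per_min / volume_l  # elimination rate constant
    for _ in range(int(minutes / dt)):
        amount_mg += (infusion_mg_per_min - k_elim * amount_mg) * dt
    return amount_mg / volume_l


# At steady state, Cp approaches rate / CL = 6.0 / 1.5 = 4 mg/L;
# after 300 min the simulation has essentially converged to that value.
print(round(simulate_plasma_concentration(6.0, 300), 2))  # 4.0
```

A TCI controller inverts this kind of model: given a target Cp, it computes the infusion rate needed to reach and hold it. As the text notes, even a perfect prediction says nothing about whether that concentration suits the individual patient, which is the gap breath monitoring aims to fill.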
Among the technologies used to process and monitor electrical brain signals is the Bispectral Index (BIS) monitor, which processes the EEG. It is an indirect monitor of depth of anesthesia: the BIS monitor translates EEG waves from the brain into a single number depicting the depth of anesthesia on a scale from 0 to 100. In addition, neural networks have been used to classify sedation concentration from the power spectrum of the EEG signal. However, these technologies are costly and not entirely predictive.
Artificial neural networks have also been developed which use the patient's age, weight, heart rate, respiratory rate, and blood pressure to predict depth of anesthesia. The networks integrate physiological signals and extract meaningful information. Certain systems use mid-latency auditory evoked potentials (MLAEP) which are wavelet transformed and fed into an artificial neural network for classification in determining the anesthesia depth. [Depth of Anesthesia Estimating & Propofol Delivery System, by Johnnie W. Huang, et al., Aug. 1, 1996, http://www.rpi.edu/˜royr/roy_descpt.html].
An apparatus and method for total intravenous anesthesia delivery is also disclosed in U.S. Pat. No. 6,186,977 to Andrews. This patent describes a method in which the patient is monitored using at least one of electrocardiogram (EKG), a blood oxygen monitor, a blood carbon dioxide monitor, inspiration/expiration oxygen, inspiration/expiration carbon dioxide, a blood pressure monitor, a pulse rate monitor, a respiration rate monitor, and a patient temperature monitor.
Combination Inhalational and Intravenous (IV) Anesthetics
As previously stated, anesthesia can be achieved using either inhalational or IV anesthetics, or a combination of both (“balanced anesthesia”). Monitoring techniques for inhalational and IV anesthesia differ because of the nature of the drug delivery. Monitors for inhalational anesthesia delivery generally comprise systems that monitor the breathing circuit. Monitors for IV anesthesia generally comprise physiologic monitoring of the patient rather than monitoring of the concentration of the drug in the blood. Because of this bifurcation of monitoring systems, anesthesiologists must utilize separate systems when switching between drug delivery methods or when utilizing a combination of methods.
Accordingly, there is a need in the art for methods of improving therapeutic drug monitoring (such as for IV and/or inhalationally delivered anesthetics) and of monitoring endogenous compounds related to health conditions that are non-invasive, rapid, and inexpensive to administer. There is also a need for a monitoring system capable of continuously monitoring drug concentration levels (to assess drug disposition) and of continuously monitoring endogenous compound levels (such as glucose levels in exhaled breath). Further, there is a need for non-invasive monitoring systems capable of being used at remote locations and/or non-laboratory settings to monitor the therapeutic efficacy of a drug or to assess patient health by monitoring endogenous compounds present in exhaled breath.
Other Applications for Intermittent or Continuous Breath Monitoring
In addition to monitoring blood concentrations of licit medications using exhaled breath either intermittently or continuously, exhaled breath measurements can be used to monitor a wide range of other compounds and correlate them with blood concentrations. For instance, breath can be used to determine whether an individual has used an illicit drug. Likewise, breath can be used to determine blood glucose concentrations, thus freeing diabetics from having to perform frequent blood sticks to determine their glucose concentrations. Breath glucose can also be measured continuously during surgery in the operating room and/or in intensive care units, since tight glucose control has been shown to improve wound healing and reduce the incidence of post-operative infection.
The breath may also be an excellent medium for diagnosing acute and/or chronic “stress” in humans, which can occur in various settings (e.g., injured humans stressed due to disease, accidents, or military actions, etc.; or non-injured humans stressed due to extreme/excessive exercise or environments that require an extremely high level of vigilance, such as the long-term operation of military aircraft under battlefield conditions). Various stress markers, including those suggesting inflammation, which may appear in the breath, include but are not limited to concentrations of lactic acid, ketones, cortisol, testosterone, ATP, ADP, AMP, adenosine, prostaglandins (e.g., PGF2a), leukotrienes, cytokines, interleukins, melatonin, 6-sulfatoxymelatonin, HIF-1α, HSP70, and myogenic regulatory factors.
For example, lactic acid in blood is an indicator of the severity of shock (hypoperfusion) and of numerous disease states. It is usually measured intermittently by drawing blood samples. Intermittent or continuous breath measurement of lactic acid could revolutionize the care of critically ill patients in the operating room or intensive care unit. Numerous other compounds that indicate disease states also appear in breath. The ability to monitor these compounds in real time, either intermittently or continuously, without the delay of sending specimens to a laboratory, could dramatically improve the care of hospitalized, home care, or ambulatory patients.