Titration of a drug is commonly used by clinicians to achieve a desired effect. In general, variability in patient response to titrated drugs may be expected because an identical amount of drug may produce widely dissimilar effects in different patients. During a typical titration, therefore, clinicians may give an initial dose of a drug and observe a patient's reaction. If the desired effect is not achieved within an expected time frame (e.g., if a dose is too weak), additional increments of the drug may be administered. Each additional administration may be followed by an observation period until the desired effect is ultimately achieved. The natural variability of patient response to drugs has maintained titration as a time-honored process in the armamentarium of the clinician. The traditional titration process, however, is time-consuming and labor-intensive and may be vulnerable to human error.
When a clinician performs a painful procedure on a patient, administration of sedative and/or analgesic agents, together with careful monitoring or supervision of that administration, may be required. The clinician may thus often be multi-tasked physically and/or cognitively, potentially increasing the risk of mistakes.
The traditional manual titration process is typically multi-stepped and may generally be summarized as follows: (a) selecting an initial, conservative bolus dose of a given drug based on, among other factors, the patient's demographic data (such as age, gender, weight, and height), drawing on, among other sources, personal memory, a manual, or the drug's package insert; (b) delivering the initial bolus of the given drug; (c) waiting a certain time period before assessing the effect or effects of the administered drug; (d) assessing the effect or effects of the drug (possibly in the absence of equipment to objectively and consistently monitor the patient's physiological or clinical parameter(s) affected by the drug); (e) if required, selecting the size of a supplemental bolus to deliver; (f) manually delivering the supplemental bolus of the given drug; and (g) repeating steps (c) to (f) as required.
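The lettered steps above can be sketched as a simple control loop. The sketch below is illustrative only: the function names, the 0.5 mg/kg starting dose, the age adjustment, the observation period, and the supplement fraction are all hypothetical placeholders, not values taken from any clinical protocol.

```python
# Illustrative sketch of the manual titration loop described above.
# All doses, thresholds, and timings are hypothetical placeholders.

import time

def select_initial_bolus(weight_kg, age):
    """Step (a): a conservative initial bolus scaled to patient weight.
    The 0.5 mg/kg figure and age adjustment are arbitrary examples."""
    dose = 0.5 * weight_kg
    if age > 65:          # assume a more conservative dose for older patients
        dose *= 0.8
    return dose

def titrate(weight_kg, age, assess_effect, deliver_bolus,
            wait_s=120, supplement_fraction=0.5, max_boluses=5):
    """Steps (b)-(g): deliver, wait, assess, and supplement as required.

    assess_effect() -> True once the desired effect is observed (step (d)).
    deliver_bolus(mg) performs the actual administration (steps (b)/(f)).
    """
    dose = select_initial_bolus(weight_kg, age)
    for _ in range(max_boluses):
        deliver_bolus(dose)              # steps (b)/(f): give the bolus
        time.sleep(wait_s)               # step (c): observation period
        if assess_effect():              # step (d): desired effect reached?
            return True
        dose *= supplement_fraction      # step (e): smaller supplemental bolus
    return False                         # effect not achieved within the cap
```

The loop structure makes the labor cost visible: each iteration ties the clinician to one delivery-wait-assess cycle, which is exactly the burden the later infusion modes aim to reduce.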
On the other hand, computer-controlled drug delivery systems may essentially take clinicians out of the “cognitive loop” of decision making with regard to drug administration. This “all or nothing” aspect of entirely computer-controlled drug delivery systems has hindered the acceptance of these systems by clinicians.
Rate controlled infusion (RCI) describes an infusion mode whereby clinicians define an infusion in terms of volume or mass of drug per unit time or, when normalized to patient weight, in terms of volume or mass of drug per patient weight per unit time. Generally, when using RCI, clinicians will give a loading dose infusion at a higher infusion rate to rapidly attain a desired drug level within the patient's body for a short period of time and then lower the infusion rate so that the desired drug level is maintained.
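The two-phase RCI profile described above (a brief high-rate loading infusion followed by a lower maintenance rate) can be sketched as follows; the rate figures and durations are illustrative assumptions, not recommendations.

```python
# Sketch of a weight-normalized, two-phase RCI profile
# (mass of drug per kg per minute); all figures are illustrative only.

def rci_rate(t_min, weight_kg,
             loading_rate=100.0,     # mcg/kg/min during the loading phase
             maintenance_rate=10.0,  # mcg/kg/min thereafter
             loading_duration=2.0):  # minutes of loading
    """Infusion rate (mcg/min) at time t_min: a short high-rate loading
    phase to rapidly attain the desired drug level, then a lower
    maintenance rate to hold it."""
    per_kg = loading_rate if t_min < loading_duration else maintenance_rate
    return per_kg * weight_kg

def total_delivered(duration_min, weight_kg, dt=0.1):
    """Total drug delivered (mcg) over the interval, by step summation."""
    steps = int(duration_min / dt)
    return sum(rci_rate(i * dt, weight_kg) * dt for i in range(steps))
```

Note that within each phase the rate is constant; the clinician, not an algorithm, chooses when to step the rate down.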
Target controlled infusion (TCI) allows clinicians to work in terms of target or effect site concentrations (ESC) instead of actual infusion rates. TCI algorithms use a pharmacokinetic (PK) model to predict target or effect site concentrations of a given drug at a given site in a patient with given demographic data such as weight, height, age and gender. Therefore, unlike the piecewise-constant rates of RCI, the infusion rate time profile or waveform of a TCI infusion generally varies with time to attain a desired target concentration.
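The time-varying rate profile of TCI can be illustrated with a deliberately minimal sketch. Real TCI systems use multi-compartment PK models (commonly three compartments plus an effect site) with population parameters fitted to the patient's demographics; the single compartment, the Euler stepping, and all parameter values below are simplifying assumptions for illustration only.

```python
# Minimal one-compartment PK sketch showing why a TCI rate profile
# varies with time.  The model dC/dt = rate/v_d - k_e * C and all
# parameter values are illustrative assumptions, not a clinical model.

def tci_rates(target, v_d, k_e, duration_min, dt=0.1):
    """At each Euler step, choose the infusion rate (mass/min) that
    moves the predicted plasma concentration C toward the target.

    target : desired concentration (e.g., mcg/mL)
    v_d    : volume of distribution (mL)
    k_e    : elimination rate constant (1/min)
    """
    c = 0.0
    rates = []
    for _ in range(int(duration_min / dt)):
        # Rate that would reach the target in one step, clipped at zero:
        # solve c + (rate/v_d - k_e*c)*dt = target for rate.
        rate = max(0.0, ((target - c) / dt + k_e * c) * v_d)
        c += (rate / v_d - k_e * c) * dt     # advance the predicted level
        rates.append(rate)
    return rates
```

Even in this toy model the computed rate starts high (a loading phase) and then decays toward the steady-state value k_e * target * v_d, reproducing the non-constant waveform described above.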