Demonstration of instrument dynamic range in mass spectrometry generally involves injecting a single analyte over several orders of magnitude in concentration and observing the instrument response at each level. For example, demonstrating six orders of magnitude of dynamic range may require seven analytical GC runs in order to evaluate the instrument response with the analyte concentration spanning six orders of magnitude. Following such a sequence, re-characterizing the system promptly can be a serious challenge, since the high-concentration standards persist in the system and may be difficult to remove completely. A lengthy process may be required, involving cooling of the inlet, ion source and transfer line, followed by replacement of the inlet liner, septa and o-rings and a bakeout of the autosampler syringe. A waste vial may also need to be solvent stripped and baked out prior to re-use in order to prevent run-to-run analyte carryover. In addition, proper dilution across several orders of magnitude in concentration is a tedious process and prone to error unless proper technique is employed. Purchasing certified standards at each level of the test may add considerable cost to the process.
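The run-count arithmetic above can be sketched as a tenfold (decade) serial dilution series. This is only an illustrative sketch; the starting concentration is hypothetical and is chosen solely to show that spanning six orders of magnitude requires seven concentration levels.

```python
# Hypothetical decade dilution series illustrating the run count:
# spanning six orders of magnitude in concentration requires seven levels.

def decade_series(top_conc_ng_per_ul, orders):
    """Concentrations for a tenfold serial dilution spanning `orders` decades."""
    return [top_conc_ng_per_ul / 10 ** i for i in range(orders + 1)]

levels = decade_series(100.0, 6)  # 100 ng/uL down to 0.0001 ng/uL
print(len(levels))  # seven analytical runs to cover six orders of magnitude
```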
Installation validation for GC-MS often includes analytical testing covering diagnostic requirements such as instrument detection limit (IDL), dynamic range, mass range, mass accuracy, mass stability or any number of requirements set forth by an end user. Demonstration of instrument sensitivity for GC-MS or GC-MS/MS instruments is often performed upon installation using octafluoronaphthalene (OFN) as an analytical test standard. OFN is unusual in offering a high instrument response for a given mass of analyte. This is due in part to the monoisotopic nature of fluorine, the stable polynuclear aromatic structure of the molecular backbone, and the high degree of volatility for its molecular weight. This volatility allows OFN to elute early in a chromatographic GC run, while other compounds of similar molecular weight still reside on the analytical GC column. Using OFN allows sensitivity criteria to be based on the molecular ion at a mass-to-charge ratio (m/z or m/e) of 272 Daltons (Da), a region largely devoid of background chemical noise.
Determining the instrument response characteristics of an analytical tool used for quantitative purposes is fundamental to proper use of that tool. Methods for target compound quantitation often employ a calibration routine in which target analytes of known concentration are analyzed across a predetermined working range of the instrument. Typically, multiple compounds of interest, also known as target compounds or analytes, are composited in solution at known concentrations. A fraction of this solution, also known as a primary standard, can then be diluted to yield several other solutions with known dilutions. For example, a fraction of a primary solution containing 100 ng/uL of 100 analytes can be diluted twofold to yield a second standard containing 50 ng/uL of the same 100 analytes. This second standard can subsequently be diluted in the same fashion to yield a third, 25 ng/uL standard. This process can be repeated as desired to yield a set of standards which can then be individually analyzed. A response curve may be generated for each analyte, correlating instrument response with analyte concentration. An unknown sample may then be analyzed and a quantitative assessment made of each analyte of interest in the sample. The quantitation may be based on a calculated response factor (RF), an average response factor, a relative response factor (RRF, relative to a fixed-concentration internal standard), or an average relative response factor. Additionally, quantitation may be based on interpolation between neighboring calibration points or extrapolation from two or more data points. Instrument response may be based on a total ion current (TIC), an extracted ion current profile (EICP), a product ion intensity or peak area of a selected reaction monitoring (SRM) transition, or a GC detector such as a flame ionization detector (FID), electron capture detector (ECD) or other chromatographic peak detector.
Such methods are well known in the art.
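The serial dilution and response-factor arithmetic described above can be sketched as follows. All concentrations, peak areas and internal-standard values here are hypothetical; in practice these quantities come from integrated chromatographic peaks.

```python
# Sketch of the calibration arithmetic described above.
# All concentrations and peak areas are hypothetical.

def serial_dilution(primary_conc, factor, n_levels):
    """Each successive standard is the previous one diluted by `factor`."""
    return [primary_conc / factor ** i for i in range(n_levels)]

def response_factor(area, conc):
    """RF: instrument response per unit analyte concentration."""
    return area / conc

def relative_response_factor(area, conc, is_area, is_conc):
    """RRF: analyte response normalized to a fixed-concentration internal standard."""
    return (area * is_conc) / (is_area * conc)

concs = serial_dilution(100.0, 2.0, 3)  # twofold dilutions: 100, 50, 25 ng/uL
print(concs)
print(response_factor(5000.0, 50.0))    # 100 area counts per ng/uL
print(relative_response_factor(5000.0, 50.0, 2000.0, 20.0))
```

Quantitating an unknown then amounts to inverting the chosen factor, e.g. concentration = area / RF, or interpolating between the two calibration points bracketing the measured response.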
Although non-linear quantitation techniques can in some cases be used, it is most often desired to limit quantitation to a known linear working range of the instrument in order to minimize quantitation errors. This linear range may be defined in terms of a linear fit, such as an R2 value, or as a maximum deviation, such as a % RSD (relative standard deviation) of the response factors. For many problematic compounds, the response falls off rapidly at lower levels. This may be due to irreversible adsorption, active sites in the chromatographic flow path, etc. Often the response may also roll off at the high end of the calibration curve, for example from using too high an electron multiplier gain in mass spectrometer applications. Since response factors can vary significantly between analytes of interest, it is often desirable to define one working range for one analyte and a different working range for another. Since many standards are typically involved in a calibration routine, much time is spent simply calibrating the instrument before any samples can be run. Additionally, serial dilutions of analytical standards can be prone to cumulative error unless proper technique is employed.
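The two linearity criteria mentioned above, a linear fit (R2) and the %RSD of the response factors, can be computed as in the following sketch. The calibration concentrations and areas are hypothetical and deliberately ideal, so both metrics come out at their best-case values.

```python
import statistics

def percent_rsd(values):
    """%RSD = 100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def r_squared(x, y):
    """Coefficient of determination for an unweighted least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx
    ss_res = sum((b - (slope * a + intercept)) ** 2 for a, b in zip(x, y))
    ss_tot = sum((b - my) ** 2 for b in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical calibration: perfectly linear response, RF = 50 at every level.
concs = [1.0, 5.0, 10.0, 50.0, 100.0]
areas = [50.0, 250.0, 500.0, 2500.0, 5000.0]
print(r_squared(concs, areas))                             # ~1.0 for an ideal detector
print(percent_rsd([a / c for a, c in zip(areas, concs)]))  # identical RFs give 0 %RSD
```

With real data, roll-off at either end of the range would depress R2 and inflate the %RSD, flagging points to exclude from the working range.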
Often the range over which an instrument is calibrated is set by the methodology employed, and demonstration of linearity within this range may need to be regenerated as part of a laboratory standard operating procedure (SOP) or as part of a regulated method. In general, the calibrated range defines a “working range” of the instrument, and calibrations outside of this range are not undertaken. This is due in part to the lengthy process that would be involved in running so many calibration points, but also because it is very difficult to run a blank following high-concentration calibration runs. When high levels of analytes are run at upper limits of detection, achieving an acceptable blank run will nearly always require changing chromatograph components such as septa and injection port liners (in the case of GC) and flushing autosampler solvent waste vials and syringes, and often necessitates baking such components at high temperature.
It is often desirable to assess the fundamental linearity of a detection system apart from problems introduced by poor chromatography, such as irreversible adsorption, decomposition of analytes, etc. This may be referred to as the linear dynamic range of a detector, for example in a mass spectrometer detection system. Such an assessment may occur as part of an instrument qualification (IQ) procedure, wherein the inherent linear dynamic range of a new instrument is validated. In such a case, a non-problematic analyte may be analyzed near the lower detection limit, and again at increasing concentrations toward an upper limit where detector non-linearity or saturation occurs. As instrument sensitivity and linear dynamic range have improved, a significant problem has arisen. Modern instruments may have a linear dynamic range of six orders of magnitude or more. Assessing the full linear dynamic range of such a system in the conventional manner is not only time-consuming, given the many standards that must be run; it also must succeed the first time, otherwise the aforementioned exercises of changing parts, flushing, baking, blanking and the like may have to ensue.
Methods have been devised for analyte quantitation which do not involve running a multitude of calibration standards. Instead, positional isomers of a compound are composited together in a single standard which contains the differing isomeric forms at concentrations bracketing a targeted working range. Being isomers, their physical properties as well as their mass spectra are very similar, and so, in like manner, are their instrument response factors. One such method, entitled “Instrument Performance Standards: A new Concept for fast Routine Performance Checks and Method Development in GC/MS Analysis of Dioxins and Furans” (online at: https://tools.thermofisher.com/content/sfs/posters/PSDFS1-DirkK-InstrumentPerformanceStandards.pdf), describes such methodology. In this method, a set of six differing tetrachlorodibenzodioxin (TCDD) isomers is composited in a single standard at 2, 5, 10, 25, 50 and 100 fg/uL. This constitutes a standard covering a 50-fold calibration range. This standard may be used to validate linearity over this working range using a single injection. In general, the actual linear range of an instrument is far in excess of 50-fold and can approach one million fold or more. The above methodology, while suitable for demonstrating sensitivity and linearity over a limited concentration range, suffers in applicability over wider ranges due to co-elution effects of additional isomers, limits on isomeric purity, and analyte carryover.
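The single-injection linearity check described above can be sketched as follows. The six concentration levels are those stated for the cited poster; the integrated peak areas are hypothetical, chosen to represent a nearly linear detector.

```python
import statistics

# Six TCDD isomers composited in one standard at the levels cited above
# (2-100 fg/uL, a 50-fold range); the peak areas below are hypothetical.
concs_fg_per_ul = [2.0, 5.0, 10.0, 25.0, 50.0, 100.0]
areas = [198.0, 505.0, 990.0, 2520.0, 4985.0, 10050.0]

# One response factor per isomer, all from a single injection.
rfs = [a / c for a, c in zip(areas, concs_fg_per_ul)]
rsd = 100.0 * statistics.stdev(rfs) / statistics.mean(rfs)

print(max(concs_fg_per_ul) / min(concs_fg_per_ul))  # 50-fold calibration range
print(rsd)  # a small %RSD of the RFs indicates linearity across the range
```

The limitation noted above is visible in this framing: extending the bracket much beyond 50-fold would require more isomers at wider-spaced levels, where co-elution, isomeric impurity and carryover begin to distort the individual RFs.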