1. Field of the Invention
The present invention relates generally to magnetic resonance spectroscopy (MRS), as used in the field of medicine for the examination of biochemical or metabolic processes in the human body. The present invention relates in particular to a method for determining and correcting the drift of the resonance frequency during the acquisition of the series of individual spectra.
2. Description of the Prior Art
Magnetic resonance spectroscopy (MRS) is based, as is magnetic resonance tomography (MRT), on the phenomenon of nuclear spin resonance, discovered in 1946, which was first used in basic research to measure the magnetic properties of nuclei. When it was first noted in the 1960s that the magnetic resonance signal (MR signal) of a nucleus is also influenced by its chemical environment, and that this chemical shift can be used to characterize chemical substances, high-resolution NMR in the test tube was established. This has been successfully used ever since in physical, chemical, biochemical, and pharmaceutical research and for the analysis, i.e., structural analysis, of complex macromolecules.
In the early 1980s, it was discovered that, due to its dependence on its chemical environment (water-containing tissue, or fatty tissue), the magnetic resonance signal can form the basis for a non-invasive imaging technique, which, as magnetic resonance tomography (MRT), today continues to be one of the most important examination methods in the field of medicine.
However, it was not overlooked that the imaging signals in magnetic resonance tomography additionally contain chemical information that can be evaluated for the investigation of biochemical reactions or of metabolic processes in a living body. This spatially resolved spectroscopy in the living organism or in the living organ was named “in vivo spectroscopy” (MRS), or “clinical magnetic resonance spectroscopy” in contrast to high-resolution NMR in the test tube, which generally takes place in the laboratory, or in contrast to magnetic resonance tomography (MRT) used purely for imaging.
In the following, the basic physical principles of nuclear spin resonance are briefly explained:
In both MRS and MRT, the subject to be examined (patient or organ) is exposed to a strong, constant magnetic field. This causes the nuclear spins of the atoms in the subject, which were previously oriented randomly, to become aligned. Radio-frequency energy can now excite these "ordered" nuclear spins to a particular oscillation (Larmor precession of the magnetization as a macroscopic quantity). In both MRT and MRS, this oscillation produces the actual measurement signal, which is acquired by means of suitable reception coils. Using non-homogeneous magnetic fields, produced by gradient coils, the measurement subject can be spatially coded in all three spatial directions, which is called "spatial coding" in MRT or "volume excitation" in MRS.
The acquisition and organization of the data in MRS/MRT takes place in k-space (the spatial frequency domain). The MR spectrum and the MRT image, both in the image domain, are obtained from the measured k-space data by means of Fourier transformation.
The volume excitation of the subject takes place by means of slice-selective radio-frequency excitation pulses in all three spatial directions. Generally, these are three sinc-shaped, Gaussian-shaped, or hyperbolic RF pulses that are radiated into the examination subject simultaneously in combination with rectangular or trapezoidal gradient pulses. The radiation of the RF pulses takes place using RF antennas.
By the combination of these pulses, a frequency spectrum in the range of the resonance frequency that is specific for a type of nucleus is radiated into a defined region, which generally is cuboid, of the subject being examined. The nuclei in the selected region (voxel of interest, or VOI) react to this excitation with electromagnetic response signals, which, in the form of a sum signal (FID signal) are detected in a special receive mode of the mentioned RF antennas. Through the switching of an analog-digital converter, the analog signal is sampled, digitized, and stored in a computing unit or is Fourier-transformed, so that a spectrum can be represented on a visualization unit (monitor).
Each type of atomic nucleus has a specific constant (gyromagnetic ratio γ) that defines the resonance frequency of the nucleus type in a given magnetic field according to the relation
  ν = (γ / 2π) · B0
and on the basis of which it can be recognized in a given magnetic field. In medical technology, basic magnetic fields of 0.5–3.0 Tesla are standard, while analytical NMR uses fields of up to 19 Tesla, though with much smaller magnets.
Thus, protons (i.e., individual unbound hydrogen nuclei, 1H) in a magnetic field having strength 1.5 T emit signals at 63.8 MHz, while carbon-13 nuclei (13C) display resonance at 16.1 MHz, and phosphorus-31 nuclei (31P) display resonance at 26 MHz. The signals of the different types of nuclei therefore can be clearly separated from one another, and it makes sense to designate the respective experiment as proton spectroscopy, 13C spectroscopy, or phosphorus spectroscopy.
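The relation ν = (γ / 2π) · B0 can be illustrated with a short calculation; the γ/2π values below are standard literature constants (rounded here for illustration), not values taken from this document:

```python
# Sketch: Larmor frequencies at 1.5 T from the relation nu = (gamma / 2*pi) * B0.
# The gamma/2pi constants (MHz per Tesla) are rounded literature values.
GAMMA_OVER_2PI_MHZ_PER_T = {
    "1H": 42.577,   # protons (hydrogen nuclei)
    "13C": 10.708,  # carbon-13
    "31P": 17.235,  # phosphorus-31
}

def larmor_frequency_mhz(nucleus: str, b0_tesla: float) -> float:
    """Resonance frequency in MHz for a given nucleus at field strength b0_tesla."""
    return GAMMA_OVER_2PI_MHZ_PER_T[nucleus] * b0_tesla

for nucleus in ("1H", "13C", "31P"):
    print(f"{nucleus}: {larmor_frequency_mhz(nucleus, 1.5):.1f} MHz")
```

At 1.5 T this reproduces the frequencies named in the text: about 63.9 MHz for 1H, 16.1 MHz for 13C, and 25.9 MHz for 31P.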
The chemical environment of an atomic nucleus, in particular the bonding electrons, causes minimal changes of the magnetic field strength inside a molecule (designated above as “chemical information”), and thus causes variations—very slight but measurable—of the resonance frequencies of identical atomic nuclei in the Hz range. If the response signals of a substance located in an externally homogenous magnetic field are sorted by frequency and plotted, there results on the abscissa a spectrum of different chemical shifts δ, and thus of different molecules.
This shift δ is indicated in millionths of the resonance frequency (ppm=parts per million), according to the formula
  δ = (ν_substance − ν0) / ν0
and is thus independent of the magnetic field strength. Nonetheless, magnetic resonance spectra are dependent on the magnetic field strength of the basic field, because higher field strengths both separate the individual resonances better and also yield a better signal-to-noise ratio (SNR). Most spectroscopy-capable MR systems in clinical use operate at 1.5 to 3 Tesla. Equally as important as the magnitude of the magnetic field strength are its homogeneity and stability, in order finally to enable measurement of frequency differences of 1 Hz at a basic frequency of 63.8 MHz (1H, or hydrogen).
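The field independence of δ can be made concrete with a short sketch. The 3.2 ppm value used below (roughly the 1H shift of choline) is an illustrative assumption; the reference frequencies follow from the field strengths named in the text:

```python
# Sketch: delta = (nu_substance - nu_0) / nu_0, expressed in ppm.
# The ppm value is field-independent, but the shift in Hz scales with B0.

def chemical_shift_ppm(nu_substance_hz: float, nu_ref_hz: float) -> float:
    """Chemical shift in ppm of a resonance relative to a reference frequency."""
    return (nu_substance_hz - nu_ref_hz) / nu_ref_hz * 1e6

def shift_in_hz(delta_ppm: float, nu_ref_hz: float) -> float:
    """Convert a ppm shift back into Hz at a given reference frequency."""
    return delta_ppm * nu_ref_hz / 1e6

# The same 3.2 ppm shift corresponds to different absolute offsets:
hz_at_1p5t = shift_in_hz(3.2, 63.8e6)    # ~204 Hz at 1.5 T
hz_at_3t = shift_in_hz(3.2, 127.6e6)     # ~408 Hz at 3.0 T
print(round(hz_at_1p5t, 1), round(hz_at_3t, 1))
```

This doubling of the separation in Hz at 3 T is the reason, stated above, that higher field strengths separate the individual resonances better.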
As mentioned above, clinical MR spectroscopy is understood as meaning MR spectroscopy of living patients, which, often as a supplement to MR tomography, supplies more detailed information concerning the metabolic composition of the tissue being examined, and enables in vivo examinations of metabolic processes in the human being. In clinical MR spectroscopy, a wide range of metabolites (products resulting from metabolism or converted in the metabolic process) are detected whose existence and concentration can provide information about neuronal functionality, metabolic changes and pathological changes in the brain, muscle tissue, and other organs.
Due to the low concentration of the metabolites, limits exist for the volume excitation depending on the type of nucleus, duration of exposure, and the organ. Typical measurement volumes in 1H MRS are approximately 2 cm3, in 31P MRS approximately 30 cm3, and in 13C MRS even more than 30 cm3. For the recording of an evaluable, information-rich spectrum with a correspondingly high SNR, a large number of sequence passes, i.e., a large number of successive individual measurements that are subsequently summed, is often required. Usually, this is up to 500 measurements, lasting several minutes overall.
During this comparatively long acquisition time of up to several minutes, the individual spectra to be recorded are exposed to external influences (e.g., hardware imperfections, temperature changes of the electronic components used), which can cause a change of the resonance frequency of up to a few Hz per hour; this can have a significant, possibly negative influence on the quality of the overall spectrum as the mean value of the individual spectra.
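The degradation of the summed spectrum by frequency drift can be simulated. The following sketch is illustrative only (the scan count, drift rate, and decay constant are assumptions, not parameters from this document): when individual FIDs acquired under a slowly drifting resonance frequency are summed, the resulting spectral line is broadened compared with a drift-free sum:

```python
# Sketch: summing FIDs acquired under a drifting resonance frequency broadens
# the averaged spectral line; a drift-free sum preserves the linewidth.
import numpy as np

n_scans, n_points, dwell = 200, 1024, 1e-3       # 200 averages, 1 ms dwell time
t = np.arange(n_points) * dwell
drift_per_scan_hz = 0.05                          # assumed slow hardware drift

def fid(freq_hz: float) -> np.ndarray:
    """Single exponentially decaying FID at the given resonance offset."""
    return np.exp(2j * np.pi * freq_hz * t) * np.exp(-t / 0.2)

drifted = sum(fid(10.0 + k * drift_per_scan_hz) for k in range(n_scans))
corrected = sum(fid(10.0) for _ in range(n_scans))   # drift removed before summing

def linewidth_bins(signal: np.ndarray) -> int:
    """Full width at half maximum of the magnitude spectrum, in FFT bins."""
    spec = np.abs(np.fft.fft(signal))
    return int(np.sum(spec > spec.max() / 2))

print(linewidth_bins(drifted), linewidth_bins(corrected))
```

With a 10 Hz cumulative drift over 200 scans, the drifted sum shows a markedly wider line than the corrected one, which is exactly the quality loss in the overall spectrum described above.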
In addition, particularly in proton spectroscopy (1H MRS), there is the additional factor that the dominant water signal of the cellular tissue, which is ubiquitous and present in a high concentration, is suppressed by a special acquisition sequence in order to make visible the considerably (by one to two orders of magnitude) weaker signals (e.g. creatine, choline, carnitine, etc.) that are distributed over a range of several ppm. A standard method for water suppression is the CHESS technique (CHEmical Shift Selective Saturation, also called 3-pulse suppression), in which the nuclear spins of the water molecules are first selectively excited by 90° RF pulses and their transverse magnetization is subsequently dephased through the switching of magnetic field gradients (in all three spatial directions: x-, y-, and z-gradients). For an immediately subsequent spectroscopy method (for example an immediately subsequent volume excitation), in the ideal case there is thus no longer any detectable magnetization of the water molecules. In reality, a slight residual water magnetization remains, which is tolerable in the context of the signal-to-noise ratio (signal level of the 1H metabolites of interest relative to the baseline).
A first approach in the prior art for taking into account the shift of the individual spectra due to frequency drift in 1H spectroscopy is to carry out the water suppression in such a way that a significant water signal (in the form of a peak in the spectrum) remains, from which information about frequency shifts can be derived. A disadvantage of this method is that metabolites close to water are situated at the broad foot of the water line, and additional post-processing steps are required to recreate at least the visual impression of an MR spectrum (peaks on a horizontal baseline). A further disadvantage of this method is that when there is a drift of the system frequency the quality of the water suppression is also adversely affected; this method thus is not very robust.
Some researchers have proposed that, after a defined number of repetitions during which the frequency shift can be neglected or linearly interpolated (sequence packet e.g. after the acquisition of 10 individual spectra), a single measurement should be carried out in the form of a reference scan, through which exclusively the exact frequency position of the water signal is determined. Such a sequence series is carried out until a usable spectrum has been obtained. The respective reference measurements supply a basis on which all the repetition cycles (sequence packet) can be corrected relative to one another. A disadvantage of this method is the increased time requirement for the additional reference measurements, which in the end makes this method unattractive.
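The reference-scan approach described above can be sketched as follows. Details such as the packet size, the linear interpolation, and the helper function names are assumptions for illustration; the core idea is that the water frequency measured by the reference scans yields per-FID offsets, which are removed by a time-domain phase ramp before summation:

```python
# Hedged sketch of the reference-scan correction: frequency offsets measured by
# reference scans at the packet boundaries are interpolated linearly across the
# packet, and each FID is demodulated by exp(-2*pi*i*df*t) before summing.
import numpy as np

def correct_packet(fids: np.ndarray, t: np.ndarray,
                   df_start_hz: float, df_end_hz: float) -> np.ndarray:
    """Remove a linearly interpolated frequency offset from each FID in a packet.

    fids: complex array of shape (n_repetitions, n_points)
    t:    sample times in seconds, shape (n_points,)
    """
    n = fids.shape[0]
    offsets = np.linspace(df_start_hz, df_end_hz, n)          # Hz per repetition
    ramp = np.exp(-2j * np.pi * offsets[:, None] * t[None, :])
    return fids * ramp
```

After this correction, all repetition cycles of a packet line up at the same frequency and can be summed without the drift-induced broadening; the drawback noted in the text, the extra measurement time for the reference scans themselves, remains.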
It has also been attempted to minimize the overall measurement time of the spectroscopy measurement in order to keep the influence of frequency changes as small as possible. However, this results in strong saturation effects, which further worsen the signal-to-noise ratio, already poor due to the shortened measurement time.
In sum, the problem of frequency drift correction in 1H spectroscopy has not yet been satisfactorily solved.