Rapid advancements in imaging technologies have led to the need for digital images of ever-higher precision acquired at high speed. This need is particularly acute in technical applications, such as computed tomography and radiology, magnetic resonance imaging, teleradiology, digital cinematography and the like, where analog images are rapidly scanned into or converted to some form of television signal or image before digital conversion and storage.
The conventional process for converting analog television images into digital images is straightforward. Each scan line of the television image is periodically sampled or "clocked" at a rate sufficient to produce the desired resolution. Each sample is converted to a unique binary code representing the pixel value for that sample (the samples are said to be "quantized") by an analog-to-digital (A/D) converter, and the samples are then stored or displayed in a format representative of the original analog image. If the images are sampled at a sufficiently high rate, if the quantizing is sufficiently accurate, and if the images are displayed in rapid succession or updated frequently, then this conventional process appears adequate for ordinary viewing by the forgiving human eye. However, if high-precision, high-fidelity images are to be scrutinized for technical applications such as computed tomography, radiology or color imaging, then the conventional process is plagued with hitherto inadequately resolved difficulties.
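The sample-and-quantize process described above may be sketched as follows. This is a minimal illustration only; the waveform, clock period, sample count and bit depth are assumed for the example and are not taken from any actual television standard.

```python
# Minimal sketch of conventional A/D conversion of one scan line:
# clock the analog signal at a fixed period and map each sample to
# a binary code (pixel value). All parameters here are illustrative.
def sample_and_quantize(signal, period, n_samples, bits=8):
    """Periodically sample `signal` and quantize each sample to 2**bits codes."""
    levels = 2 ** bits
    codes = []
    for i in range(n_samples):
        t = i * period
        v = signal(t)                      # instantaneous analog value in [0, 1)
        v = min(max(v, 0.0), 1.0 - 1e-12)  # clamp to the converter's input range
        codes.append(int(v * levels))      # quantize: one binary code per sample
    return codes

# Example line: a bright bar (analog level 0.9) on a dark background (0.05).
line = lambda t: 0.9 if 0.3 <= t < 0.6 else 0.05
pixels = sample_and_quantize(line, period=0.01, n_samples=100)
```

Each entry of `pixels` is the binary code for one sample; with 8-bit quantizing the bar samples map to code 230 and the background to code 12.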
Precision digital conversion of analog images requires not only that the analog values be accurately converted but also that each sample of each line occur at exactly the same point along the line as the samples of the previous line; otherwise the spatial and tonal representation of the image will be corrupted. This means that the sampling clock must be extremely stable, must sample on the same phase as the previous line, and must sample at exactly the same time relative to the start of the line (each line in a television image "starts" with a synchronizing pulse). The necessity for these timing requirements is illustrated in FIGS. 1A-1E, where an analog television image 4 is shown scanning the image of two bars, one white (W) and the other gray (G). Three lines 1, 2, 3 are shown in the composite image waveform 5, producing the analog image at 8. The vertical synchronizing pulse is shown at 6, while the horizontal pulses are shown at 7. A conventional prior-art digital sampling clock is shown at 9, which converts the composite waveform 5 into the digital image 10. This sampling clock 9 has a constant frequency and therefore suffers a phase sampling error 12, because samples may be taken early or late relative to the previous or successive line. If the samples are taken on the falling edge of the clock cycles, for example, then it is easily seen that the sample at 11 is correctly taken on the falling edge of the clock 9. The sample at 12, however, is erroneously taken late, since the clock 9 is rising at the same point in line 2 of the composite image waveform 5 as it was falling in line 1. By the third line 3 the clock has returned to the correct relative phase 13 and the sample is taken at the correct time. The phase disparities in the clock 9 produce the corrupted image 10, where the second line is shown spatially displaced from its correct position. This problem is present in all conventional sampled television systems.
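The phase error of FIGS. 1A-1E can be reproduced in a toy simulation (not the apparatus of the figures). The scene, bar positions and clock step below are assumed values; the point is only that a fixed-frequency clock not locked to the line start samples successive lines at different offsets, displacing the bars exactly as in the corrupted image 10.

```python
# Toy model of FIGS. 1A-1E: one scan line containing a white bar and a gray
# bar, sampled by a clock whose phase drifts line to line. All positions and
# the clock step are assumed for illustration.
def scene(x):
    """Analog value at position x along the line (bar positions assumed)."""
    if 0.22 <= x < 0.37:
        return 255        # white bar (W)
    if 0.57 <= x < 0.72:
        return 128        # gray bar (G)
    return 0              # black background

def sample_line(phase, step=0.05, n=20):
    """Sample one line with the clock starting at the given phase offset."""
    return [scene(phase + i * step) for i in range(n)]

line1 = sample_line(phase=0.0)   # clock in phase with the line start (as at 11)
line2 = sample_line(phase=0.03)  # clock late on the second line (as at 12)
line3 = sample_line(phase=0.0)   # clock back in correct phase (as at 13)
```

Line 2's bars land one sample position to the left of lines 1 and 3, which is precisely the spatial displacement shown in the corrupted digital image 10.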
The usual solution to this problem is to produce images using very high-frequency sampling and then to average the results, in the hope that the errors will go unnoticed. It will be seen that as the sampling rate is increased, the visible increment of the error decreases--but the error is always present in conventional systems. This produces visually "adequate" images suitable for ordinary viewing, because the images are constantly moving and updated rapidly. However, such images cannot be considered accurate or used for highly demanding applications, such as computed image reconstruction, machine vision and the like, particularly where the images are to be used for some precise comparative purpose. This is because the phase disparities introduce inappropriate values, or "phase noise," into the image. If the samples are averaged, then the gray-scale values and spatial relationships lose precision; if the samples are not averaged, then they are potentially in the wrong spatial locations, producing inappropriate tonal values. In either event, phase noise is a pervasive image detriment.
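The oversample-then-average workaround can be sketched as follows, with assumed numbers (a single bar, 4x oversampling, a small clock phase error). The sketch shows the trade described above: averaging does not eliminate the phase error but converts it into erroneous gray-scale values at the bar edges.

```python
# Sketch of oversampling at 4x the pixel rate and averaging each group of
# four samples into one pixel. Bar position, step and phases are assumed.
def scene(x):
    return 255 if 0.22 <= x < 0.37 else 0   # one white bar on black

def oversample_and_average(phase, step=0.0125, group=4, n=80):
    """Sample at 4x the pixel rate, then average each group into a pixel."""
    raw = [scene(phase + i * step) for i in range(n)]
    return [sum(raw[i:i + group]) / group for i in range(0, n, group)]

line_a = oversample_and_average(phase=0.0)    # clock locked to the line start
line_b = oversample_and_average(phase=0.008)  # small phase error on next line
# The bar-edge pixels now carry fractional gray values that differ between
# the two lines: the spatial (phase) error has become a tonal error.
```

The interior of the bar is still 255 on both lines, but the edge pixel of `line_a` averages to 127.5 while the same pixel of `line_b` averages to 191.25: phase noise has corrupted the tonal values even though the sampling rate was quadrupled.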
Even aside from the burdens of high-frequency design, these excessively high sampling rates produce a torrent of other problems: great amounts of largely redundant data must be handled by the computer and then the data must be reconfigured into a proper image format. All of this takes time and greatly slows down the imaging process.
To conserve computer resources, these large, "oversampled" images are often greatly compressed, and high compression ratios mean that small details are sacrificed, while slowing the process still further. When the images are then reconstituted and expanded, the fine details are lost and artifacts appear, so the very purpose of creating the images has been compromised. In this connection, it is interesting to note that images with phase noise always compress less successfully than images without phase noise. The reduction of noise in digital images for broadcast is extremely important because such digital images are encoded in an MPEG (Moving Picture Experts Group) format, which encodes the images based upon the differences between successive images. It is therefore imperative to assure that the difference between images is predominantly related to scene content and not noise content, or the image quality suffers greatly. Compressed images with phase noise are larger because the noise artifacts make the image more complex. In all events, the images are unsuitable for precise scrutiny, and the oversampled "high resolution" conventional images are not actually of higher resolution but simply of higher data volume, because the high sampling rate serves only to mask image errors. The high sampling rate produces enormous images with "empty" resolution that does not improve the quality of the image record. This renders conventional digital images unsuitable for many applications and is a primary reason that digital radiology, for example, has been disappointing to many physicians.
The necessity for phase coherence of the sampling clock is illustrated at 10 of FIG. 1D, where a sampling clock with a phase discrepancy on successive horizontal lines of the image has produced a corrupted image. Portions of the image (the second line of image 10) are shown with spatial corruption--shifted right or left--because samples of the original image were taken at different times relative to adjacent lines. Very high sampling rates will make the discrepancy less apparent, but even if the image lines are shifted to compensate, the image can never be accurate, because inappropriate portions of the image were sampled, thereby also corrupting the tonal values in the image.