The prior Related Applications describe in detail various signal processing methods and apparatus which are useful in reducing the noise components of signals from synthesized apertures, and most particularly from signals generated by multiple-transducer, ultrasonic echographic equipment. These applications hereby are specifically incorporated by reference into the present application.
The present application is directed to an apparatus and method for implementing the noise-reduction processes and apparatus of said Related Applications in a real-time, signal processing system. As used herein, the expression "real-time" shall mean the display of at least about 15 complete image frames per second. It will be understood that faster frame rates are desirable and smooth the movement of the images of moving targets or objects.
Noise-reduction processes, such as side lobe subtraction and hybrid mapping, present a substantial computing burden because they are based upon iterative techniques. For example, a side lobe subtraction process employing the CLEAN algorithm normally commences with the step of identifying the brightest or highest intensity data point or pixel on the entire data map or image frame. For a typical video display terminal, each scan line or row in the video display includes over 400 pixels, and a typical display screen includes over 400 scan lines or rows. Thus, there are more than 160,000 pixels or data points on a display screen that must be examined for their intensity and compared to each other to find the highest intensity pixel.
Sonographic imaging usually employs polar coordinates to produce a sector-shaped image display. Thus, in the rows or scan lines of pixels close to the transducer array, the pixels at the sides of each row or line will not display data. The complete image map in sonographic imaging, therefore, can have fewer than 160,000 data points to compare for the brightest point, but it still contains a very substantial and burdensome number of data points. Moreover, in higher resolution displays, for example, VGA displays with 480 rows and 640 pixels per row, the potential number of data points increases to over 300,000.
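The pixel counts set out above can be verified by simple arithmetic; the display dimensions below are those given in the text:

```python
# Data-point counts for the display sizes discussed above.
rows, cols = 400, 400          # typical video display terminal
print(rows * cols)             # 160,000 pixels to scan per frame

vga_rows, vga_cols = 480, 640  # higher-resolution VGA display
print(vga_rows * vga_cols)     # 307,200 pixels, i.e. over 300,000
```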
Once the highest intensity pixel on the map is identified, noise around the brightest data point can be deconvolved or subtracted using a noise-reduction algorithm, such as a CLEAN algorithm or a maximum entropy algorithm. This noise-reduction step involves a further computing burden, but the CLEAN algorithm is the least burdensome and effects substantial noise reduction. After the first noise-reduction subtraction using the CLEAN algorithm, the map is again scanned for the next brightest, highest intensity, data point, and noise is deconvolved from around that data point. As many as 600 or more iterations may be employed to reduce noise in the mapped image.
This noise-reduction process, therefore, can be adapted and is readily suited for processing image data which does not need to be displayed in real-time. For example, a single frame can be processed to provide the viewer with a snapshot of the target having significantly increased clarity and resolution. Solving the computing burden in a manner allowing real-time displays, however, is much more difficult, particularly if a computer or signal processing apparatus of modest size and cost is to be employed.
The apparatus and method of the present invention are particularly well-suited to sonographic and echographic applications where real-time display is most important. The present process and apparatus may have application in other areas, however, and by describing the preferred embodiment in relation to ultrasonic imaging, it is not meant to limit the present process and apparatus.
Briefly, by way of further background to the preferred embodiment, the basic noise-reduction problem found in sonography can be described. Additional detail and theory are found in the Related Applications.
Sonography, as well as other types of imaging, uses a plurality of transducers to illuminate a physical imaging space and to receive return echo signals from features within that space. The transducers used are not perfectly directional. That is, for any given pointing angle and moment in time they illuminate and receive echoes from some three-dimensional volume of the imaging space, rather than an ideal point in space. The fact that the transducers see a volume, rather than a point, limits their resolution to physical features that are larger than the volume that the transducer sees at a particular moment in time.
A sonographic image is commonly displayed in polar (r, θ) coordinates in two dimensions, with one dimension being angular (related to the transducer pointing angle) and the other being radial (related to distance from the transducer). In current sonographic imaging systems the radial resolution is quite good compared to the angular resolution. The angular resolution consists of two components, one of which is "into the image" in a two-dimensional display and thus cannot be seen in the two-dimensional presentation. The "into the image" dimension can still degrade the image, but the resolution of commonly used sonographic transducers is considerably better in that "into the image" dimension than in the dimension that can be seen.
Consider a single sonographic transducer that has a directional sensitivity pattern 300, as shown in FIG. 1A. This transducer "sees" in three directions for any given pointing angle. Its directional sensitivity is referred to as its "beam pattern" or "point spread function," which may be schematically represented as a three element pattern with a main lobe 301 and two equal side lobes 302 that are 0.5 times as sensitive as main lobe 301. FIG. 1B shows a small physical feature or target object 303. If physical feature 303 is examined by three side-by-side transducers having beams 300a, 300b and 300c (FIG. 1C), the resulting image can be seen in FIG. 1D.
Two "ghost" features 304, observed to the right and left of the central bright point 305, are the result of the side lobes 302a and 302c of the adjacent transducers seeing actual feature 303, while central transducer main lobe 301b creates central bright point 305 in FIG. 1D. Side lobe subtraction using the CLEAN algorithm is one means whereby an image that is degraded by an imperfect transducer, or by a synthesized aperture formed by a plurality of transducers, may be improved when the transducer's, or the transducer array's, directional pattern can be determined or reasonably approximated. If the transducer or array beam pattern is known, and an image such as FIG. 1D is observed, then the ghost features may be removed from the image. A side lobe subtraction of the ghosts can be accomplished with an algorithm such as the CLEAN algorithm, which process is also known as "deconvolution" of the noise or "beam subtraction."
A real image is much more complex, since it has many physical features, and the beam element pattern is much more complex, since it is formed by a large array of transducers. The simple ghost features, therefore, become a generalized image blur. The CLEAN algorithm uses an iterative process on the real image, also called the "dirty image," to progressively remove the blurring. The CLEAN algorithm finds the brightest pixel in a dirty image and assumes that there is a real physical feature at that location. It then deconvolves a percentage of the beam pattern (for example, 30 percent) out of the dirty image centered on that bright pixel. The CLEAN algorithm stores where the bright pixels were found and what their intensities were (times the same percentage) in order to reconstruct the image when the deconvolution phase is complete.
When the deconvolution phase is complete, the original dirty image has been reduced in intensity to some noise floor (which may be 0), and a second image (comprised of the "clean components") has been built up, consisting of the found bright pixels and their intensities times the percentage or gain. The noise floor or remainder usually is not used to reconstruct the image.
The clean components image can be viewed and recognized as a deblurred version of the original dirty image. However, the clean components image generally has a grainy appearance and is often unsatisfactory for use in that form. The preferred final step in the CLEAN algorithm, therefore, is to smooth the clean components image by convolution with a beam element pattern that renders the image more pleasing to the viewer. This final image, in which the beam element pattern has been convolved back into the clean components, is called the "clean image."
The clean image may contain some features that are not actually physical features, called "artifacts." These artifacts can be reduced or eliminated by a procedure termed "masking." Masking is not part of the CLEAN noise-reduction algorithm itself, but is an additional step after noise-reduction processing. When noise reduction is complete, the clean image is multiplied, on a pixel-by-pixel basis, by a copy of the original dirty image that was saved for this purpose, and is divided by a scaling factor, in order to enhance the contrast of the resulting masked image frame. The final result is a cleaned and masked image. The masking process is described in much greater detail in Related application Ser. No. 07/815,509, previously incorporated herein by reference, and it will not be set forth in detail herein.
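The masking step described above amounts to a pixel-by-pixel multiply followed by a scale. A minimal sketch, with illustrative pixel values and an assumed scaling factor of 1.0 (the actual factor is detailed in the Related Application):

```python
import numpy as np

def mask(clean_image, dirty_image, scale):
    """Multiply the clean image by the saved dirty image, pixel by pixel,
    and divide by a scaling factor to enhance contrast."""
    return clean_image * dirty_image / scale

# Illustrative values: artifacts in the clean image are suppressed where
# the saved dirty image was dim, and preserved where it was bright.
clean_image = np.array([0.0, 0.25, 1.0, 0.25, 0.0])
dirty_image = np.array([0.1, 0.5, 1.0, 0.5, 0.1])   # saved original copy
masked = mask(clean_image, dirty_image, scale=1.0)  # scale is assumed
print(masked)
```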