Ultra-wideband (UWB) is a term for a class of signals that occupy a substantial bandwidth relative to their center frequencies. For example, under the Federal Communications Commission (FCC) Rules, a UWB signal is one whose fractional bandwidth (the ratio of its bandwidth to its center frequency) is equal to or greater than 0.2, or whose bandwidth is equal to or greater than 500 MHz. This very high bandwidth allows UWB-based radar systems to obtain more information about targets and makes it possible to build radar with better spatial resolution than conventional radar. Therefore, UWB radar devices are widely implemented in radar-based imaging systems, including ground-penetrating radars, wall and through-wall radars, surveillance devices, medical imaging devices, etc. These applications require effective volume visualization based on the obtained signals.
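The FCC criterion stated above can be expressed as a simple check on the band-edge frequencies; the following sketch is illustrative only, and the function name and example bands are not taken from the Rules themselves.

```python
def is_uwb(f_low, f_high):
    """Apply the FCC UWB criterion to a band given by its edge frequencies in Hz."""
    bandwidth = f_high - f_low
    center = (f_low + f_high) / 2.0
    fractional_bw = bandwidth / center
    # A signal qualifies as UWB if its fractional bandwidth is >= 0.2
    # or its absolute bandwidth is >= 500 MHz.
    return fractional_bw >= 0.2 or bandwidth >= 500e6

# A 7.5 GHz-wide band from 3.1 to 10.6 GHz qualifies:
print(is_uwb(3.1e9, 10.6e9))   # True
# A 100 MHz-wide band centered near 2.45 GHz does not:
print(is_uwb(2.4e9, 2.5e9))    # False
```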
The terms “volume visualization” and “image reconstruction” used in this patent specification include any kind of image-processing, volume rendering or other image-reconstructing computing used to facilitate displaying three-dimensional (3D) data on a two-dimensional (2D) image surface.
The problem of presenting 3D data on a radar image display has been recognized in the prior art, and various systems have been developed to provide a solution, for example:
U.S. Pat. No. 5,061,935 (Chogo et al.) discloses a three-dimensional display radar, which comprises a radar transmitter-receiver unit for producing signals indicative of information about a bearing of a target, information about distance to the target and information about the received power reflected from the target, respectively, a display, a three-dimensional coordinate converter for converting the bearing information, the distance information and the received power information into signals indicative of both an X-Y coordinate and the height of the target, a marker generator for generating range marker signals when the target is three-dimensionally represented on a screen of the display, and a video memory having storage addresses corresponding to respective pixels on the display and for storing the received power information at a storage address corresponding to the X-Y coordinate obtained from the three-dimensional coordinate converter and storing therein the marker signals generated from the marker generator, the video memory being further adapted to successively read the received power information and the marker signals so as to deliver image signals to the display.
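The coordinate conversion performed by such a three-dimensional coordinate converter can be sketched as follows. The function name and the convention that bearing is measured clockwise from north are illustrative assumptions; the patent does not fix them.

```python
import math

def to_display_coords(bearing_deg, distance, received_power):
    """Convert a (bearing, distance, power) radar return into (x, y, height).

    Assumes bearing is measured in degrees clockwise from north (the y axis),
    as is conventional on radar displays; this convention is an assumption,
    not a detail taken from the patent.
    """
    theta = math.radians(bearing_deg)
    x = distance * math.sin(theta)   # east-west position on the display
    y = distance * math.cos(theta)   # north-south position on the display
    height = received_power          # echo strength rendered as target height
    return x, y, height

# A target at bearing 90 degrees (due east) and range 10 maps to x = 10, y = 0:
x, y, h = to_display_coords(90.0, 10.0, 3.5)
```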
U.S. Pat. No. 5,280,344 (Witlin et al.) discloses the use of color to exhibit processing results from radar, sonar, spectral estimation, seismic profiling, radio astronomy, bio-engineering, and infrared imaging. The use of color for such raster display presentations has been limited to the coding of amplitude values for a fixed set of hue/luminance colors to convey recognition by a human operator. Hue and luminance are used here independently to convey two orthogonal pieces of low signal-to-noise sensor information simultaneously to an operator for quick and accurate recognition. The net result is an added degree of freedom available on a single display surface, which not only improves operator recognition and reaction time for critical events, but precludes the necessity of a second display presentation for the alternate information and subsequent correlation of two data sets by visual comparison. This invention discloses a system to generate and add a new color dimension, a fourth orthogonal axis to the presented data, in addition to position and luminance levels of a video display. The process adds information independent of the usual gray scale as saturated colors on a monotonic wavelength scale from red to green to blue.
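The idea of encoding two independent quantities in hue and luminance on one display surface can be sketched with the standard library's colorsys module. The mapping ranges and the min-to-max hue span below are illustrative assumptions, not values from the patent.

```python
import colorsys

def dual_encode(amplitude, second_channel):
    """Map two independent unit-range values onto one RGB pixel.

    amplitude drives luminance (the usual gray-scale axis), while
    second_channel drives hue on a monotonic red-to-green-to-blue scale,
    echoing the patent's added color dimension. Both inputs are assumed
    to lie in [0, 1]; this normalization is an illustrative choice.
    """
    hue = second_channel * (2.0 / 3.0)   # 0 -> red, 1/3 -> green, 2/3 -> blue
    lightness = amplitude
    saturation = 1.0                     # fully saturated colors
    return colorsys.hls_to_rgb(hue, lightness, saturation)

# Mid-amplitude returns with second-channel values of 0 and 1 map to
# pure red and pure blue respectively:
red = dual_encode(0.5, 0.0)
blue = dual_encode(0.5, 1.0)
```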
U.S. Pat. No. 5,339,085 (Katoh et al.) discloses a radar display converting a radar signal to radar image information expressed in a three-dimensional orthogonal coordinate system with horizontal, vertical, and depth coordinates related to a certain viewpoint. Terrain and target information is converted to the same coordinate system and combined with the radar image information, producing a realistic three-dimensional display. Clipping is performed in a depth direction to eliminate portions of the radar image disposed behind terrain or target images. Terrain and target images disposed behind the radar image are reduced in intensity, but not clipped. Perspective projection and zoom transformations may also be carried out.
U.S. Pat. No. 5,793,375 (Tanaka) discloses an image processing apparatus for forming a high-quality surface display image at high speed. From raw data input by a medical image diagnosis apparatus, gray-level volume data and binary volume data, in which a region-of-interest is extracted by binarizing, are obtained. The binary volume data is subjected to ray-casting and projected onto a screen. A depth image, formed of the pixels on the screen and giving the distance between the screen and the surface (surface voxels) of a display object, is obtained. The coordinates of the surface voxels are calculated from the depth image. Surface normals are obtained from voxel values of the gray-level volume data and voxel values in the vicinity, and a shaded image is formed on the basis of the surface normals.
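The two steps described above can be sketched for the simple case of rays cast along one volume axis: a depth image found from the binary volume, and a surface normal estimated from the gray-level gradient. The fixed view direction and central-difference gradient are illustrative assumptions; the patent's actual projection geometry may differ.

```python
import numpy as np

def depth_image(binary_volume):
    """Cast one ray per screen pixel along the z axis of a binary volume.

    binary_volume[x, y, z] is 1 inside the region of interest. For each
    (x, y) pixel the returned image holds the z index of the first surface
    voxel hit, or -1 where the ray misses the object entirely.
    """
    nx, ny, nz = binary_volume.shape
    depth = np.full((nx, ny), -1, dtype=int)
    first_hit = binary_volume.argmax(axis=2)   # index of first nonzero along z
    has_hit = binary_volume.any(axis=2)
    depth[has_hit] = first_hit[has_hit]
    return depth

def surface_normal(gray_volume, x, y, z):
    """Estimate a unit surface normal at an interior voxel from the
    gray-level gradient, using central differences of neighboring voxels."""
    gx = (gray_volume[x + 1, y, z] - gray_volume[x - 1, y, z]) / 2.0
    gy = (gray_volume[x, y + 1, z] - gray_volume[x, y - 1, z]) / 2.0
    gz = (gray_volume[x, y, z + 1] - gray_volume[x, y, z - 1]) / 2.0
    n = np.array([gx, gy, gz])
    norm = np.linalg.norm(n)
    return n / norm if norm > 0 else n
```

The shaded image would then follow from a lighting model (for example, Lambertian shading from the dot product of each normal with a light direction), which is omitted here.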
U.S. Pat. No. 6,198,428 (Chogo) discloses a three-dimensionally designed display radar in which two-dimensional image data and three-dimensionally designed image data are depicted in combination in a video memory by the aid of an image controller, and are simultaneously depicted on a screen of a display unit.
U.S. Pat. No. 6,212,132 (Yamane et al.) discloses a three-dimensional radar apparatus comprising a radar transmitting/receiving unit, a three-dimensional polygon-generating unit, and a three-dimensional graphics unit, wherein a radar transmitter/receiver outputs signals concerning orientation information, distance information, and reception intensity information on the basis of a radio wave reflected from a target, and a scan converter is used to convert the signals concerning the orientation information, the distance information, and the reception intensity information into two-dimensional radar image data composed of two-dimensional rectangular coordinates and brightness information of each of picture elements. The two-dimensional radar image data is also inputted into the three-dimensional polygon-generating unit to perform polygon-generating processing on the basis of the two-dimensional rectangular coordinates and the brightness information of each of the picture elements. Three-dimensional radar image data is prepared in the three-dimensional graphics unit on the basis of obtained polygon-based information, and it is accumulated in a frame memory. Thus, a three-dimensional radar image is displayed on a display device.
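The polygon-generating step above, which turns rectangular pixel coordinates plus brightness into a mesh for the 3D graphics unit, can be sketched as a height-field tessellation. The quad-splitting scheme below is an illustrative assumption; the patent does not specify the exact tessellation.

```python
import numpy as np

def image_to_polygons(radar_image):
    """Turn a 2D radar image into a triangle mesh for 3D display.

    Each pixel (i, j) becomes a vertex (i, j, brightness); each 2x2 block
    of pixels is split into two triangles, listed as index triples into
    the flattened vertex list. Treating brightness as height is the idea
    attributed to the patent; the tessellation itself is illustrative.
    """
    rows, cols = radar_image.shape
    vertices = [(i, j, float(radar_image[i, j]))
                for i in range(rows) for j in range(cols)]
    triangles = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = i * cols + j          # top-left vertex of the quad
            b = a + 1                 # top-right
            c = a + cols              # bottom-left
            d = c + 1                 # bottom-right
            triangles.append((a, b, c))
            triangles.append((b, d, c))
    return vertices, triangles
```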
U.S. Pat. No. 6,571,177 discloses a single display providing visualization and interpretation of subtle structural and stratigraphic features of the 3-D data volume. Three substantially parallel surfaces are selected in a 3-D data volume and values of a seismic attribute on each of the three surfaces are encoded onto a Red-Green-Blue (RGB) color scale. The displayed seismic attribute may be the amplitude or one of many commonly used attributes. The 3-D data volume may be defined in terms of seismic times or in terms of seismic depths.
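The encoding of an attribute from three parallel surfaces onto a Red-Green-Blue scale can be sketched as follows. The per-surface min-max normalization is an illustrative choice; the patent does not prescribe a particular scaling.

```python
import numpy as np

def rgb_composite(surface_r, surface_g, surface_b):
    """Encode a seismic attribute sampled on three parallel surfaces as RGB.

    Each 2D attribute array is normalized independently to [0, 255] and
    assigned to one color channel, so that variation of the attribute
    between the three surfaces appears as color variation on a single
    composite display.
    """
    def normalize(a):
        a = np.asarray(a, dtype=float)
        span = a.max() - a.min()
        if span == 0:
            return np.zeros_like(a, dtype=np.uint8)
        return ((a - a.min()) / span * 255).astype(np.uint8)

    # Stack the three normalized surfaces into an (rows, cols, 3) RGB image.
    return np.dstack([normalize(surface_r),
                      normalize(surface_g),
                      normalize(surface_b)])
```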