1. Technical Field
The present invention relates in general to data processing systems and in particular to data processing systems with graphics displays. Still more particularly, the present invention relates to accurate rendering and display of three dimensional volumes utilizing a data processing system having a three dimensional graphics adapter and a high resolution display.
2. Description of the Related Art
Graphics image displays have progressed from depicting a flat, two dimensional ("2D") representation of an object to depicting a 2D simulation of a three dimensional ("3D") solid. Construction of 3D objects may be done through the utilization of so-called "wireframe modeling," in which a wireframe model is a construct of the object utilizing lines to render the edges of the object. A surface can be constructed by shading or filling in the wireframe representation to give the appearance of a solid, three dimensional object.
Surface detail is improved by utilizing a technique called texture mapping. Texture mapping is the process of adding patterns and photo-realistic images to displayed objects. Typically, applications store descriptions of primitives (points, lines, polygons, etc.) in memory that define components of an object. When a primitive is rasterized (converted from a mathematical element to an equivalent image composed of pixels on a display screen), a texture coordinate is computed for each "pixel" (short for picture element). Texture coordinates assigned to the vertices of a primitive are interpolated to calculate a coordinate for each pixel utilized to fill the polygon. A texture coordinate is utilized by the texture mapping engine to look up "texel" (short for texture element, representing stored texture values) values from an enabled texture map. At each rendered pixel, several texels may be utilized to define one or more surface properties, such as shading or color, at that pixel.
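The per-pixel texture lookup described above can be sketched in code for illustration. This is a minimal, hypothetical example (the function names and the barycentric-weight interface are assumptions, not taken from the text): vertex texture coordinates are interpolated to a per-pixel coordinate, which is then used to fetch a texel.

```python
# Illustrative sketch: interpolate per-pixel texture coordinates from a
# triangle's vertex coordinates, then fetch a texel (nearest-neighbor).
# Names and interfaces are hypothetical.

def interp_tex_coord(bary, vert_uvs):
    """Interpolate a (u, v) texture coordinate from a pixel's barycentric
    weights relative to the triangle's three vertices."""
    u = sum(w * uv[0] for w, uv in zip(bary, vert_uvs))
    v = sum(w * uv[1] for w, uv in zip(bary, vert_uvs))
    return u, v

def lookup_texel(texture, u, v):
    """Nearest-neighbor texel fetch; u and v lie in [0, 1]."""
    h = len(texture)
    w = len(texture[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return texture[y][x]
```

In practice several texels (e.g., the four nearest neighbors) would be blended at each pixel, as the text notes.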
Texture mapping hardware typically supports width, height and depth sizes that are power-of-two (a term describing data measured in units of 2^n, where n is a non-negative integer). A texture map is basically a one, two or three dimensional image composed of elements (texels) that can have one, two, three or four components--R, G, B and A. Texture coordinates are floating point numbers between 0 and 1 and are utilized to determine the proper coordinates at which to begin adding texture to an object. Texture coordinates also determine how much texture to add when moving pieces of the texture from texture memory to the computer display.
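The power-of-two constraint described above is commonly checked with a simple bit test, sketched here for illustration (the function name is an assumption):

```python
# Illustrative sketch: test whether a texture dimension is a power of two
# (2^n for non-negative integer n). A power of two has exactly one bit
# set, so n & (n - 1) clears that bit and yields zero.

def is_power_of_two(n):
    return n > 0 and (n & (n - 1)) == 0
```

A dimension such as 57 fails this test, which is exactly the situation the scanner data discussed below presents.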
A texture mapping engine utilizes special, high speed dedicated memory and maps data contained in the dedicated memory to the graphics frame buffer. Texture memory is usually contained on a removable graphics card in the data processing system. The graphics card is limited in size since it has to fit within the data processing system housing. This limits the amount of texture memory available on the adapter due to space and cost limitations. A typical medical imaging system such as a Computed Tomography ("CT") scanner, which generates high definition three dimensional data of hard tissue in patients using multiple X-ray exposures, usually generates 32 megabytes or more of data. Magnetic Resonance Imaging ("MRI") devices, which perform best on soft tissue scans, may generate the same amount of data or more per scan. Because of limited fast memory, an application may break the data into eight portions of four megabytes each and load each portion, one at a time, into texture memory to process the data. Processed data is then sent to a frame buffer where the data is assembled and finally scanned by a digital-to-analog converter and sent to a high-resolution (1280×1024 or more) computer display.
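The portion-by-portion loading scheme described above can be sketched as a simple chunking loop. This is an illustration only; the 4 MB portion size comes from the text, while the function name and generator interface are assumptions:

```python
# Illustrative sketch: stream a large scan (e.g., 32 MB) through limited
# texture memory (4 MB, per the text) one portion at a time.

TEXTURE_MEMORY_BYTES = 4 * 1024 * 1024

def portions(data, chunk=TEXTURE_MEMORY_BYTES):
    """Yield successive portions of the raw scan data, each sized to fit
    in texture memory; the final portion may be smaller."""
    for start in range(0, len(data), chunk):
        yield data[start:start + chunk]
```

A 32 MB scan would yield exactly eight 4 MB portions, matching the example in the text.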
There are drawbacks to traditional rendering, where only the surface of an object can be displayed. The interior of the rendered object, if displayed in a sectional view, is homogeneous; interior details are not shown. Volume rendering (capable of displaying all contents of a displayed volume), together with a 3D texture map, is utilized to display information derived from the interior of the object.
Volume rendering, utilizing the proper algorithms, may be utilized to reveal interior details of a 3D image. For example, a human head may be displayed on a computer screen as a two dimensional photograph. The head may also be reproduced utilizing a wireframe representation and texture mapping to produce a simulated three dimensional surface. The photograph, as well as the simulated three dimensional solid, would reveal surface features of the head, such as hair, nose, ears, eyes, etc. However, a volume rendering of the head may be manipulated to display surface features as translucent, and then reveal bone, brain, blood vessels, etc., as solid and simulated in three dimensions. The resulting image has the quality of a volume composed of a mixture of materials with varying translucence and opacity.
In order to implement manipulation of a rendered simulated 3D image, each volume element ("voxel," which is similar to a pixel, with a third, depth, dimension displayed) in the volume rendering display is assigned a numerical value based on its location within the volume. A numerical value may be associated with a color and an opacity at that particular point. The numerical value may be assigned an arbitrary value between zero and one, e.g., 0.1. In the case of opacity, if the opacity scale ranks 1 as totally opaque and 0 as transparent, a value of 0.1 at that particular point would indicate a cloudy, nearly transparent material. The set of points with equal numerical values on the volume is termed an iso surface. The iso surface value may define a specific structure in the volume, such as the cornea of the eye or a bone. Additionally, an opacity level may be arbitrarily attached to all scalar fields. If opacity for the skull were set to a low value, it would appear translucent and objects within the skull having higher opacities would be more visible. Volume rendering of the head, with operator determined opacity values, depicts boundaries where differing opacities form level surfaces depicting various objects within the head.
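The voxel classification and iso surface concepts above can be sketched as follows. This is an illustrative example only; the function names, the dictionary representation of the volume, and the particular opacity values are assumptions, not part of the described system:

```python
# Illustrative sketch: map each voxel's scalar value to an opacity in
# [0, 1] (0 = transparent, 1 = totally opaque, per the text), and collect
# the iso surface: the set of voxel positions sharing a chosen value.

def classify(voxels, opacity_table):
    """Assign an opacity to each voxel via a lookup table keyed on the
    voxel's scalar value; unlisted values default to transparent."""
    return {pos: opacity_table.get(val, 0.0) for pos, val in voxels.items()}

def iso_surface(voxels, iso_value, tol=1e-6):
    """Voxel positions whose scalar value equals the iso value
    (within a small tolerance)."""
    return {pos for pos, val in voxels.items() if abs(val - iso_value) <= tol}
```

Setting the opacity for one scalar value (say, skull) low while leaving interior structures high reproduces the translucent-skull effect described in the text.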
Volume rendering is especially useful in CT, MRI and seismic scanners. Rendering engines, for handling data received from these devices, require data with dimensions that are pre-defined and usually expressed as a power-of-two (see above). This is a problem because texture data derived from the aforementioned CT, MRI and seismic scanners is usually not power-of-two in one or more dimensions.
In most three dimensional rendering engines, the width, height and depth of the raw data are processed utilizing dimensions with power-of-two limitations (historical, rather than hardware or software limitations). Generally, width and height coordinates of data are supplied with power-of-two dimensions, but the depth is seldom available in power-of-two dimensions. Data may be acquired from a scan and processed for utilization by a computer in either one large block of three dimensional data or multiple slices of two dimensional data (for present purposes, three dimensional data is discussed). For example, sensor data received by a rendering engine may be 128 by 128 by 57 units. Utilization of the sensor data presents a problem because of the non-power-of-two dimension of the depth. The usual method for rendering an image from data having non-power-of-two measurements is to re-sample the data so that it measures 128 by 128 by 32 or 64--a pre-processing step that forces data to fit limitations of the graphics rendering engine. However, forcing data to fit limits prescribed by the graphics engine may cause faulty representation of the raw data and artifacts.
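The re-sampling step criticized above can be sketched to show exactly how it distorts the raw data. This illustration uses the 57-slice depth from the text; the nearest-neighbor scheme and function name are assumptions (real pre-processors may filter rather than duplicate, but the distortion argument is the same):

```python
# Illustrative sketch: nearest-neighbor re-sampling of a 57-slice depth
# axis to the nearest power of two (64). Stretching to 64 slices forces
# some original slices to be duplicated; shrinking to 32 would drop
# slices outright. Either way the raw data is no longer faithfully
# represented.

def resample_depth(slices, new_depth):
    """Map each output slice index back to the nearest source slice."""
    old = len(slices)
    return [slices[min(int(i * old / new_depth), old - 1)]
            for i in range(new_depth)]
```

Stretching 57 slices to 64 duplicates seven of them; these duplicated (or, when shrinking, deleted) slices are the "faulty representation" and artifacts the text warns about.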
FIGS. 3A-3C depict an existing method of rendering non-power-of-two segments. Perspective 300, in FIG. 3A, represents a volume of raw data to be rendered. A digital data segment comprising five subvolume elements ("voxels"), illustrated by digital data segment 301, is representative of data that would not be processed without returning an error indication. The five voxels cannot be loaded into texture memory since voxel 306, with a size of 0.5, is not power-of-two and would produce an error. As depicted in the side view of digital data segment 301, FIG. 3B, voxels 302 and 304 are 2.0, which is a power-of-two size (2^1), and voxel 306 is a non-power-of-two remainder data segment.
FIG. 3C illustrates digital data segment 301 after processing and prior to rendering. A standard method for rendering voxel 306 requires that remainder data, represented by data voxel 306, be pre-processed (or re-sampled) to fit power-of-two dimensions. As FIG. 3C illustrates, voxel 306 is pre-processed so that the volume now consists of two power-of-two voxels, 302 and 304. The pre-processed data now fits the requirement of a power-of-two dimension because the non-power-of-two voxel 306 has been reduced to nothing. Voxel 306 is essentially deleted, and original raw data has been eliminated from the rendering because of the sampling (in this instance scaling) mechanism.
MRI and CT scans, as mentioned earlier, are excellent applications of three dimensional or volume rendering. CT and MRI scans provide quality images of different structures. By merging the CT and MRI images, a picture of a part of the anatomy, say a person's brain, becomes more complete. A brain surgeon is able to properly plan an operation based on the merged images. However, if the data has been pre-processed to either stretch or shrink the data to fit power-of-two dimensions, critical data may be distorted or omitted. Addition, reduction and/or deletion of data may have serious consequences. For example, breast cancer, if detected early, may be treatable. If a three dimensional image taken of a breast with a small spot of cancer is pre-processed, the spot may be removed from the volume rendering because that particular area of data is non-power-of-two. The ramification may be that the breast cancer goes undetected and an incorrect diagnosis is given.
It would be desirable, therefore, to provide a method of display that would accurately display three dimensional data without scaling or deleting the original raw data.