Three-dimensional visualization has gained popularity in medical applications since the introduction of computed tomography (CT) several decades ago, and it is likewise used in magnetic resonance (MR) imaging. Three-dimensional data sets are less common in ultrasound imaging because of two major obstacles. First, the data in most cases are acquired by free-hand B-mode scans, which do not provide sufficiently accurate position information to place individual two-dimensional scans (slices) precisely into a common three-dimensional coordinate space. Second, ultrasound data are inherently noisier than CT and MR data sets, so traditional surface visualization techniques do not produce good results.

The last decade has brought many advances in hardware and software that allow real-time visualization of three-dimensional data sets by so-called volume rendering, which goes directly from a three-dimensional data set to a two-dimensional image display, bypassing the creation of surfaces. One volume rendering technique is known as maximum intensity projection (MIP). The MIP technique projects three-dimensional intensity values onto a two-dimensional image plane by assigning to each image pixel the maximum intensity found along the line of sight that runs from the eye point through that pixel and into the volume. Combined with animation, this method can produce a true three-dimensional impression on the monitor.

A more computationally demanding technique is known as compositing. This technique models the physical phenomenon of light propagating through a semi-translucent, semi-opaque medium, which is recreated from the three-dimensional data set with the aid of specially designed transfer functions.
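The two rendering techniques described above can be sketched in a few lines of NumPy. This is a minimal illustration rather than an optimized renderer: the projection is taken along one volume axis instead of along arbitrary view rays, and the function names and the per-voxel opacity input are illustrative assumptions, not part of the method described here.

```python
import numpy as np

def mip(volume, axis=2):
    """Maximum intensity projection: keep the largest value along the ray axis."""
    return volume.max(axis=axis)

def composite(volume, opacity, axis=2):
    """Front-to-back alpha compositing along one axis.

    `opacity` holds per-voxel alpha values in [0, 1], typically produced
    by a transfer function applied to the raw intensities.
    """
    vals = np.moveaxis(np.asarray(volume, float), axis, 0)
    alph = np.moveaxis(np.asarray(opacity, float), axis, 0)
    image = np.zeros(vals.shape[1:])
    transparency = np.ones(vals.shape[1:])  # fraction of light not yet absorbed
    for v, a in zip(vals, alph):
        image += transparency * a * v       # contribution of this slab
        transparency *= 1.0 - a             # attenuate what passes through
    return image
```

With all opacities set to 1 the composite degenerates to the first slice along the ray, and with a well-chosen transfer function it approximates light transport through the semi-translucent medium described above.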
Some medical applications acquire three-dimensional volume data with a transducer that rotates around an axis orthogonal to the transducer array. The volume "swept" by these two-dimensional B-mode scans is a cylinder. Because the two-dimensional scans do not lie parallel to one another, it is difficult to visualize three-dimensional object structures from the individual scans alone, and a volume visualization technique would be desirable. Contemporary software and hardware perform volume rendering efficiently, but they require that the data be represented as a rectilinear three-dimensional array; therefore, conversion from a cylindrical coordinate system to a rectilinear coordinate system is required. Although this conversion is not difficult to compute, an important practical complication is that there is always some offset between the axis of rotation and the midpoint of the transducer array. A need exists for a method of calculating this offset from a single three-dimensional volume scan. The computed offset can be used in the algorithm that converts from the cylindrical coordinate system to the rectilinear one, and also in the transducer manufacturing process, to position the transducer array exactly on the rotational axis.
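The geometry behind the conversion can be illustrated by the forward mapping of one slice sample into Cartesian space. The sketch below is a simplified model under stated assumptions: each slice is treated as a plane through the rotation axis, the lateral coordinate `u` is measured from the array midpoint, and the offset is taken as a known scalar (estimating that offset from a volume scan is the subject of the method described above). All names are illustrative.

```python
import numpy as np

def slice_sample_to_cartesian(u, depth, theta, offset):
    """Map one sample of a rotated B-mode slice into rectilinear coordinates.

    u      -- lateral position of the sample, measured from the array midpoint
    depth  -- axial position along the ultrasound beam
    theta  -- rotation angle of the slice about the rotation axis (radians)
    offset -- lateral distance from the array midpoint to the rotation axis;
              with offset = 0 the midpoint lies exactly on the axis
    """
    r = u - offset              # signed radius measured from the rotation axis
    x = r * np.cos(theta)
    y = r * np.sin(theta)
    z = depth                   # depth is unchanged by the rotation
    return x, y, z
```

Resampling the cylindrical acquisition into a rectilinear array amounts to inverting this mapping for every target voxel; an uncorrected offset shifts `r` by a constant for every slice, which is why the offset must be known before the conversion can be computed accurately.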