The usage of medical imaging devices to diagnose and plan treatment for various internal ailments is well known. Often, an imaging device such as an X-ray device, Computed Tomography (CT) device, or Magnetic Resonance Imaging (MRI) device is used to generate one or more initial scans or images of the area of interest. Typically, once an image has been acquired, critical structures (e.g., regions or organs) disposed in the target area are specifically identified and marked so that treatment may be optimally directed. This process may be referred to as "segmentation." Segmentation of 3D medical images is a very common task in radiotherapy treatment planning, and is used for defining important structures in the image, such as organs or tumor volumes. Structures may be readily apparent in the image (e.g., a bladder in a pelvic image), or they may not be apparent, as in the case of a therapy planning structure drawn within regions of the image that have no corresponding anatomy. Although algorithms exist for the automatic detection of some structures, much of the segmentation work must still be performed manually, because not all structures can be found automatically and most automatically detected structures require manual correction.
Usually, the 3D volumetric image is loaded into a software environment and visualized as a stack of parallel 2D images, so-called "image slices" or, more simply, "slices," wherein a user employs a contouring tool to draw closed 2D contours that define a portion of the structure for segmentation. Structures may be delineated via, for example, a boundary (e.g., a contour) or an area. The drawing tool may operate like a pencil, wherein the boundary of an area is drawn in, or alternatively, the drawing tool may operate more like a brush, wherein an area is "painted" by the tool. Those 2D contours are finally combined into a 3D surface segment that describes the boundary of the region of interest.
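The per-slice representation described above can be sketched as a mapping from slice index to a closed 2D contour, with each contour lifted onto its slice plane to form one ring of the 3D surface. The following Python sketch is illustrative only; the variable names, the example slice spacing, and the dictionary-based structure are assumptions for exposition, not features of any particular contouring tool.

```python
# Illustrative sketch: a 3D structure stored as closed 2D contours on
# parallel image slices. The 3.0 mm slice spacing is a hypothetical value.

SLICE_SPACING_MM = 3.0  # distance between adjacent image slices (assumed)

def contour_to_3d(slice_index, contour_2d, spacing=SLICE_SPACING_MM):
    """Lift a closed 2D contour (a list of (x, y) points) onto its slice
    plane, yielding (x, y, z) points that form one ring of the 3D surface."""
    z = slice_index * spacing
    return [(x, y, z) for (x, y) in contour_2d]

# A structure drawn on two slices: each entry maps slice index -> contour.
structure = {
    0: [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)],
    4: [(1.0, 1.0), (9.0, 1.0), (9.0, 9.0), (1.0, 9.0)],
}

# Combine the per-slice contours into a stack of 3D rings, which together
# describe the boundary of the region of interest.
rings = {k: contour_to_3d(k, c) for k, c in structure.items()}
```

In this sketch the gap between slices 0 and 4 is left empty, which is exactly the situation the interpolation functionality discussed below addresses.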
By contouring in a slice-by-slice approach, a user of a contouring tool is able to define a 3D image structure incrementally. However, users typically do not draw a contour on every image slice when defining a 3D structure; rather, they draw a contour every few slices, leaving gaps in between. An interpolation tool is then used to generate one or more contours within the gaps. Interpolation reduces the number of slices with which a practitioner needs to interact, potentially improving the efficiency of using the contouring tool. After drawing one or more contours on, for example, every fourth slice of an image, the user can run an interpolation algorithm to fill the gaps (e.g., generate contours on the intervening planes) and review the result. In addition to interpolation, some contouring tools provide the functionality of contour extrapolation. Extrapolation can generate approximate contours on neighboring slices outside the range of image slices containing user-generated contours; for example, after drawing on one slice, the user can run an algorithm that extrapolates the contour to one or more empty neighboring slices.
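One simple way to fill such a gap is to blend corresponding points of the two user-drawn contours linearly. The sketch below is a minimal, hypothetical illustration of this idea, not the method of any specific tool; it assumes both contours already share the same point count and ordering, whereas production interpolation algorithms must solve this point-correspondence problem explicitly.

```python
# Minimal sketch of contour interpolation between two user-drawn slices.
# Assumes both contours have the same number of points in the same order;
# real contouring tools establish this correspondence themselves.

def interpolate_contour(c_a, c_b, idx_a, idx_b, idx_mid):
    """Linearly blend corresponding points of the contours on slices
    idx_a and idx_b to approximate the contour on slice idx_mid."""
    t = (idx_mid - idx_a) / (idx_b - idx_a)  # blend weight, 0 < t < 1
    return [
        ((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
        for (xa, ya), (xb, yb) in zip(c_a, c_b)
    ]

c0 = [(0.0, 0.0), (10.0, 0.0), (10.0, 10.0), (0.0, 10.0)]  # drawn on slice 0
c4 = [(2.0, 2.0), (8.0, 2.0), (8.0, 8.0), (2.0, 8.0)]      # drawn on slice 4

# Fill the gap: generate approximate contours on slices 1 through 3.
filled = {i: interpolate_contour(c0, c4, 0, 4, i) for i in (1, 2, 3)}
```

Extrapolation to an empty slice outside the drawn range could be sketched the same way by allowing the blend weight to fall outside the interval (0, 1), although practical extrapolation algorithms are typically more conservative.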
However, in conventional image contouring environments, both interpolation and extrapolation functions are implemented as tools separate from the contouring tool. A user of a conventional contouring tool is therefore forced to switch repeatedly between contour drawing and interpolation or extrapolation modes in order to generate and refine a 3D structure.