Microprocessor-based computer systems are becoming increasingly capable of image-processing tasks that have thus far been the province of mainframe computers and special-purpose hardware. Nonetheless, the large memory and computational requirements of image processing place a premium on efficient utilization of computer memory and effective organization of computations.
Many image-processing tasks require several stages of processing, each of which generates an image as its data product, which then serves as the input to the next stage. For example, consider the processing of an image to produce an edge map, as is frequently required in industrial quality-control applications or in machine vision. First an image is acquired using an electronic camera and stored in the random-access memory of the computer. This image is then convolved with an edge-enhancement mask (such as a 3×3 Laplacian mask, for example) to yield an edge-enhanced image. Next, the pixel intensities of this image are tabulated to produce a histogram, which in turn is analyzed to select one or more threshold intensity values. The individual intensity values of the edge-enhanced image are then compared to the selected threshold values to yield an edge map, which is usually a binary image: each pixel intensity is compared to a single selected threshold, and the resulting image contains only two levels of intensity, one representing intensities above the threshold and the other representing intensities below it. Owing to noise in the original image, irregularities in illumination, or other problems, the edges in the edge map may not be geometrically continuous, and one or more cycles of dilation and erosion of the edge map may be required to yield an edge map suitable for subsequent analysis.
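The staged pipeline described above can be sketched in Python with NumPy. This is a minimal illustration only: the function names, the naive convolution loop, and the mean-based threshold (standing in for the histogram analysis described) are assumptions for the sake of the example, not part of any particular system.

```python
import numpy as np

def convolve2d(image, kernel):
    """Naive 2-D convolution; borders are handled by edge replication."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(image, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros_like(image, dtype=float)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# A 3x3 Laplacian edge-enhancement mask of the kind mentioned in the text.
LAPLACIAN = np.array([[ 0, -1,  0],
                      [-1,  4, -1],
                      [ 0, -1,  0]], dtype=float)

def edge_map(image, threshold=None):
    """Enhance edges, tabulate a histogram, and threshold to a binary map."""
    enhanced = np.abs(convolve2d(image, LAPLACIAN))
    hist, _bin_edges = np.histogram(enhanced, bins=16)
    if threshold is None:
        # Crude stand-in for the histogram analysis described in the text.
        threshold = enhanced.mean()
    binary = (enhanced > threshold).astype(np.uint8)
    return binary, hist
```

Each stage (the enhanced image, the histogram, the binary edge map) is a distinct data product derived from the one before it, which is precisely the multi-stage dependence discussed in the surrounding text.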
Several observations may be made with respect to this example. First, at least four separate images are generated during the processing, each depending upon earlier stages of processing. Second, the entire process, and in particular the computation of the edge-enhanced image by convolution of an edge-enhancement mask with the original image, is time-consuming. Thus, if an error is made in processing the image data during one of the later stages, e.g. if the thresholds are selected incorrectly in the computation of the edge map, it would be an unnecessarily wasteful use of computer time to reprocess the original image through all of the earlier stages. Third, information useful in later stages of analysis, such as the pixel-intensity histogram, should be permanently associated with the image so as to be available for later analyses without having to be recomputed.
In common practice, when processing images through a series of stages such as the process described above, the available memory is arranged in a series of buffers, each of which may contain an image or a portion of an image. These buffers may be randomly selected and bear no particular relationship to one another. It is therefore incumbent upon the user to impose an order on this memory appropriate to the task at hand; this organization is not automatic in the current state of the art. Further, any auxiliary information describing or documenting the image must be stored elsewhere, and connecting that information with the corresponding images is again a separate task imposed on the user.
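The burden described above, keeping an image, its descriptive information, and its processing lineage together, can be illustrated with a small data structure. This is a hypothetical sketch; the names `ImageBuffer`, `history`, `annotations`, and `derive` are invented for the example and do not refer to any existing system.

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class ImageBuffer:
    """A buffer that keeps an image together with its auxiliary information,
    rather than leaving the user to associate the two by hand."""
    pixels: np.ndarray
    name: str
    history: list = field(default_factory=list)      # processing steps applied so far
    annotations: dict = field(default_factory=dict)  # e.g. histogram, chosen thresholds

    def derive(self, new_pixels, step):
        """Create a successor buffer, carrying the processing lineage along."""
        return ImageBuffer(new_pixels,
                           f"{self.name}/{step}",
                           history=self.history + [step])
```

With such a structure, the edge-enhanced image could be created as `original.derive(enhanced_pixels, "laplacian")`, and its histogram stored in `annotations` so that later stages need not recompute it.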
In addition, when storing an image in a file, the general practice is to retain only minimal information about the image, typically its size and name.
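One way to retain richer information alongside the pixels is to write a descriptive header ahead of the raw data. The format below is a hypothetical sketch (a length-prefixed JSON header followed by raw pixel bytes), not an existing file format.

```python
import json
import struct
import numpy as np

def save_with_metadata(path, pixels, metadata):
    """Write a JSON header (size, dtype, plus arbitrary metadata such as a
    histogram) followed by the raw pixel data."""
    header = dict(metadata, shape=list(pixels.shape), dtype=str(pixels.dtype))
    blob = json.dumps(header).encode("utf-8")
    with open(path, "wb") as f:
        f.write(struct.pack("<I", len(blob)))  # 4-byte little-endian header length
        f.write(blob)
        f.write(pixels.tobytes())

def load_with_metadata(path):
    """Read back both the pixel array and its associated metadata."""
    with open(path, "rb") as f:
        (hlen,) = struct.unpack("<I", f.read(4))
        header = json.loads(f.read(hlen).decode("utf-8"))
        pixels = np.frombuffer(f.read(), dtype=header["dtype"])
        pixels = pixels.reshape(header["shape"])
    return pixels, header
```

Because the metadata travels in the same file as the image, information such as the pixel-intensity histogram remains permanently associated with the image it describes.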