The features of the present invention may be used in the printing arts, and more particularly, in digital image processing and electrophotographic printing. In digital image processing, it is commonly known that various image processing operations may be applied to specific areas, windows, or pixels of an image. It is also known that the image processing operations to be applied to individual pixels of the image may be controlled or managed by a pixel location comparison scheme. In other words, the coordinate location of each pixel is compared with a series of window coordinate boundaries to determine within which window the pixel lies. Once the window is determined, the appropriate processing operation can be defined for the digital signal at that pixel location.
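The pixel location comparison scheme described above can be sketched as follows. The window coordinates and operation names here are purely illustrative assumptions, not drawn from any of the systems discussed below:

```python
# Illustrative sketch of a pixel location comparison scheme.
# Each window is a rectangle given by (left, top, right, bottom),
# inclusive; the windows and operation names are hypothetical.
WINDOWS = {
    "sharpen": (10, 10, 50, 40),
    "smooth":  (30, 20, 80, 60),
}

def windows_containing(x, y):
    """Return the names of all windows whose boundaries contain pixel (x, y)."""
    return [name for name, (l, t, r, b) in WINDOWS.items()
            if l <= x <= r and t <= y <= b]
```

A pixel at (40, 30) lies inside both example windows, so both associated operations would be selected for it; a pixel at (0, 0) lies in no window and receives only the default processing.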
Previously, various approaches have been devised for the control of digital image processing and window management. These approaches will be discussed in more detail below.
U.S. Pat. No. 4,760,463 to Nonoyama et al. discloses an image scanner including an area designating section for designating a rectangular area on an original and a scanning mode designating section for designating an image scanning mode within and outside the rectangular area designated by the area designating section. Rectangular areas are defined by designating the coordinates of an upper left corner and a lower right corner. Subsequently, counters are used for each area boundary to determine when the pixel being processed is within a specific area.
U.S. Pat. No. 4,780,709 to Randall discloses a display processor, suitable for the display of multiple windows, in which a screen may be divided into a plurality of horizontal strips which may be a single pixel in height. Each horizontal strip is divided into one or more rectangular tiles. The tiles and strips are combined to form the viewing windows. Since the tiles may be a single pixel in width, the viewing window may be arbitrarily shaped. The individual strips are defined by a linked list of descriptors in memory, and the descriptors are updated only when the viewing windows on the display are changed. During generation of the display, the display processor reads the descriptors and fetches and displays the data in each tile without the need to store it intermediately in bit map form.
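A strip-and-tile descriptor list in the spirit of the Randall display processor might be sketched as below. The field names and the helper are invented for illustration; the actual descriptor format is defined by the patent itself:

```python
# Hedged sketch of a strip/tile descriptor list: each horizontal strip
# holds a linked list of tiles, and each tile names the display data it
# covers. All field names here are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tile:
    width: int                    # tile width in pixels (may be 1)
    source: str                   # label for the data the tile displays
    next: Optional["Tile"] = None # next tile in the strip, or None

@dataclass
class Strip:
    height: int                   # strip height in scanlines (may be 1)
    first_tile: Optional[Tile] = None

def strip_width(strip):
    """Total width in pixels covered by a strip's tile list."""
    total, t = 0, strip.first_tile
    while t is not None:
        total += t.width
        t = t.next
    return total
```

Because a tile may be one pixel wide and a strip one scanline high, a chain of such descriptors can trace an arbitrarily shaped viewing window without an intermediate bit map.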
U.S. Pat. No. 4,887,163 to Maeshima discloses an image processing apparatus having a digitizing unit capable of designating desired areas in an original image and effecting the desired image editing process inside and outside the designated areas. A desired rectangular area is defined by designating two points on the diagonal corners of the desired rectangular area. During scanning, a pair of editing memories are used interchangeably to enable, first, the editing of thresholded video data from a CCD, and secondly, the writing of editing information for use with subsequent video data. The editing memories comprise a memory location, one byte, for each CCD element, the location holding image editing data which determines the editing process to be applied to the signal generated by the respective CCD element.
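The interchangeable pair of editing memories can be pictured as a double buffer, sketched below. The class and method names are assumptions for illustration only:

```python
# Illustrative double-buffering of per-element editing memories, loosely
# modeled on the Maeshima apparatus: while one memory's byte codes drive
# editing of the current video data, the other is rewritten with editing
# information for subsequent data. Names are hypothetical.
class EditingMemories:
    def __init__(self, num_elements):
        # one editing byte per CCD element, two interchangeable banks
        self.banks = [bytearray(num_elements), bytearray(num_elements)]
        self.active = 0  # index of the bank currently driving editing

    def write_next(self, codes):
        """Write editing codes into the inactive bank for later use."""
        self.banks[1 - self.active][:] = codes

    def swap(self):
        """Exchange the roles of the two editing memories."""
        self.active = 1 - self.active

    def code_for(self, element):
        """Editing code controlling the signal from one CCD element."""
        return self.banks[self.active][element]
```

The swap between scanning passes is what lets editing and rewriting proceed concurrently without the two uses contending for one memory.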
U.S. Pat. No. 4,897,803 to Calarco et al., the entire teachings of which are hereby incorporated by reference, discloses a method and apparatus for processing image data having a token associated with each data element, thereby identifying the element's location in an image. During processing of the image data, the token for each data element is passed through address detection logic to determine if the token identifies the application of an image processing operation.
U.S. Pat. No. 4,951,231 to Dickinson et al. discloses an image display system in which image data is stored as a series of raster scan signals in a data processor system. The position and size of selected portions of an image to be displayed on a display screen can be transformed in response to input signals received from a controlled input device. The display device includes a control program memory which stores control programs for a plurality of transform operations, such as rotation, scaling, or extraction.
U.S. patent application, Ser. No. 07/809,807, to Williams et al. discloses a system which improves upon the above-described systems by reducing the amount of non-data information needed to identify the image processing operation that is to be applied to each data element. An apparatus manages the processing of an array of digital signals representing an original image in order to produce an array of modified digital signals. The image processing apparatus is able to operate on non-overlapping rectangular regions or tiles defined with respect to the input signal array, and to thereby identify image processing effects to be applied to the signals lying within the tiles. In response to the identified image processing effects defined for each signal, image processing hardware within the system is selectively enabled to process the signals. The apparatus uses an effect pointer for each of a plurality of non-overlapping tile regions within the image data to selectively enable the image processing operations associated with those effects for signals within the regions.
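The effect-pointer mechanism can be sketched as a per-tile index into a table of effect combinations. The table contents and function names below are hypothetical examples, not the actual structures of the Williams et al. application:

```python
# Sketch of tile-based effect selection: each non-overlapping tile
# carries an effect pointer (an index) into a table of effect
# combinations, which selectively enables the image processing
# hardware for signals within that tile. Entries are hypothetical.
EFFECT_TABLE = [
    set(),                        # 0: no effects (bypass)
    {"filter"},                   # 1: two-dimensional filter only
    {"filter", "trc_adjust"},     # 2: filter plus TRC adjustment
]

def effects_for_tile(tile_effect_pointers, tile_id):
    """Resolve a tile's effect pointer into its set of enabled operations."""
    return EFFECT_TABLE[tile_effect_pointers[tile_id]]
```

Because many tiles can share one pointer value, the per-tile overhead stays small: the non-data information is one small index per tile rather than a full description of the operations.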
A brief description of this conventional system will be given below with respect to FIGS. 1-4. The entire contents of U.S. patent application, Ser. No. 07/809,807, are hereby incorporated by reference.
FIG. 1 schematically depicts the various components of a digital image processing hardware module that might be used in an electroreprographic system for the processing and alteration of video signals prior to output on a xerographic printing device. Image processing module 20 generally receives offset and gain corrected video signals on input lines 22. The video input data may be derived from a number of sources, including a raster input scanner, a graphics workstation, or electronic memory or similar storage elements. Subsequently, module 20 processes the input video data according to control signals from microprocessor 24 to produce the output video signals on line 26. As illustrated, module 20 may include an optional segmentation block 30 which has an associated line buffer or line buffers, two-dimensional filter 34, and an optional one-dimensional effects block 36. Also included in module 20 is scanline buffer memory 38 comprising a plurality of individual scanline buffers for storing the context of incoming scanlines.
Segmentation block 30, in conjunction with its associated scanline buffer, parses the incoming video data to automatically determine those areas of the image which are representative of a halftone input region. Output from the segmentation block (Video Class) is used to implement subsequent image processing effects in accordance with the type or class of video signals identified by the segmentation block. A remaining portion of the input video image may be processed with an edge enhancement filter to improve fine line and character reproduction when thresholded. It is noted that the segmentation block 30 may also parse the video data into other classes, such as continuous tone regions, color text regions, error diffusion regions, etc.
A two-dimensional (2D) filter block 34 processes the incoming corrected video in accordance with a set of predefined image processing operations, as controlled by a window effects selection and video classification. As illustrated by line buffer memory 38, a plurality of scanlines of incoming video data may be used to establish the context upon which the two-dimensional filter(s) and subsequent image processing hardware elements are to operate.
Subsequent to two-dimensional filtering, the optional one-dimensional (1D) effects block is used to alter the filtered or unfiltered video data in accordance with a selected set of one-dimensional or two-dimensional video effects. As in the two-dimensional filter, the one-dimensional effects block also includes a bypass channel, where no additional effects would be applied to the video, thereby enabling the 8-bit filtered video to be passed through as output video. It is also noted that two-dimensional effect blocks may be used in conjunction with the one-dimensional effect block.
Selection of the various combinations of "effects" and filter treatments to be applied to the video stream is performed by microprocessor 24. Through the establishment of window tiles, independent regions of the incoming video stream, portions selectable on a pixel by pixel basis, are processed in accordance with predefined image processing parameters or effects. The activation of the specific effects is accomplished by selectively programming the features prior to or during the processing of the video stream. The data for each pixel of image information, as generated by the tiling apparatus and video classification, has an associated single bit or field identifier to control the image processing operations performed thereon.
Referring now to FIG. 2, an example array of image signals 50 is depicted having overlapping windows 52 and 54; the windows are used to designate different image processing operations, or effects, to be applied to the image signals in the array. In general, windows 52 and 54 serve to divide the array into four distinct regions, A-D. Region A includes all image signals outside of the window regions. Region B encompasses those image signals which fall within window 52 and outside of window 54. Similarly, region D includes all image signals within window 54 lying outside of window 52, while region C includes only those image signals which lie within the boundaries of both windows 52 and 54, the region generally referred to as the area of "overlap" between the windows.
In FIG. 3, image array 50 of FIG. 2 has been further divided into a plurality of independent, non-overlapping tiles, which are generally defined by transitions between the different regions identified in FIG. 2. For instance, tile 1 is the region extending completely along the top of array 50. Tile 2 is a portion of the region that is present between the left edge of the image array and the left edge of window 52. Continuing in this fashion, region A of FIG. 2 is determined to be comprised of tiles 1, 2, 4, 5, 9, 10, 12, and 13. Similarly, region B is comprised of tiles 3 and 6.
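The division of the array into the four regions of FIG. 2 follows directly from membership in the two windows. A minimal sketch, with window coordinates invented for the example:

```python
# Illustrative classification of an image signal into regions A-D of
# FIG. 2 from membership in two overlapping windows. The coordinates
# assigned to windows 52 and 54 here are made up for the example.
W52 = (10, 10, 60, 50)   # (left, top, right, bottom), inclusive
W54 = (40, 30, 90, 80)

def in_window(x, y, window):
    l, t, r, b = window
    return l <= x <= r and t <= y <= b

def region(x, y):
    """Return 'A'-'D' per FIG. 2: A outside both windows, B inside 52
    only, D inside 54 only, C in the area of overlap."""
    in52 = in_window(x, y, W52)
    in54 = in_window(x, y, W54)
    if in52 and in54:
        return "C"
    if in52:
        return "B"
    if in54:
        return "D"
    return "A"
```

The tiles of FIG. 3 are then the maximal rectangles over which this classification, and hence the selected effects, is constant.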
The resolution of the tile boundaries is a single pixel in the fast-scan direction, and a single scanline in the slow-scan direction. The high resolution of the boundaries enables the processing of windows or regions having complex shapes. The image processing operations specified for each of the tiles which comprise a window or region are controlled by a window control block present within the two-dimensional block 34 of FIG. 1. It is noted that the window control block may be separate from the two-dimensional block 34.
With respect to FIG. 4, window control block 80 is used to control operation of two-dimensional filter control block 82, as well as to send a window effects signal to the subsequent one-dimensional block, block 36 of FIG. 1, via output line 84. In operation, the two-dimensional filter, consisting of blocks 88a, 88b, 90, 92, and 94, receives image signals (SL0-SL4) from scanline buffer 38 and processes the signals in accordance with control signals generated by filter control block 82. More specifically, slow scan filter blocks 88a and 88b continuously produce the slow-scan filtered output context, which is selected by MUX 90 on a pixel-by-pixel basis for subsequent processing at fast-scan filter 92. Fast-scan filter 92 then processes the slow-scan context to produce a two-dimensional filtered output which is passed to MUX 94. MUX 94, controlled by filter control block 82, is the "switch" which selects between the filtered output(s) and the filter bypass, in accordance with the selector signal from filter control 82, thereby determining which video signals are to be placed on VIDEO OUT line 96. Two-dimensional convolution can also be used as a filtering technique as discussed in U.S. patent application Ser. No. 07/809,897 to Clingerman et al. The entire contents of this patent application (Ser. No. 07/809,897) are hereby incorporated by reference.
The bit positions for the window effects are significant in the conventional system. For example, a first bit position D0 may determine whether the dynamic range adjustment will be carried out on all image signals lying within a tile. Similarly, a second bit position D1 may control the application of a tonal-reproduction-curve (TRC) adjustment operation. Third and fourth bit positions D2 and D3 may be determinative of the masking operation to be employed on the video signal. A fifth bit position D4 may control the application of a moire reduction process to the video signals to eliminate aliasing caused by scanning of an original document with periodic structures (e.g., halftone patterns). Thus, in this conventional system, the controlling of the image processing operations is strictly dependent on the binary (logic) value residing in a single bit or field.
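The single-bit layout above can be sketched as a small decode routine. The bit assignments mirror the example (D0 dynamic range, D1 TRC, D2-D3 masking, D4 moire reduction), but the decode logic and operation names are illustrative assumptions:

```python
# Sketch of the conventional single-bit effect word: each operation is
# hard-wired to a fixed bit position. Names here are illustrative.
D0_DYNAMIC_RANGE = 1 << 0
D1_TRC_ADJUST    = 1 << 1
D2_D3_MASK       = 0b11 << 2   # two-bit masking field
D4_MOIRE         = 1 << 4

def decode_effects(word):
    """Return the operations enabled by a conventional effect word."""
    ops = []
    if word & D0_DYNAMIC_RANGE:
        ops.append("dynamic_range")
    if word & D1_TRC_ADJUST:
        ops.append("trc")
    masking = (word & D2_D3_MASK) >> 2
    if masking:
        ops.append(f"masking_mode_{masking}")
    if word & D4_MOIRE:
        ops.append("moire_reduction")
    return ops
```

Note the rigidity this layout implies: a new operation requires claiming a new bit position, widening the word and every piece of hardware that carries it.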
Although Williams et al. (Ser. No. 07/809,807) discloses a system which reduces the memory requirements, the system adds to the complexity of the system's architecture, which hinders expansion and modification. More specifically, by utilizing single bits or fields within the data word to designate a specific image processing operation, the data word must be expanded each time a new image process is added to the system, or the data word must be reformatted each time an image process is replaced with another image process. This control configuration reduces the ability of the system's architecture to adapt to changes in the image processing scheme.
As discussed above with respect to conventional systems, single effect bits or effect fields, attached to each pixel, are used to control which image processing operators are to be applied. In contrast, the present invention uses an entire effect data word which is attached to each pixel. The use of a data word reduces the number of bits used for control purposes, which results in a lower cost and hardware pin count, establishes a common interface between all image processing modules, and provides the capability to easily expand as future imaging operators are added.
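The contrast can be sketched as follows: instead of one bit per operation, each pixel's effect word is interpreted as an index into an extensible table of operation sets. The table contents and names below are hypothetical, not the actual encoding of the invention:

```python
# Hedged sketch of effect-word control: the per-pixel word selects an
# entry in a table of operation sets, so the word stays narrow however
# many operations exist. Entries and names are illustrative.
EFFECT_SETS = [
    [],                                  # 0: bypass, no effects
    ["dynamic_range"],                   # 1
    ["dynamic_range", "trc"],            # 2
    ["moire_reduction"],                 # 3
]

def operations_for_pixel(effect_word):
    """Map a pixel's effect data word to its enabled operations."""
    return EFFECT_SETS[effect_word]

# Extending the system: register a new combination of operators by
# appending a table entry, without widening or reformatting the
# effect words already in use.
EFFECT_SETS.append(["edge_enhance", "trc"])
```

This indirection is what yields the expandability claimed above: existing effect words keep their meaning while new operator combinations are added at unused index values.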