This invention relates in general to image forming devices and, more particularly, to page printer memory management.
Maximizing print speed is an important goal for any printer. Meeting this goal can be difficult in laser page printers because of resource contention, resource limitation, resource fragmentation, and lack of advance knowledge of job and page content.
In printers that employ laser engines as the "print mechanism", data must be provided at a speed that is fast enough to keep up with the print action (which can be measured by the rate of movement of the paper past the imaging drum). In such printers, formatting is performed either on the host computer, with large volumes of rasterized image data shipped to the printer at high speed, or on a formatter within the printer itself. Since a conventional laser printer engine operates at a constant speed, if rasterized image data is not available when a previous segment of image data has been imprinted, a "print overrun" or "punt" occurs and the page is not printable. In essence, in order to avoid a punt, the Image Processor that rasterizes the image data "races" the Output Video Task that images the data onto the imaging drum. This is commonly termed "racing the laser".
Several methods have been used in the art to avoid print overruns. First, a full raster bit map of an entire page may be stored in the printer so that the print mechanism always has rasterized data awaiting printing. However, this solution requires large amounts of random access memory (RAM) for each page. A second method for assuring the availability of print data to a laser printer is to construct a display list from the commands describing a page. During formatting, a page description received from a host is converted into a series of simple commands, called display commands, that describe what must be printed. The display commands are parsed and sorted according to their vertical position on the page. The page is then logically divided into sections called bands (or page strips). The sum of the display commands for each band is referred to as the display list for that band. The bands are then individually rendered (i.e., the described objects in the display list for each band are rendered) into a raster bit map and passed to the print engine for printing. This procedure substantially reduces the amount of RAM required for the print image.
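The banding scheme described above can be sketched as follows. This is an illustrative simplification only; the dictionary-based command representation, the `y` position key, and the band height value are hypothetical and are not taken from any cited patent:

```python
# Sketch of building per-band display lists: display commands are sorted by
# vertical position, then grouped into fixed-height bands (page strips).

BAND_HEIGHT = 128  # scanlines per band (hypothetical value)

def build_display_lists(display_commands, page_height):
    """Sort display commands by vertical position and group them into bands.

    Each returned list is the 'display list' for one band of the page.
    """
    num_bands = (page_height + BAND_HEIGHT - 1) // BAND_HEIGHT
    bands = [[] for _ in range(num_bands)]
    for cmd in sorted(display_commands, key=lambda c: c["y"]):
        bands[cmd["y"] // BAND_HEIGHT].append(cmd)
    return bands

cmds = [{"y": 300, "op": "text"}, {"y": 10, "op": "rule"}, {"y": 140, "op": "image"}]
lists = build_display_lists(cmds, page_height=600)
# The command at y=10 lands in band 0, y=140 in band 1, y=300 in band 2;
# each band can now be rendered to raster independently, one at a time.
```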
When the display commands are rendered at a fast enough pace, the same memory used to store a first band can be reused for a subsequent band further down the page. For example, in certain prior art printers it is known to employ three raster buffers for storing bands. During page processing, the first buffer is reused for a fourth band on the page, the second buffer is reused for a fifth band, and so on. However, under standard (generally maximum) page-per-minute performance, little time is left between the completion of printing one band and the moment the next band must be rasterized into the same print buffer.
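The three-buffer reuse pattern above amounts to a simple round-robin assignment. A minimal sketch, assuming three buffers as in the prior art example (the function name is hypothetical):

```python
NUM_BUFFERS = 3  # three raster band buffers, as in the scheme described above

def buffer_for_band(band_index):
    """Round-robin buffer assignment: band 0 -> buffer 0, band 1 -> buffer 1,
    band 2 -> buffer 2, then band 3 reuses buffer 0, band 4 reuses buffer 1...

    The reuse is only safe once the buffer's previous band has finished
    printing, which is exactly the timing pressure described in the text.
    """
    return band_index % NUM_BUFFERS
```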
Under certain circumstances, "complex" bands will include many display commands and require a longer than normal time for rasterization. Additionally, to rasterize a band (whether "complex" or not), more memory space may be required than is currently available, depending upon several factors associated with the printer, including memory size, memory fragmentation, the job to be printed, and other printer system activities. In the case of a complex band, rasterization time may increase to such an extent that the succeeding band cannot be delivered on time, thus causing a print overrun to occur. Accordingly, pre-rasterization is commonly performed on a complex band to ensure that the video imaging race with the laser will not cause a print overrun.
Racing the laser requires determining the best trade-off between printer memory and real-time processing requirements. In a properly working printer, a print overrun is avoided because the Image Processor task just manages to win every race with the direct memory access (DMA) video output task. It is undesirable to avoid print overruns by unilaterally pre-rasterizing every video band because (even with compression) doing so consumes too much precious printer memory for video DMA buffers. As such, one process has been developed to minimize the number of pre-rasterized video buffers and is disclosed in U.S. Pat. No. 5,129,049 to Cuzzo et al., the disclosure of which is incorporated in full herein by reference. This approach was extended with compression and empirical Image Processor cost measurements in U.S. Pat. No. 5,479,587 to Campbell et al., also incorporated in full herein by reference.
In Campbell et al., in the event of low available memory (i.e., a memory fault) for processing print commands, each band of a page may be reevaluated and passed through several steps in an attempt to reduce memory allocation requirements and free up more memory. For example, each band may be rasterized and compressed using one of several compression techniques. After a band is rasterized and compressed, the memory allocation requirement for that band is determined. If that requirement is less than the memory allocation requirement of the display list for the same band (relative to a comparison threshold), then the rasterized and compressed version is used and stored in memory rather than the display list. The rasterized and compressed band is stored in memory by being dissected into fragments (segments) that are then linked and distributed into "holes" in the memory. The "holes" are, typically, smaller isolated free areas of memory surrounded by larger unavailable (used) areas. On the other hand, if the rasterized and compressed band's memory allocation requirement is not less than the memory allocation requirement of its display list (per the threshold), then the band may be processed again using a different compression technique. These steps of rasterizing a band, compressing it, comparing the size of the compressed version to the display list, and determining whether the memory allocation requirement of the compressed version is less than that of the display list, may be repeated multiple times using differing compression techniques and/or thresholds until the band's allocation requirement is less than that of its display list.
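The rasterize-compress-compare loop described above can be sketched as follows. This is an illustrative approximation only: `zlib` compression levels stand in for the "several compression techniques" (the patent does not specify zlib), and the function name, `display_list_size`, and `threshold` parameters are hypothetical:

```python
import zlib

def try_reduce_band(display_list_size, raster_bytes,
                    levels=(1, 6, 9), threshold=1.0):
    """Try successive compression techniques on a band's rasterized data and
    accept the first result whose size beats the band's display list size
    (scaled by a comparison threshold). Returns the compressed bytes to be
    dissected into memory 'holes', or None if the display list should be kept.
    """
    for level in levels:
        compressed = zlib.compress(raster_bytes, level)
        if len(compressed) < display_list_size * threshold:
            return compressed  # store this instead of the display list
    return None  # no technique met the threshold; keep the display list
```

A mostly blank band compresses far below its display list size and is accepted, while noisy raster data that compresses poorly falls through every technique and returns `None`.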
Once all of the bands have been rasterized, compressed, evaluated, and distributed (when the threshold was met), processing of the print commands resumes at the point where the low-available-memory event was previously detected (i.e., the point that initiated the reevaluation process for the page). The band that was previously attempting a memory allocation (but detected the low-available-memory event) should now have a better chance of satisfying its memory allocation.
Distinguishing now from Campbell et al., U.S. Pat. No. 5,483,622 (Zimmerman et al.) discloses a Page Printer Having Automatic Font Compression and is also incorporated herein by reference in full. In Zimmerman et al., in the event of low available memory for processing print commands, alternative steps occur to alleviate the low memory error including: (i) compressing raster graphic images, and (ii) if no raster graphic images are present or if compression of the raster graphic images does not remove the low memory error, then compressing font characters. Additionally, a large size font whose size exceeds a threshold may automatically be compressed, regardless of a memory low/out signal being present.
Although these cited memory processing techniques often enable a memory allocation request to be satisfied, fragmentation of the memory may not be reduced. For example, fragmentation may not be reduced while composing the current page because each band on the page is processed independently of all other bands. Namely, if a first band is rasterized, distributed and stored, and some memory surrounding a distributed segment of that first band is subsequently deallocated, then the first band itself causes fragmentation in the memory, since its segment remains in place even after the surrounding areas have been deallocated. This scenario may occur, for example, if a segment of the first band was stored in a hole that was created by a second band's display list, and the second band's display list was then removed from around the first band in order to render the second band's rasterized and compressed data. Disadvantageously, if the memory becomes too fragmented (i.e., too many "holes" exist throughout the memory address space) for other memory allocation requests requiring contiguous allocations of memory to be satisfied, then overall page processing is crippled and a memory out error may result. U.S. Pat. No. 6,130,759 further describes the dissecting of bands into holes in memory, further describes a method of reducing fragmentation, and is incorporated herein by reference in full.
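The core fragmentation problem can be illustrated with a trivial free-list model (hypothetical numbers; hole sizes in kilobytes): a contiguous allocation request succeeds only if some single hole is large enough, no matter how much memory is free in total:

```python
def can_allocate(free_holes, request_kb):
    """A contiguous allocation succeeds only if one hole can hold the whole
    request; total free memory is irrelevant to a contiguous request."""
    return any(hole >= request_kb for hole in free_holes)

# Four free holes scattered between used segments: 24 KB free in total.
free_holes = [4, 8, 4, 8]

can_allocate(free_holes, 8)   # True: an 8 KB hole exists
can_allocate(free_holes, 16)  # False: 24 KB free, yet no single 16 KB hole,
                              # so the contiguous request fails (fragmentation)
```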
Finally, it should be noted here that none of these image processing systems teaches a "page pipe" in connection with the memory management schemes employed. A page pipe is the sum of all the composed pages in a memory that are waiting to be processed by the print engine (i.e., waiting to be video imaged). Typically, these prior systems embody barely sufficient amounts of memory to operate and, thus, are unable to sustain a true page pipe. Specifically, such printers employ sufficient memory for composing a current page just fast enough to race the laser and, generally, for simultaneously video imaging an already composed page. But, they do not have sufficient memory and memory management resources for holding and managing pages that are already composed and just waiting to be video imaged. In other words, no page pipe is taught by these systems. Thus, conventionally, anytime a memory resource contention occurs (such as a memory allocation fault) for the current page being composed, the prior systems simply wait for the single, already composed page that is being video imaged to complete its imaging and release its memory resources.
However, more recent, larger and faster printing systems do employ page pipes to keep the printers operating at full speed. Such page pipes may vary in size to hold one or more pages that are already composed and are waiting to be video imaged. In these page pipe systems, the probability of memory fragmentation and memory resource contention is increased over non-page-pipe systems. Consequently, if the processing of print commands (i.e., composing the current page) cannot be satisfied, in some cases the printing process has been known to undesirably "pause" when there are multiple pages in the pipe. When a pause occurs, it has been recognized that the printer (the composing task) is typically waiting for the allocation in memory of a band that is required for punt (print overrun) avoidance. The allocation may not occur for a number of reasons, including memory fragmentation, resource contention, and low free memory availability. Often, however, after one or more of the pages that are in the page pipe are printed (video imaged or "flushed" from the pipe), sufficient memory becomes available so that the band allocation may generally be satisfied and, thus, conclude the "pause". Notably, during any given "pause", potential processing bandwidth is lost because, conventionally, further page composition tasks are suspended until sufficient resources become available.
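The conventional pause-and-flush behavior described above can be sketched as a retry loop. All names here are hypothetical illustrations (the patents cited do not define this API), and the free-list model is deliberately simplified:

```python
from collections import deque

def allocate_with_pipe_flush(request_kb, free_holes, page_pipe, image_page):
    """Sketch of the conventional 'pause': when a contiguous band allocation
    fails, composition waits while already-composed pages are imaged (flushed
    from the page pipe), releasing their memory, and then retries.

    Returns the index of the hole the band was carved from, or raises
    MemoryError when the pipe is empty and the allocation still fails.
    """
    while True:
        for i, hole in enumerate(free_holes):
            if hole >= request_kb:
                free_holes[i] -= request_kb  # carve the band out of this hole
                return i
        if not page_pipe:
            raise MemoryError("memory out: no composed pages left to flush")
        # The 'pause': composition blocks here until one page finishes imaging.
        freed_holes = image_page(page_pipe.popleft())
        free_holes.extend(freed_holes)
```

During the `image_page` call, composition makes no progress at all, which is exactly the lost processing bandwidth the text notes; the invention summarized below aims to reclaim that idle time.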
This "pausing" (or holding of further page composition tasks) that occurs during a multi-page print job is not only undesirable but also frustrating to a user who expects the certain page-per-minute output described by the page printer's specifications.
Accordingly, an object of the present invention is to improve consistency of page throughput in a printer through improved memory management techniques.
According to principles of the present invention in a preferred embodiment, an imaging device and method perform memory management tasks on data of a page being composed in response to feedback generated by memory management tasks that are performed on pages in the page pipe that are waiting to be imaged. The feedback is throttled relative to memory management tasks that occur in the pipe. Advantageously, the memory management tasks are performed on the page being composed without waiting for an already composed page to finish imaging. Accordingly, page throughput of a multi-page print job is improved.
In preferred embodiments, the memory management tasks include compressing, relocating, and/or compressing and relocating data on the page being composed. Page data on which the memory management tasks are performed include raster patches, fonts, patterns, video bands, monster bands and vector bands. Atomic operations and/or a critical section locking mechanism provide collision avoidance as between the memory management tasks performed on data in the pipe and a video imaging task.
Other objects, advantages, and capabilities of the present invention will become more apparent as the description proceeds.