Display systems such as televisions display full-motion video images as a series of still frames. Each frame comprises a two-dimensional array of picture elements, known as pixels, arranged in orthogonal rows and columns. The image information is transmitted in a raster-scan format, one line at a time from top to bottom. Within each line the pixel information is transmitted from left to right. Standard television systems in the United States have 480 rows with a resolution of approximately 572 pixels in each row. The Video Graphics Array (VGA) standard specifies an image of 480 rows of 640 pixels. Wide-NTSC television standards specify an image 853 pixels wide and 480 rows high. While there is no universally accepted standard for high-definition television, some formats display up to 1152 rows of 2048 columns.
Because standard television broadcasts transmit an analog signal, each pixel in a row is not sent as a discrete unit; rather, the entire row is transmitted as a continuous analog signal from left to right. Analog display devices such as cathode ray tubes (CRTs) accept raster-scan image data and project it onto the display screen in real time, one line at a time as it is received. However, many digital display systems require the data to be displayed an entire screen at a time. This requires the data to be stored as it is received until all the data for a given screen is ready to be displayed. An efficient data storage means is one aspect of this invention.
Thus far, the description of the image data transfer has referred to the image data for a single pixel as an indivisible unit. However, in digital display systems the image data for each pixel comprises one or more weighted binary bits. Digital systems that have more than one bit of image data typically transmit all bits for a given pixel in parallel. For example, a three-color display having eight bits of data for each color typically transmits one 24-bit-wide word for each pixel. If the digital display pixel can only display one bit at a time, each of the 24 bits must be displayed sequentially to create the desired intensity level and color for the pixel. One method of generating the required "gray scale" is pulse width modulation as described in commonly assigned U.S. Pat. No. 5,278,562 entitled "DMD Architecture and Timing for use in a Pulse-Width Modulated Display System."
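The pulse-width-modulation idea can be illustrated with a short sketch. The function below is hypothetical (not taken from the referenced patent): it computes the fraction of a frame period during which a 1-bit pixel is "on" when each bit of an 8-bit value is displayed for a time proportional to its binary weight.

```python
# Hypothetical sketch: pulse width modulation reproduces an n-bit intensity
# on a 1-bit display by turning the pixel on for a time proportional to
# each set bit's binary weight.

def pwm_on_time(value, bits=8, frame_time=1.0):
    """Return the total 'on' time within frame_time when bit b of `value`
    is displayed for frame_time * 2**b / (2**bits - 1)."""
    total = 0.0
    for b in range(bits):
        if (value >> b) & 1:                       # bit b is set
            total += frame_time * (2 ** b) / (2 ** bits - 1)
    return total

# A pixel value of 255 stays on for the whole frame; 0 stays off.
full = pwm_on_time(255)
dark = pwm_on_time(0)
```

The eye integrates the on/off pattern over the frame, so the perceived brightness is proportional to the binary value of the pixel word.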
Using the method taught in the above-referenced patent, and assuming an 8-bit monochrome system, the 8-bit image is displayed as a series of eight 1-bit images, or "bit planes." Each bit plane is displayed for a period of time that is directly related to the significance of the bit. For example, a bit plane composed of the most significant bit from each of the data words representing a pixel is extracted and displayed for a period of time. A second bit plane, composed of the next most significant bit from each data word, is then extracted and displayed for a period half as long as the first. This process continues until each bit in the data word has been displayed.
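The bit-plane sequence described above can be sketched as follows. This is an illustrative sketch only, assuming a frame stored as a 2-D list of 8-bit pixel values; the plane extraction and the halving of relative display times follow the description, not any particular hardware implementation.

```python
# Hypothetical sketch: split an 8-bit monochrome frame into eight 1-bit
# bit planes, MSB first, each paired with a relative display duration
# that halves with each less significant bit.

def bit_planes(frame, bits=8):
    """Yield (plane, duration) pairs for `frame`, a 2-D list of pixel values.

    Each plane is a 2-D list of 0/1 values; durations are relative
    (2**b for bit b), so the MSB plane is displayed longest.
    """
    for b in range(bits - 1, -1, -1):              # bit 7 down to bit 0
        plane = [[(px >> b) & 1 for px in row] for row in frame]
        yield plane, 2 ** b

frame = [[0b10110001, 0b00001111]]
sequence = list(bit_planes(frame))
# sequence[0] is the MSB plane with the longest relative duration (128);
# sequence[-1] is the LSB plane with relative duration 1.
```

Note that the relative durations sum to 255, so the total on-time for a pixel is proportional to its 8-bit value, matching the pulse-width-modulation scheme.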
The image data is received as a serial stream of pixels, with each pixel comprised of parallel data bits, and displayed as a serial stream of bit planes, with each bit plane comprised of one data bit for each pixel. An efficient means of translating the image data from the bit-parallel, pixel-serial format to the pixel-parallel, bit-serial format is desired and is one aspect of the present invention.
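The format translation described above is often called a "corner turn." The sketch below is a minimal functional illustration of the data rearrangement only, assuming the incoming pixels are available as a list of 8-bit words; it says nothing about the efficient storage means the invention addresses.

```python
# Hypothetical sketch of the corner-turn data rearrangement:
# bit-parallel, pixel-serial words in; pixel-parallel, bit-serial planes out.

def corner_turn(pixels, bits=8):
    """Convert a serial stream of n-bit pixel words into `bits` bit planes.

    Each plane is a list containing one bit per pixel; planes are
    ordered MSB first so they can be displayed bit-serially.
    """
    return [[(px >> b) & 1 for px in pixels]
            for b in range(bits - 1, -1, -1)]

planes = corner_turn([0xFF, 0x80, 0x01])
# planes[0] holds bit 7 of every pixel; planes[-1] holds bit 0 of every pixel.
```

Conceptually this is a transpose of a pixels-by-bits matrix: data arrives one row (pixel word) at a time but must be read out one column (bit plane) at a time.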