1. Field of the Invention
The present invention relates to a frame memory device which receives raster-scanned digital image signals representing a frame of image, stores the image signals in a memory, and reads out the image signals from the memory while subsampling, so as to output raster-scanned image signals at lower resolution than the original image received.
2. Related Background Art
In recent years, the need for handling digital images has been increasing. This is largely attributable to the fact that digitization has spread to such applications as image databases, DTP, photographic printing, and image transmission, and also to the fact that hardware for handling digital images has been and continues to be developed. More particularly, image input devices such as scanners, image output devices such as full-color printers and high resolution color monitors, mass storage devices such as optical discs, and personal computers are becoming highly efficient and easily available at reduced cost. For instance, image input devices such as digital cameras have become available that include two-dimensional CCD area sensors and that are able to capture motion pictures in real time. The need for handling digital images is expected to increase dramatically in the future, not only in business applications but also in individual or personal fields such as hobbies.
Frame memory devices are often used in handling digital images. For example, where one frame of television signals is to be recorded as a digital still picture in a low-speed memory device, such as an optical disc, digitized television signals are first stored in a high-speed frame memory device, and the image signals are read out at a low speed from the frame memory device and recorded in the optical disc, as known in the art. Where a recorded still picture is to be displayed on a television monitor, the recorded image signals are read out at a low speed from the optical disc and stored in the high-speed frame memory device, and then are repeatedly read out from the frame memory device at a television signal rate so that the still picture is displayed on the television monitor. The high-speed frame memory device serves as a buffer which, in the above example, provides adjustment between the high-speed input and output of television signals and the low-speed input and output to and from the optical disc.
Frame memory devices are often used in digital still cameras. Signals produced by a two-dimensional CCD area sensor must be read out at a relatively high speed so as not to be degraded. While the output signals of this CCD sensor are initially digitized to provide high-speed digital image signals, it is difficult to record these high-speed digital image signals without compressing them, and it is difficult to perform real-time complicated image compression processing of these high-speed digital image signals, in accordance with the JPEG system, for example. Even if high-speed operations are feasible for recording or compression processing of the signals, it is still advantageous to operate the relevant circuits at a low speed so as to ensure timing margins and reduce the current consumed. To address these problems, frame memory devices are used in digital still cameras. More particularly, high-speed digital signals are first stored in a frame memory device, and these signals are then read out at a relatively low speed so as to facilitate recording or image compression processing. Furthermore, where image compression is performed in two passes, so as to control a compressed data volume (code length) to be equal to or smaller than a prescribed value, the original image signals must be compressed twice, which necessitates the use of a frame memory device.
Frame memory devices are used not only for recording image signals during photographing but also for reproducing a photographed still picture and displaying it on a television monitor. Image signals recorded in a main memory medium of the camera are read out and stored in a frame memory device, after being expanded or decompressed in the case of compressed images, or without such processing in the case of non-compressed images, so that television signals of a still picture are generated by repeatedly reading out these stored signals. If a camera does not have a frame memory device, image signals must be read out from the main memory medium and output at a television signal rate after being expanded in real time in the case of compressed images or without being expanded in the case of non-compressed images. This is not an easy matter. Moreover, this reproduction process must be wastefully repeated so as to continuously display the reproduced image on the television monitor. However, if the camera is equipped with a frame memory device, the reproduction process needs to be effected only once, and thereafter the reproduced image signals are merely repeatedly read out from the frame memory device.
FIG. 8 is a block diagram showing one example of a digital still camera equipped with a frame memory device. The operation of this camera will now be briefly explained.
A light beam received through a photographing lens 10 from an object forms an image on CCDs 11. During photographing, image light is photoelectrically converted into electric analog image signals, which are then output from the CCDs 11 and converted into digital signals by means of an A/D converter 12. The digital signals are subjected to various processing operations, such as color separation, pixel interpolation, γ compensation, white balance adjustment, contour compensation, and color conversion, that are performed by a signal processing circuit (not shown), so as to provide raster-scanned signals in a predetermined format, which are then sequentially stored in a frame memory 1. Once the signals corresponding to one picture frame are stored, these signals are sequentially read out and transmitted to a compression/expansion circuit 13, and subjected to image compression processing when the camera is in a compression recording mode, so that compressed image signals are recorded in a recording medium in the form of a memory card 15, through an I/F (interface) circuit 14. When the camera is in a non-compressed recording mode, the above compression processing is not executed by the compression/expansion circuit 13, and the input signals are output to the I/F circuit 14 as they are, and recorded in the memory card 15.
During reproduction, image signals recorded in the memory card 15 are read out, and transmitted to the compression/expansion circuit 13 through the I/F circuit 14. The compressed image is expanded and decoded by the circuit 13, and the thus expanded image signals are sequentially stored in the frame memory 1. A non-compressed image is not subjected to expansion processing in the compression/expansion circuit 13, and the transmitted signals are stored as they are in the frame memory 1. The image signals stored in the frame memory 1 are read out in a raster scanning scheme at a television signal rate, and converted into analog image signals by a D/A converter 16. Thereafter, the analog image signals are processed by a signal processing circuit (not shown) into television signals, such as NTSC or PAL, which are then output from a video output terminal 17 to a television monitor 18 so that the reproduced image is displayed on the monitor.
A control circuit 9 is adapted to control operations of the camera as a whole, and respective blocks inside the camera, and includes a CPU for controlling operation sequences, a ROM that stores programs to perform the sequences, clock generators for generating clocks needed for the respective blocks inside the camera, and a signal generating circuit for supplying control signals. A display device 19 provides display of various information, such as the number of frames that have been photographed, the number of remaining frames, the frame number of the reproduced image, shutter time, and aperture, as well as various kinds of error and alarm messages. A mode selector switch 21 effects switching between a photographing mode and a reproduction mode, and switching between a compressed recording mode and a non-compressed recording mode when a picture is taken. A release switch 22 directs the camera to initiate photographing, and a frame-advance switch 20 selects an image that is to be reproduced.
FIG. 9 shows details of a frame memory portion of the camera of FIG. 8, that includes only the frame memory 1 and a portion of the control circuit 9 assigned to control the frame memory 1. Frame memory controller 2 is adapted to control writing, read-out and refreshing of the frame memory 1. Frame memory 1 and the controller 2 constitute a frame memory device. In FIG. 8, the frame memory controller 2 is incorporated in the control circuit 9.
Next, a brief explanation will be provided with respect to the block diagram of FIG. 9.
The frame memory controller 2 has four kinds of buses, and receives and generates image signals through these buses in various operation modes. The first bus is a video bus through which the input and output of raster-scanned image signals take place. The frame memory controller 2 receives the image signals through this bus when image data are sent from the A/D converter 12 during photographing, and outputs the image signals to the D/A converter 16 during reproduction of an image. Although separate buses for input and output are illustrated in FIG. 8 for the sake of clarity, these buses may be integrated into a single two-way video bus as shown in FIG. 9 (and this is the case in general). The raster-scanned signals input and output through the video bus are accompanied by horizontal synchronizing signals, vertical synchronizing signals and clocks for transferring the image signals. In the following description, the input and output of the raster-scanned image signals through the video bus will be called “video input/output”. Writing means 3 shown in FIG. 9 is a means for writing the video-input image signals into the frame memory 1. Read-out means 8 is a means for reading out the image signals stored in the frame memory 1 for video-output of the signals.
The second bus is a memory bus to which the frame memory 1 is connected. The image signals received through any of the other three buses are eventually written into the frame memory 1 through this bus. The image signals stored in the frame memory 1 are read out through this bus and output through any of the other three buses. The image signals transmitted through the memory bus are accompanied by control signals for writing and read-out of the frame memory 1 and address signals.
The third bus is a compression/expansion bus. Image signals are transmitted to the compression/expansion circuit 13 through this bus when they are compressed, and received from the compression/expansion circuit 13 through this bus when they are expanded and decoded. Where the compression/expansion circuit 13 is a JPEG circuit, the image signals are input and output in a block unit of 64 pixels (8 horizontal×8 vertical). For many commercially available JPEG LSIs, this bus allows the input and output of data in synchronization with clock pulses. The manner of data input and output depends on what kind of circuit is used as the compression/expansion circuit 13. Accordingly, the compression/expansion bus of the frame memory controller 2 must be compatible with the compression/expansion circuit 13.
The fourth bus is a host bus through which the host CPU sends commands and parameters to the controller 2 and receives status information therefrom so as to actuate or start the frame memory controller 2 and switch the operating modes. Further, in some cases, the host CPU may be able to write or read out image signals into or from the frame memory 1 through this bus.
FIG. 10 generally illustrates addresses of the frame memory 1 in which the image signals entered through the video bus will be stored. The frame memory 1 generally has a two-dimensional address structure having vertical addresses and horizontal addresses so as to store raster-scanned image signals. The vertical addresses correspond to scan lines of the image signals, and the horizontal addresses correspond to signals that belong to each scan line. This two-dimensional address structure is not necessarily a physical two-dimensional address structure, but may be a logical one, as will be explained later.
In FIG. 10, the upper portion of the figure represents low-order or subordinate vertical addresses and the lower portion represents high-order or superordinate vertical addresses, while the left portion of the figure represents low-order or subordinate horizontal addresses and the right portion represents high-order or superordinate horizontal addresses. The downward and rightward arrows shown outside the rectangles in FIG. 10 indicate directions (increasing directions) toward superordinate vertical and horizontal addresses, respectively. The raster-scanned image signals are stored such that scan lines entered earlier are stored in the low-order or subordinate vertical addresses, and scan lines entered later are stored in the high-order or superordinate vertical addresses, and such that signals that belong to each scan line and entered earlier are stored in the low-order horizontal addresses, and signals entered later are stored in the high-order horizontal addresses. Since the raster-scanned image signals are usually obtained by scanning an image from the upper side to the lower side of an image plane, and also from the left side to the right side of the image plane, the vertical position of the address shown in FIG. 10 coincides with the vertical position of the image signal in the actual image plane.
The raster-scanned image signals are stored at respective addresses of the frame memory in the orders indicated by arrows inside the rectangles of FIG. 10. More specifically, the signals belonging to one scan line are given a fixed vertical address, and written into the memory in the order of entry, with the horizontal address being incremented by one after each stored entry. When the scan line proceeds to the next one, the vertical address is incremented by one, and signals belonging to the next scan line are stored in an area of the new vertical address in the same manner as in the previous scan line. Storage of the signals equivalent to one frame is accomplished by repeating this procedure from the first scan line to the last scan line. Where the image signals are entered in an interlace scan mode, the vertical address is incremented by two rather than by one, so that signals in odd-numbered fields and signals in even-numbered fields are respectively stored. FIG. 10 shows the manner in which the image signals are stored in the non-interlace scan mode, and the manner in which the image signals are stored in the interlace scan mode.
When the image signals stored in the frame memory 1 are generated again as raster-scanned signals from the video bus, those signals stored earlier can be first read out according to the order of the arrows shown internally of the rectangles in FIG. 10. If the output signals are to be arranged in the interlace scan scheme, the signals are read out with the vertical address being incremented by two as in the case of storage of these signals.
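The address-sequencing rule described above can be sketched as follows (a minimal Python illustration; the function name, and the assumption that the field at even vertical addresses is handled first, are illustrative and not part of the device):

```python
def vertical_address_sequence(num_lines, interlace=False):
    """Return the vertical (scan-line) addresses in the order they are written."""
    if not interlace:
        # Non-interlace scan: the vertical address is incremented by one per line.
        return list(range(num_lines))
    # Interlace scan: the vertical address is incremented by two, so one field
    # fills the even addresses and the other field fills the odd addresses.
    first_field = list(range(0, num_lines, 2))
    second_field = list(range(1, num_lines, 2))
    return first_field + second_field
```

Read-out for interlaced output follows the same sequence, since the vertical address is again incremented by two.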
The width of the video bus and memory bus, the number of lines required for transmitting control signals, the order of input and output of the respective color signal components, and other detailed specifications vary depending upon the format of the image signal, described later. As for the format of the image signal, a YCBCR image signal is often used to represent a color image in the case of a digital still camera, since the YCBCR image signal requires a reduced amount of data to be handled as compared with an RGB image signal. While the Y signals are unchanged in data amount, the CB and CR signals require a smaller band than the RGB image signal, resulting in a reduced number of pixels required (hereinafter CB and CR signals will be referred to as C signals). Therefore, even in the case where a limited number of pixels is provided, as in single-chip color CCDs, the pixels can be effectively utilized, and, when image compression is performed, the image data can be compressed with increased efficiency.
In a single-chip color filter having the so-called Bayer arrangement as shown in FIG. 11, for example, a large number of pixels is assigned to the G components, which have the greatest influence on the Y signals that govern the resolution, whereas smaller numbers of pixels are assigned to the B and R components for producing the C signals. This color filter is characterized in that the G components are arranged checker-wise, whereas the B and R components are placed in phase-shifted positions. Further, the B and R components are respectively disposed in alternate lines, that is, in every other line. Thus, the ratio of the numbers of pixels is G:B:R=2:1:1.
The YCBCR signal is generally obtained from the RGB signal (or vice versa) according to the following arithmetic expressions:

Y = 0.299R + 0.587G + 0.114B  (1)

CB = B − Y  (2)

CR = R − Y  (3)
It should be noted that the respective color signals that appear on the right sides of the above expressions (1), (2) and (3) must be of values defined by coordinates in the same space. In the case of single-chip color CCDs having the Bayer arrangement, however, each pixel can have only one kind of color signal. In order to obtain the YCBCR image signal for each pixel according to the above expressions, the values of the two other kinds of color signals that are not present in the relevant pixel must be known. To this end, the values of the missing color signals can be obtained by effecting pixel interpolation by an appropriate method. While the YCBCR image signal can be obtained by performing such signal processing, the following two kinds of signals are actually employed in many operations.
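Expressions (1) to (3) can be written directly as code (a minimal Python sketch; plain floating-point values, no quantization):

```python
def rgb_to_ycbcr(r, g, b):
    """Convert one pixel from RGB to YCbCr per expressions (1)-(3)."""
    y = 0.299 * r + 0.587 * g + 0.114 * b   # (1)
    cb = b - y                              # (2)
    cr = r - y                              # (3)
    return y, cb, cr
```

For a neutral gray (R=G=B), CB and CR vanish, which is why the C signals carry only color-difference information.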
4:2:2 Signal
The number of pixels having Y signals is the same as that of a CCD as counted in the horizontal and vertical directions (in the case of the Bayer arrangement, the number of pixels of Y signals is twice that of G components).
The number of pixels having C signals is half the number of pixels of Y signals as counted in the horizontal direction, and is the same as the number of pixels of Y signals in the vertical direction (both CB signals and CR signals are present in all lines).
Sampling coordinates of the color components are shown in FIG. 12, where ∘ indicates a point where all of Y, CB and CR signals are present, and • indicates a point where only Y signal is present.
4:1:1 Signal
The number of pixels of Y signals is the same as the number of pixels of a CCD as counted in both the horizontal and vertical directions (in the case of the Bayer arrangement, the number of pixels of Y signals is twice that of pixels of G components).
The number of pixels of C signals is half the number of pixels of Y signals as counted in both the horizontal and vertical directions (both CB signals and CR signals are present in the same lines, and these lines are placed alternately).
Sampling coordinates of the color components are shown in FIG. 13. In FIG. 13, ∘ indicates a point where all of Y, CB and CR signals are present, and • indicates a point where only Y signal is present.
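The two formats above differ only in how the C-signal pixel count relates to the Y-signal pixel count; a hypothetical helper makes this explicit (a Python sketch; the format names follow this description, in which the 4:1:1 signal halves the C resolution in both directions):

```python
def c_pixel_counts(y_width, y_height, fmt):
    """Return (horizontal, vertical) pixel counts of the C signals."""
    if fmt == "4:2:2":
        return y_width // 2, y_height          # half horizontally only
    if fmt == "4:1:1":
        return y_width // 2, y_height // 2     # half in both directions
    raise ValueError(fmt)
```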
Since R, G and B have positive values, the Y signal always has a positive value, whereas the C signal may have a positive or negative value. The peak and bottom levels of the Y signal are equal to those of the R, G, B color components according to expression (1). The peak and bottom levels of the C signal are positive and negative values, respectively, whose absolute values are equal to each other (the middle value is 0), according to expressions (1), (2) and (3). The digital values of these signals are determined depending upon which numerical values are given to the reference levels (the black level and peak white level of the Y signal, and 0 and the peak or bottom level of the C signal).
In the case of an 8-bit YCBCR image signal, two kinds of values as indicated below are often used.
Where the image signal is entered for use in a personal computer:

Y: 0-255
CB: 1-255 (128 corresponds to 0.)
CR: 1-255 (128 corresponds to 0.)
Where the image signal is used as a digital television signal:

Y: 16-235 (0 and 255 are reserved for synchronizing signals.)
CB: 16-240 (128 corresponds to 0.)
CR: 16-240 (128 corresponds to 0.)
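The two level assignments can be sketched as a quantization rule (a Python illustration; `y_norm` runs from black (0) to peak white (1), `c_norm` from the bottom (-1) to the peak (+1) of the C signal, and the scale factors follow the ranges listed above):

```python
def to_8bit(y_norm, c_norm, convention):
    """Map normalized Y and C levels to 8-bit codes under the given convention."""
    if convention == "pc":
        y = round(y_norm * 255)            # 0..255
        c = round(128 + c_norm * 127)      # 1..255, 128 corresponds to 0
    elif convention == "tv":
        y = round(16 + y_norm * 219)       # 16..235
        c = round(128 + c_norm * 112)      # 16..240, 128 corresponds to 0
    else:
        raise ValueError(convention)
    return y, c
```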
There will next be described the manner in which raster-scanned YCBCR image signals each having 8 bits are received from and transmitted to the video bus. The video bus may consist of a Y bus for 8-bit Y signals, and a C bus similarly for 8-bit C signals (16 bits in total), and the Y signals and C signals (16-bit signals) are input and output in parallel with each other through the respective buses, as shown in FIG. 14. The CB signals and CR signals flowing through the C bus are multiplexed such that a CB signal appearing in one pixel is followed by a CR signal in the next pixel. Since the number of lines of Y signals is the same as that of C signals in the 4:2:2 signals, input and output of all of the lines are conducted with 16 bits. In the case of 4:1:1 signals, on the other hand, C signals are only present in every other line, and there are lines where the input and output of only Y signals (8-bit signals) take place. It is to be noted, however, that 4:1:1 signals are often produced by skipping every other line of 4:2:2 signals when the signals are stored in the memory, that is, 4:2:2 signals are received from the video bus, and some lines are skipped when the signals are written into the memory. Upon output of the signals, line interpolation is performed so as to generate the signals as 4:2:2 signals from the video bus. The signals on the video bus are assumed to be 4:2:2 signals in the following description.
When a signal sampled at sampling points as shown in FIG. 12 is input and output in a raster scan mode, it is appropriate to multiplex the CB signals and CR signals such that a CB signal in one pixel is followed by a CR signal in the next pixel, as shown in FIG. 14. The number appended to each signal, as seen in Y0, Y1, . . . , CB0, CR0 . . . , corresponds to the horizontal sampling coordinate shown in FIG. 12, and is also related to the order of input and output of the signal and its storage address in the frame memory. Since the signals shown in FIG. 14 do not belong to any specific scan line, the number that corresponds to the vertical sampling coordinate, the order of scan lines, and the vertical storage address in the frame memory are not appended to each signal.
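The multiplexing order on the C bus can be sketched as follows (a minimal Python illustration; the signal labels mirror those of FIG. 14):

```python
def c_bus_order(num_pairs):
    """Return the C-bus signal sequence: CBn of a pixel pair, then CRn."""
    seq = []
    for n in range(num_pairs):
        seq.append(f"CB{n}")   # CB sample of pair n
        seq.append(f"CR{n}")   # CR sample of the same pair, one pixel later
    return seq
```

The Y bus carries Y0, Y1, Y2, ... in parallel with this sequence, two Y samples per CB/CR pair.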
When 16-bit signals are input and output through the video bus, a 16-bit memory bus is needed to enable real-time writing/reading of these signals into/from the memory. With the memory bus also divided into a Y bus and a C bus, Y signals and C signals are prevented from being mixed in the memory. The frame memory 1 may be assumed to consist of an independent Y memory and C memory. Where a memory device having a 16-bit bus is used, for example, the Y memory and C memory are physically the same memory, but may be considered as logically separate memories since the high-order or superordinate bytes and the low-order or subordinate bytes are independent of each other. Where two memory devices each having an 8-bit bus are used, on the other hand, the Y memory and C memory are physically as well as logically separate memories. The difference between one memory device with a 16-bit bus and two memory devices with 8-bit buses is whether the address signals of the two memories are the same as or different from each other. The same address signals are used for the high-order bytes and low-order bytes in the one memory device with the 16-bit bus, whereas different address signals may be used in the two physically separate memory devices. In an ordinary frame memory device, however, the address signals of the Y memory and C memory are derived from a common source. Thus, there is no difference in the device as a whole between one memory device and two memory devices.
The use of the same address signals in the Y memory and C memory does not cause any operational inconvenience or problems, since it is only at the time of video input/output that signals must be simultaneously written into and read out from the Y memory and C memory. During video input/output, signals are merely entered into or generated from the Y memory and C memory in parallel with each other in the order shown in FIG. 14, and therefore the same addresses can be used in the Y memory and C memory without causing any problem or inconvenience. If writing/reading can be separately performed with respect to the individual color signals, the addresses of the Y memory and C memory need not be independent of each other. It is to be noted, however, that while data are written into one of the Y memory and C memory, no data should be written into the other memory, and thus write-enable signals must be separately set in these memories. In a memory device with a 16-bit bus, write-enable signals are usually separately present in the high-order bytes and low-order bytes, thus avoiding problems. On the other hand, common read-enable signals may be used in the Y memory and C memory, since either one of the Y signal and C signal that are simultaneously read out may be discarded if it is considered unnecessary. In the ordinary frame memory device wherein the address signals and read-enable signals of the Y memory and C memory are derived from common sources while the write-enable signals are derived from separate sources for these memories, the number of signal lines does not increase undesirably, and the size of the device can be accordingly reduced.
The compression/expansion bus and host bus may usually be 8-bit buses. While the image signals are processed in a block unit of 64 pixels (8 horizontal×8 vertical) in the JPEG compression/expansion mode as described above, the signals of one block must be composed of the same color signal components. Thus, an 8-bit compression/expansion bus may be used without causing a problem, to allow input and output of single color signal components during processing of at least one block. This fact also justifies the use of common address signals in the Y memory and C memory.
When a color image is processed in the JPEG compression/expansion mode, the processing may take place successively, plane by plane, for each of color signal components, or may be conducted while switching the color signal components each time a certain number of blocks are processed. The former case is called non-interleave, and the latter case is called block interleave. Where YCBCR 4:2:2 image signals are compressed according to the JPEG system, block interleave is often used in which a unit (called MCU) of four blocks, Y, Y, CB, CR, is repeatedly processed in this order.
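The block-interleave order for YCBCR 4:2:2 signals can be sketched as follows (a minimal Python illustration of the MCU sequence described above):

```python
def block_interleave(num_mcus):
    """Return the block sequence when each MCU is processed as Y, Y, CB, CR."""
    mcu = ["Y", "Y", "CB", "CR"]
    return [block for _ in range(num_mcus) for block in mcu]
```

Each MCU covers a 16x8-pixel region: two horizontally adjacent 8x8 Y blocks, plus one 8x8 block each of the horizontally halved CB and CR samples.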
As later described, it is advantageous if the frame memory device that permits input and output of YCBCR 4:2:2 image signals is generally constructed as shown in the block diagram of FIG. 15. First there will be a general discussion of a frame memory 1 that stores these image signals.
With the development of highly integrated memory devices in recent years, a large-capacity memory, such as a frame memory for storing images, can now be easily realized. However, it is not easy to construct a compact frame memory. Although SRAM, which operates at a high speed with one-dimensional addresses and need not be refreshed, is easy to use, its capacity is not as large as that of DRAM, and thus the frame memory is often constituted by DRAMs.
Since the frame memory is required to read/write high-speed signals as in the above-described video input/output, a high-speed page mode is usually used when DRAM is used as the memory device. If the high-speed page mode is used, those signals that belong to one scan line are stored in an area of the same ROW address. These signals are then sequentially stored in the order of entry from low-order COLUMN addresses toward high-order COLUMN addresses. When a scan line proceeds to the next one, the ROW address is updated. The update of the ROW address is usually conducted during a horizontal blanking period since an overhead time occurs upon update of the ROW address. Thus, two-dimensional addresses that correspond to raster scanning are used in the DRAM, wherein the ROW addresses correspond to vertical addresses and COLUMN addresses correspond to horizontal addresses.
While logical two-dimensional addresses can be considered in the case of SRAM, physical two-dimensional addresses are not present in SRAM. Since SRAM originally has one-dimensional addresses, it has the advantage that the number of horizontal addresses can be freely chosen even when two-dimensional addresses are considered. Accordingly, only a reduced waste of the memory occurs when an image whose number of effective horizontal pixels is not 2^N (N being a positive integer) is stored. Where DRAM is used in the high-speed page mode, on the other hand, the number of COLUMN addresses is uniquely determined due to the presence of physical two-dimensional addresses. When the number of effective horizontal pixels is smaller than the number of COLUMN addresses, therefore, a redundant portion of the COLUMN addresses becomes a wasted area that is not used. In addition, DRAM needs to be refreshed, and such refreshing is usually effected utilizing a horizontal blanking period in the case of video input/output. Where DRAM is used as the frame memory device in the above manner, a high-performance frame memory controller is needed for performing the necessary controls. A specific method for using DRAM to provide the frame memory will be described hereinafter.
As an example, storage of an image of 640 horizontal×480 vertical pixels (VGA) requires the DRAM to provide at least 480 ROW addresses and at least 640 COLUMN addresses. In many cases, however:

Number of ROW addresses ≧ Number of COLUMN addresses.  (4)

Thus, the memory needs to be constructed in some fashion so as to reduce the above-described redundant portion of the COLUMN addresses and the number of DRAMs. For example, 4:2:2 YCBCR image signals can be stored in VGA size with high efficiency if two 4M-bit DRAMs, each having 512 ROW×512 COLUMN addresses and a bus width of 16 bits, are used. In this case, the number of COLUMN addresses is doubled, i.e., increased to 1024, by using two DRAMs, whereby the YCBCR image signal (4:2:2) of 640 horizontal×480 vertical pixels can be stored. COLUMN addresses 0-511 are present in the first DRAM, and COLUMN addresses 512-1023 are present in the second DRAM. In this case, COLUMN addresses 640-1023 and ROW addresses 480-511 become a wasted area that is not used.
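The two-DRAM address mapping described above can be sketched as follows (a minimal Python illustration; chip 0 holds COLUMN addresses 0-511 and chip 1 holds COLUMN addresses 512-1023):

```python
def dram_location(row, col):
    """Map a logical (ROW, COLUMN) address to (chip, ROW, COLUMN) in two 512x512 DRAMs."""
    assert 0 <= row < 512 and 0 <= col < 1024
    chip = col // 512            # which of the two DRAMs holds this column
    return chip, row, col % 512  # physical address within that DRAM
```

For a VGA image, (row, col) pairs with row ≧ 480 or col ≧ 640 are simply never used, which is the wasted area noted above.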
There will next be explained how the 4:2:2 YCBCR image signals are stored in the frame memory 1. As described above, raster-scanned image signals entered through the video bus are generally stored at respective addresses in the order indicated by the arrows inside the rectangles of FIG. 10. It follows from this fact, as well as from the manner of signal input from the video bus as shown in FIG. 14 and the structure of the frame memory as shown in FIG. 15, that the YCBCR 4:2:2 image signals are stored in the frame memory at the addresses shown in FIG. 16. Ymn represents a Y signal stored at an address of the Y memory 6 where ROW (vertical)=m and COLUMN (horizontal)=n. The CB and CR signals are represented in a similar manner. In FIG. 16, m=0 corresponds to the first line (top line) and n=0 corresponds to the pixel in the first column (the leftmost column). In this case, the first address is 0 in both the horizontal and vertical directions, as in many frame memory devices, but it is not necessarily required to be 0. Since the CB and CR signals are alternately entered through the C bus such that the CB signal in one pixel is followed by the CR signal in the next pixel, these signals are also alternately stored at successive COLUMN addresses in the C memory 7. In FIG. 16(b), CB signals are stored at even-numbered COLUMN addresses (in this case, 0 is the first COLUMN address), and CR signals are stored at odd-numbered COLUMN addresses. In this example, it will be understood that the image signals are entered from the C bus in the order of

CB→CR→CB→CR→ . . .  (5)

In this regard, attention should be given to the sampling coordinates of the CR signals. For example, CRm0 is stored at the ROW address of m and COLUMN address of 1, which are identical with the addresses of Ym1. This does not mean, however, that the sampling coordinates of CRm0 are the same as those of Ym1. Normally, the sampling coordinates of CRm0 are identical with the sampling coordinates of Ym0 and CBm0, as shown in FIG. 12.
Although the respective Y, CB and CR image signals are sampled on the same coordinates, the CR signals are not stored at the same addresses as the other two signals; their COLUMN addresses are shifted by 1 from those of the other signals. To multiplex the CB signals and CR signals that are sampled on the same coordinates in the order of ⑤ indicated above, the CR signals can be delayed by one sampling clock with respect to the CB signals.
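The address layout described above can be sketched as follows. This is a minimal illustration, assuming the 0-based ROW/COLUMN addressing of FIG. 16; the function names are hypothetical and do not appear in the specification.

```python
# Hypothetical sketch of the FIG. 16 address layout for YCBCR 4:2:2 signals.
# Addresses are (ROW, COLUMN) pairs, 0-based as in the example above.

def y_address(m, n):
    """Ymn is stored at ROW m, COLUMN n of the Y memory 6."""
    return (m, n)

def c_address(m, k, component):
    """CBmk and CRmk share sampling coordinates with Ym(2k), but in the
    C memory 7 CB occupies an even COLUMN and the corresponding CR the
    following odd COLUMN (CR being delayed by one sample when multiplexed)."""
    if component == "CB":
        return (m, 2 * k)
    if component == "CR":
        return (m, 2 * k + 1)
    raise ValueError("component must be 'CB' or 'CR'")
```

For example, `c_address(m, 0, "CR")` yields COLUMN 1, the same COLUMN address as Ym1, even though CRm0 is sampled at the coordinates of Ym0.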
Development of the Invention
Recently, two-dimensional CCD area sensors having a large number of pixels have appeared. For instance, a CCD sensor has been announced that has square pixels whose effective pixel number is 1280 horizontal×1024 vertical. This CCD sensor was originally developed for use as a sensor of an image input device for computers, and is advantageously used for displaying an image on a monitor of a personal computer, rather than a television monitor. The specifications of the CCD sensor that has square pixels whose effective pixel number is 1280×1024 are compatible with an image signal format for personal computers. However, there is a need to reproduce such an image signal and output it in a normal television signal format.
A high resolution digital still camera using the above CCD may be used for entering an image into a personal computer, and may be provided with a reproduction function to permit reproduced image signals to be displayed on a conventional home television monitor, so that the photographed image can be easily confirmed. Naturally, the format of the reproduction signal generated from the camera should be compatible with that of the television monitor to be used. NTSC and PAL are typical television signal formats, and many digital still cameras are able to output reproduction signals in these television signal formats. Some of these cameras are adapted to switch between NTSC and PAL signals and output the selected signal.
There will next be considered a case where a still picture having the effective pixel number of 1280×1024 (square pixels) is to be reproduced and generated as a television signal in NTSC format. Image signals representing the still picture are stored in the frame memory device of the digital still camera provided with a video reproduction output. These signals are to be read out from the frame memory device at a rate of NTSC television signals, to provide reproduced image signals in the NTSC format. However, the effective pixel number of NTSC television signals is far smaller than 1280×1024, though it depends on the selection of sampling frequencies. In the case of square pixels, for example, the number of the pixels is about 640 in the horizontal direction×486 in the vertical direction, which is less than one fourth of an image of 1280×1024 pixels. The sampling frequency may be about 12.2727 MHz. It is apparent that an image having the effective pixel number of 1280×1024 cannot be properly reproduced in the form of NTSC signals. The image displayed would lack an area of more than ¾ of the entire image. As is understood from this, there is a need to appropriately reduce the number of pixels so as to display substantially the entire area of the image. As described hereinafter, the present invention addresses and solves this problem in an efficient manner.
An image having the effective pixel number of 1280×1024 (square pixels) can be subsampled to reduce the number of both horizontal and vertical pixels by one half, respectively. Image signals whose horizontal and vertical pixels are both reduced by one half have the effective pixel number of 640×512, which is close to the number of square pixels of NTSC television signals. Further, since the horizontal and vertical pixels are reduced at the same rate, the pixel aspect ratio measured before and after the subsampling does not change, and the square pixels retain their shape. Although the image thus subsampled lacks about 26 lines if it is reproduced in the form of NTSC television signals, an image having the correct aspect ratio can be displayed on an NTSC television monitor. Lack of about 26 lines in the image is not of great significance, and it is found easy and most convenient to subsample the image to one half.
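The arithmetic behind the one-half subsampling can be checked with a short sketch. The NTSC effective pixel counts used here are the approximate square-pixel figures given above; all names are illustrative.

```python
# Half-subsampling arithmetic for a 1280x1024 square-pixel still picture
# targeted at NTSC (approximately 640x486 effective square pixels).

SRC_W, SRC_H = 1280, 1024
NTSC_W, NTSC_H = 640, 486        # approximate effective NTSC pixel counts

sub_w, sub_h = SRC_W // 2, SRC_H // 2    # 640 x 512 after 1/2 subsampling
missing_lines = sub_h - NTSC_H           # about 26 lines do not fit

# Both axes are reduced at the same rate, so the pixel aspect ratio is
# unchanged and square pixels remain square.
aspect_before = SRC_W / SRC_H
aspect_after = sub_w / sub_h
```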
When image signals of 1280×1024 pixels recorded on the memory card 15 are to be displayed on an NTSC television monitor, the image signals may be subsampled when read out from the memory card so that the horizontal and vertical pixels are preliminarily reduced by one half, respectively, and then stored in the frame memory 1. For television monitoring, the entire pixel data stored in the frame memory may then be merely read out from the frame memory in the order of raster scanning. On the other hand, when image signals that have been stored in the frame memory 1 during photographing are previewed for confirmation, the image signals of 1280×1024 pixels stored in the frame memory 1 must be read out from the frame memory while subsampling the image signals. Accordingly, the frame memory device must have a video output mode for subsampling the image signals.
If the image signals having the effective pixel number of 1280×1024 and stored in the frame memory 1 consist of YCBCR 4:2:2 signals, they are stored at the addresses shown in FIG. 16. When these YCBCR 4:2:2 signals are subsampled to one half in the horizontal and vertical directions and generated as raster-scanned signals, the signals are read out by accessing every other ROW address, and accessing the COLUMN addresses in the following orders:
Ym0→Ym2→Ym4→Ym6→Ym8→Ym10 . . .   ⑥
CBm0→CRm0→CBm4→CRm4→CBm8→CRm8 . . .   ⑦
When interlace scanning is performed in actual operations, ROW addresses must be accessed in the following orders:
Odd-numbered field: 0→4→8→12→16 . . .   ⑧
Even-numbered field: 2→6→10→14→18 . . .   ⑨
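The read-out address sequences for the one-half subsampled, interlaced output can be sketched as small generators. This assumes the 0-based addresses of FIG. 16; the function names are hypothetical.

```python
def y_columns(count):
    """COLUMN addresses of the Y signals that remain effective after
    1/2 subsampling: 0, 2, 4, 6, 8, 10, ..."""
    return [2 * i for i in range(count)]

def c_columns(pairs):
    """COLUMN addresses of the effective CB/CR pairs after 1/2 subsampling:
    CB at COLUMN 4k, and the CR sampled at the same coordinates at 4k+1."""
    cols = []
    for k in range(pairs):
        cols.extend([4 * k, 4 * k + 1])
    return cols

def field_rows(count, odd_field=True):
    """Interlaced ROW addresses: 0, 4, 8, ... for the odd-numbered field
    and 2, 6, 10, ... for the even-numbered field."""
    start = 0 if odd_field else 2
    return [start + 4 * i for i in range(count)]
```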
While the Y signals and C signals thus read out are generated in parallel with each other from the video bus, attention must be given to the phases of these signals. Although signals such as Ym2, Ym6, Ym10 . . . and signals such as CRm0, CRm4, CRm8 . . . must be generated in the same phase for the corresponding signals, as shown in FIG. 14 (the signals shown in FIG. 14 are not indicated as belonging to any specific scan line, and thus the "m" representing the ROW address is not shown), the COLUMN addresses at which the corresponding Y and CR signals are stored are not the same. Since the address signals of the Y memory 6 and C memory 7 are usually derived from a common source, the Y signal and C signal that are output by a single memory access are read out from the same COLUMN address. Thus, where a Y signal and the corresponding CR signal are stored at different COLUMN addresses as in the above case, only one of these signals can be read out by a single memory access. Storage of YCBCR 4:2:2 signals at the addresses shown in FIG. 16 within the frame memory involves such problems as described below.
(1) The COLUMN addresses in which Ym0, Ym4, Ym8, Ym12 . . . are stored are the same as the COLUMN addresses in which CBm0, CBm4, CBm8, CBm12 . . . are stored, and these signals are effective even after subsampling the image signals to one half. Accordingly, the corresponding two signals may be simultaneously output to the video bus by a single memory access even if the address signals of the Y memory 6 and C memory 7 are derived from a common source. Ym2, Ym6, Ym10, Ym14 . . . and CRm0, CRm4, CRm8, CRm12 . . . are also effective signals after subsampling the image signals to one half. However, the COLUMN addresses in which these two kinds of signals are stored are not the same, and thus the corresponding Y signal and CR signal cannot be simultaneously output by a single memory access. The memory needs to be accessed three times rather than twice so as to read out four signals in total, i.e., two Y signals, one CB signal and one CR signal.
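The three-accesses-for-four-signals situation can be illustrated with a toy model of a common address source. This is a sketch under the FIG. 16 layout assumptions; the functions are hypothetical.

```python
def signals_at(col):
    """With a common address source, one access at COLUMN col returns the
    Y signal and the C signal stored at that COLUMN (CB at even COLUMNs,
    CR at odd COLUMNs, per the FIG. 16 layout)."""
    c = ("CB", col) if col % 2 == 0 else ("CR", col)
    return {("Y", col), c}

def accesses_for_group(k=0):
    """Count accesses needed for the four signals that remain effective
    after 1/2 subsampling: Y at COLUMNs 4k and 4k+2, CB at 4k, CR at 4k+1."""
    needed = {("Y", 4 * k), ("Y", 4 * k + 2), ("CB", 4 * k), ("CR", 4 * k + 1)}
    accesses = 0
    col = 4 * k
    while needed:
        needed -= signals_at(col)   # each access yields one Y and one C word
        accesses += 1
        col += 1
    return accesses
```

Running `accesses_for_group()` shows that three accesses are consumed per group of four effective signals, since the access at COLUMN 4k+1 yields a Y signal that is discarded after subsampling.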
(2) When the address signals of the Y memory and C memory are derived from a common source, it is found that the memory needs to be accessed three times to read out the above-indicated four signals. In order to output the Y signal and C signal in phase and in parallel with each other onto the video bus, the rate of reading out data from the memory must be 1.5 times the data output rate of the video bus (or greater). This is because four signals are transmitted from the video bus with two clock pulses since the Y signal and C signal are in phase, and therefore the memory access must be performed three times within the period of two clock pulses. This situation is illustrated in FIG. 17, which shows that three cycles of memory read-out clock are included in a period of two cycles of data transfer clock for the video bus, and that signals equivalent to 3 words must be read out from the memory during the time in which the signals equivalent to two words (one word being a pair of Y signal and C signal output in parallel with each other) are generated from the video bus. X marks on the memory bus represent data that are not used. The signals on the video bus are delayed with respect to the signals on the memory bus so that the Y and C signals are synchronized with each other to be simultaneously generated from the video bus in parallel with each other.
When the data transfer clock for the video output has a clock speed of 12.2727 MHz as described above, the memory read-out clock needs to have a clock speed of about 18.4091 MHz (or higher), which is 1.5 times that of the data transfer clock. However, since the use of a 1.5-times clock is inconvenient, a clock of 24.5454 MHz, which is twice the speed of the data transfer clock, is often used; the memory read-out clock and the video output clock can then be conveniently obtained from the same source. Where the clock whose speed is twice that of the data transfer clock for the video bus is used for reading out signals from the memory, the above-indicated four signals are read out in a period of three clock cycles (one read-out operation being performed per clock cycle), and the remaining one clock cycle provides an idling time. This situation is illustrated in FIG. 18, in which four cycles of the memory read-out clock are included in a period of two cycles of the data transfer clock for the video bus. Of these four cycles, the initial three cycles provide a period in which signals are read out from the memory, and the last cycle provides an idling time in which no signals are read out and no data are present.
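The clock-rate arithmetic above can be verified with a brief sketch. The constants restate figures from the text; the variable names are illustrative.

```python
# Read-out clock requirements when the Y and C memories share addresses.

VIDEO_CLOCK_MHZ = 12.2727    # NTSC square-pixel data transfer clock
ACCESSES_PER_GROUP = 3       # three accesses yield the four effective
                             # signals (two Y, one CB, one CR)
WORDS_PER_GROUP = 2          # two Y/C word pairs leave the video bus
                             # in the same period

# Minimum read-out clock: 3 accesses within the period of 2 video clocks.
min_read_clock = VIDEO_CLOCK_MHZ * ACCESSES_PER_GROUP / WORDS_PER_GROUP
# about 18.4091 MHz

# In practice a doubled clock is convenient, sharing a source with the
# video clock; one of every four read-out cycles then idles.
double_clock = VIDEO_CLOCK_MHZ * 2                  # 24.5454 MHz
idle_cycles_per_group = 2 * 2 - ACCESSES_PER_GROUP  # one idle cycle
```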
When a clock having 1.5 times or twice the speed of the data transfer clock for the video bus is used for reading out data from the memory, the data are read out from the memory at a rate 1.5 times or twice the data transfer rate of the video bus, causing such problems as increased current consumption and difficulty in controlling the operation timing.
(3) Where the address signals of the Y memory 6 and C memory 7 are derived from a common source, data must be read out from the memory at a rate that is 1.5 times or twice the rate of data output of the video bus. If the address signals of the Y memory 6 are independent of those of the C memory 7, however, different addresses may be given to the respective memories, and thus the Y signal and CR signal that are stored at different COLUMN addresses may be simultaneously read out. The above-indicated four signals can then be read out by accessing the memory twice. Consequently, the memory read-out rate can be made equal to the data output rate of the video bus. However, this arrangement requires separate sets of addresses for the Y memory 6 and C memory 7, which results in an increased number of signal lines on the memory bus. In addition, separate address generators are needed for the Y memory 6 and C memory 7, which results in an increased scale of the relevant circuit.