The present invention relates to a display driving circuit, and particularly to a technique useful for reducing the storage capacity of a memory that stores the display grayscale levels of the preceding frame when a display driving circuit is driven according to the overdrive technique, which improves the response characteristic of a display device, thereby reducing the deterioration of image quality.
In recent years, it has become common to display television pictures through one-segment broadcasting (hereinafter referred to as “ONE-SEG broadcasting”) on mobile devices such as cellular telephones. In addition, electronic game software programs and the like provided for cellular telephones are increasing in number. Therefore, the need to display moving images clearly on cellular telephones is growing.
In general, the OD technique (OD is an abbreviation for “OverDrive”) is known as a method for improving the response characteristic of a liquid crystal display. However, OD requires a frame memory in principle, and this memory has been a problem in that it increases the chip cost.
FIGS. 8A-8H are a series of diagrams for explaining the basic principle of OD driving, namely driving according to the OD technique.
For example, FIG. 8A shows a display image of the preceding frame, and FIG. 8B shows a display image of the current frame. The voltage applied to the liquid crystal at a position 801 on the display screen is low before the current frame and high in and after the current frame, as shown in FIG. 8C. However, the response speed of liquid crystal is slow; even if the applied voltage is switched quickly, the target brightness cannot be reached within one frame period (1/60 second), and a shortage 802 of brightness arises as shown in FIG. 8D because the brightness of a liquid crystal panel responds slowly.
Hence, when the image of FIG. 8A is scanned in the horizontal direction, both ends of the image look blurred as shown in FIG. 8E. To reduce such blurring, in a frame in which the display grayscale level changes, a higher grayscale level voltage, prepared by adding a correction amount 803 to the grayscale level voltage 804 of the current frame as shown in FIG. 8F, is applied. As a result, the display brightness of the liquid crystal panel can be made to converge to the target brightness 806 within one frame period, as shown in FIG. 8G, so that an image with less blurring is displayed as shown in FIG. 8H. Note that the correction amount 803 is the output of a function whose parameters are the grayscale level voltage 805 of the preceding frame and the grayscale level voltage 804 of the current frame. Because a grayscale level voltage and a display grayscale level are in one-to-one correspondence, the correction amount 803 can equally be expressed as the output of a function whose parameters are the display grayscale level of the preceding frame and that of the current frame. Hence, to execute OD driving, the display grayscale level of the preceding frame must be stored, and a frame memory is therefore required.
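The correction described above can be sketched as a lookup keyed by the preceding and current grayscale levels. The following is a minimal illustrative sketch; the table values, band granularity, and function names are our own assumptions, not values from any actual panel or from the invention itself.

```python
# Minimal sketch of OD correction: the overdriven level is a function of the
# preceding frame's grayscale level and the current frame's grayscale level.
# The correction table values below are illustrative only.

# Hypothetical 3x3 correction table indexed by coarse (prev, curr) level bands.
OD_TABLE = [
    [0,   16,  32],   # prev band 0 (dark)
    [-16,  0,  16],   # prev band 1 (mid)
    [-32, -16,  0],   # prev band 2 (bright)
]

def band(level):
    """Map an 8-bit grayscale level (0-255) to a coarse band 0..2."""
    return min(level // 86, 2)

def overdrive(prev_level, curr_level):
    """Return the corrected grayscale level to apply for the current frame."""
    correction = OD_TABLE[band(prev_level)][band(curr_level)]
    # Clamp to the valid 8-bit grayscale range.
    return max(0, min(255, curr_level + correction))

print(overdrive(0, 128))    # rising transition: boosted above 128
print(overdrive(128, 128))  # no transition: no correction
```

A real driver would use a much finer table (often with interpolation between entries), but the structure is the same: the output depends on both frames, which is why the preceding frame must be stored.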
On the other hand, to realize OD driving at low cost, a method is adopted in which the chip cost is cut by compressing the data before storing it in the memory. An example of this method is disclosed in JP-A-2009-109835. However, such a method has a problem: if OD processing is executed using an image of the frame preceding the current frame that has been quantized (compressed) and then decompressed, together with a current image that has not been compressed, a still image may be judged to be a moving image owing to the error resulting from the compression, and OD processing is then performed on the still image, deteriorating its image quality. To solve this problem, JP-A-2009-109835 proposes a method in which compression and decompression are performed not only on the image of the preceding frame but also on the image of the current frame, and if the decompressed image of the preceding frame coincides with the decompressed image of the current frame, OD processing is not executed, thereby preventing the image quality of a still image from being deteriorated.
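The still-image guard described above can be illustrated for a single pixel as follows. This is only a sketch of the idea; the quantizer, step size, and function names are our own stand-ins, not the actual compression scheme of JP-A-2009-109835.

```python
# Sketch of the still-image guard: both the preceding and current frames pass
# through the same lossy quantize/dequantize path, so a still pixel compares
# equal after decompression and OD processing is skipped for it.

def quantize(level, step=8):
    """Lossy compression stand-in: drop the low bits of an 8-bit level."""
    return level // step

def dequantize(q, step=8):
    """Reconstruct an approximate level from the quantized value."""
    return q * step

def should_apply_od(prev_level, curr_level, step=8):
    """Apply OD only when the two decompressed levels differ."""
    prev_rec = dequantize(quantize(prev_level, step), step)
    curr_rec = dequantize(quantize(curr_level, step), step)
    return prev_rec != curr_rec

print(should_apply_od(100, 100))  # still pixel -> False
print(should_apply_od(100, 103))  # quantizes to the same value -> False
print(should_apply_od(100, 200))  # real transition -> True
```

Because both frames see the identical lossy path, the compression error cancels out in the comparison, which is the point of the proposal.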
Further, JP-A-2007-025528 discloses a technique for preventing the deterioration of the image quality of a still image by avoiding the execution of OD processing on the still image even when a pseudo grayscale expression method referred to as FRC (Frame Rate Control) is applied to it. According to this technique, quantization data and quantization-threshold-vicinity-judging data, which show whether or not the image data has a value in the vicinity of a quantization threshold, are prepared for the current frame and the preceding frame, so that a proper judgment can be made as to whether the current frame belongs to a still image or a moving image. If the current frame is judged to belong to a still image, OD processing is not performed.
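One illustrative reading of this idea, for a single pixel, is shown below. The step size, margin, and the exact judgment rule are our own assumptions for the sketch; JP-A-2007-025528 defines its own data formats and conditions.

```python
# Illustrative sketch: alongside the quantized data, keep a one-bit flag
# marking levels close to a quantization threshold, where FRC dithering alone
# could flip the quantized value between frames without real motion.

STEP = 8      # quantization step (illustrative)
MARGIN = 1    # how close to a threshold counts as "in the vicinity"

def encode(level):
    """Return (quantized value, near-threshold flag) for an 8-bit level."""
    q = level // STEP
    near = (level % STEP) < MARGIN or (level % STEP) >= STEP - MARGIN
    return q, near

def is_still_pixel(prev_level, curr_level):
    """Judge the pixel still if the quantized data match, or if both frames
    sit in a threshold vicinity where FRC toggling explains the change."""
    q_prev, near_prev = encode(prev_level)
    q_curr, near_curr = encode(curr_level)
    if q_prev == q_curr:
        return True
    return near_prev and near_curr and abs(q_prev - q_curr) == 1

print(is_still_pixel(103, 104))  # FRC toggling across a threshold -> still
print(is_still_pixel(100, 200))  # large change -> moving
```

The extra flag costs one bit per pixel but lets the judgment distinguish FRC dithering from genuine motion, so OD is suppressed only where it would harm a still image.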
Further, turning from display driving circuits to moving-image coding, MPEG-4 AVC (H.264), one of the international standard moving-image coding methods using DCT (Discrete Cosine Transform), is described by Thomas Wiegand et al., “DRAFT ITU-T Recommendation and Final Draft International Standard of Joint Video Specification (ITU-T Rec. H.264|ISO/IEC 14496-10 AVC)”, Joint Video Team (JVT) of ISO/IEC MPEG & ITU-T VCEG (ISO/IEC JTC1/SC29/WG11 and ITU-T SG16 Q.6), 8th Meeting: Geneva, Switzerland, 23-27 May, 2003, at http://www.h.264soft.com/download/h.264.pdf (URL as found by search as of Jun. 3, 2006). Incidentally, AVC is an abbreviation for “Advanced Video Coding”. The typical compression method for moving images referred to as “MPEG-2” is standardized as ISO/IEC 13818-2. MPEG-2 is based on the general principle that the video storage capacity and the required bandwidth are reduced by removing redundant information from a video stream. Incidentally, MPEG is an abbreviation for “Moving Picture Experts Group”.
In an MPEG-2 encoding process, a video signal is first sampled and quantized in order to define the color and brightness components of each pixel of the digital video. Next, the values indicating the color and brightness components are converted to the frequency domain using DCT (Discrete Cosine Transform); the resulting transform coefficients represent the picture's brightness and color variations at different spatial frequencies. The DCT coefficients are then quantized, and the quantized coefficients are coded by VLC (Variable Length Coding), by which the video stream is further compressed.
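The transform-and-quantize stage of this chain can be illustrated on a single row of samples. This is a toy sketch, not MPEG-2 itself: real MPEG-2 operates on 8x8 blocks with standardized quantization matrices, zig-zag scanning, and VLC tables, and the sample values and step size below are our own.

```python
import math

# Toy illustration of the MPEG-2 encode chain on one 8-sample row:
# transform (DCT-II), then quantize. After quantization, most high-frequency
# coefficients of smooth picture content become zero, which is what makes
# the subsequent variable-length coding effective.

def dct_1d(samples):
    """Orthonormal DCT-II of a list of samples."""
    n = len(samples)
    out = []
    for k in range(n):
        s = sum(x * math.cos(math.pi * (2 * i + 1) * k / (2 * n))
                for i, x in enumerate(samples))
        scale = math.sqrt(1 / n) if k == 0 else math.sqrt(2 / n)
        out.append(scale * s)
    return out

row = [52, 55, 61, 66, 70, 61, 64, 73]       # brightness samples (one row)
coeffs = dct_1d(row)
quantized = [round(c / 10) for c in coeffs]  # coarse uniform quantizer
print(quantized)
```

The energy concentrates in the first (DC) coefficient, while the remaining coefficients are small; the quantizer step controls the trade-off between bit rate and reconstruction error.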
On the other hand, in coding according to MPEG-4 AVC (H.264), syntax elements are coded by a highly efficient entropy coding (variable length coding). A syntax element is a piece of information conveyed by the syntax, such as a DCT coefficient or a motion vector. In MPEG-4 AVC (H.264), such syntax elements are coded with an Exponential Golomb code, a universal code adopted for this highly efficient entropy coding.
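The unsigned Exponential Golomb code mentioned above has a simple construction: a value N is written as the binary representation of N+1, preceded by as many zeros as that representation has bits after its leading 1. The following sketch implements this scheme (the function names are ours).

```python
# Sketch of the unsigned Exp-Golomb code used by H.264 for syntax elements:
# code number N is written as [leading zeros][1][suffix bits], derived from N+1.

def exp_golomb_encode(n):
    """Encode a non-negative integer as an Exp-Golomb bit string."""
    bits = bin(n + 1)[2:]              # binary representation of n + 1
    return "0" * (len(bits) - 1) + bits

def exp_golomb_decode(bitstring):
    """Decode one Exp-Golomb codeword from the front of a bit string."""
    zeros = 0
    while bitstring[zeros] == "0":
        zeros += 1
    value_bits = bitstring[zeros:2 * zeros + 1]
    return int(value_bits, 2) - 1

for n in range(5):
    print(n, exp_golomb_encode(n))     # 0->"1", 1->"010", 2->"011", ...
```

Because small values get short codewords and no code table must be stored, the code is well suited to syntax elements whose distribution is concentrated near zero.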