1. Field of the Invention
The present invention relates to an image distortion correcting device, and more particularly, to a device and method for correcting distortion that occurs when a high-resolution image is displayed on a low-resolution display unit.
2. Related Art
Mobile communication devices including a camera module or digital cameras display an image taken via a lens on a display module and store the image in a storage medium such as a memory. A basic configuration of a camera display unit included in the mobile communication devices or the digital cameras is shown in FIG. 1.
FIG. 1 is a diagram illustrating a basic configuration of a camera display unit and FIG. 2 is a diagram illustrating image conversion formats until a taken image is displayed on a display module.
Referring to FIG. 1, the camera display unit includes a camera module 10, a camera control processor 20, and a display module 30. The camera module 10 includes a lens 11, an image sensor 13, and an image signal processor 15.
Light from a subject is imaged through the lens 11 and is transmitted to the image sensor 13.
The image sensor 13 reproduces an image using the characteristic that semiconductors are sensitive to light. The image sensor 13 includes an array of small photosensitive diodes called pixels. The pixels sense the intensity and wavelengths of the light from the subject, read them as electric values, and amplify the electric values to levels that can be processed. That is, the image sensor 13 is a semiconductor device that converts an optical image into electrical signals.
In the image sensor 13, plural pixels are arranged in a two-dimensional structure and the respective pixels convert the intensity of incident light into electrical signals. By measuring the electrical signals, the intensity of light incident on the pixels can be acquired and an image in the unit of pixels can be constructed using the electrical signals.
Since each pixel of the image sensor 13 generally extracts pixel data of only a single color among the plural colors included in the image, the information on the missing colors should be estimated from the information on the surrounding pixels. This single-color sampling results from a color filter array (CFA), which has a structure in which color filter elements, each allowing a pixel of the pixel array to transmit only light of a single color, are regularly arranged. The color filter array may have various patterns depending on how the color filter elements are arranged; an RGB Bayer pattern is most widely used. Here, R means red, G means green, and B means blue.
Half of the total number of pixels is assigned to green (G), and a quarter of the total number is assigned to each of red (R) and blue (B). Each pixel carries a red, green, or blue filter in a repeating pattern to acquire color information. For example, the Bayer pattern repeats a 2×2 arrangement.
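The Bayer sampling described above can be sketched as follows. This is a minimal illustration, not the claimed device: the RGGB ordering of the 2×2 tile and the `bayer_mosaic` function name are assumptions for the example, since the source does not fix a particular ordering.

```python
import numpy as np

def bayer_mosaic(rgb):
    """Sample a full-color image into a single-plane Bayer mosaic.

    Each sensor pixel keeps only one color, following the repeating
    2x2 arrangement (RGGB assumed here):  R G
                                          G B
    """
    h, w, _ = rgb.shape
    mosaic = np.empty((h, w), dtype=rgb.dtype)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red sites
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green sites on red rows
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green sites on blue rows
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue sites
    return mosaic
```

Because green occupies both diagonal positions of the 2×2 tile, exactly half of the mosaic's pixels carry green samples, matching the proportions stated above.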
The electrical signals based on the Bayer pattern in the image sensor 13, that is, raw data having a Bayer format, are transmitted to the image signal processor 15. Here, it is assumed that the raw data has an A×B resolution which is the resolution of the taken image.
The image signal processor 15 converts the raw data having the Bayer format into interpolated RGB data, obtained by interpolating the raw data so that each pixel has red, green, and blue pixel data. The image signal processor 15 then converts the interpolated RGB data into YUV data and transmits the YUV data to the camera control processor 20. Here, it is assumed that the YUV data has a C×D resolution smaller than that of the raw data of the taken image.
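The interpolation step can be sketched in its simplest form: estimating a missing green value at a red or blue site from the four green neighbors. This is a hedged illustration only; the `interpolate_green` helper is hypothetical, and practical image signal processors use edge-aware variants rather than plain averaging.

```python
def interpolate_green(mosaic, r, c):
    """Estimate the missing green value at a red or blue site (r, c)
    as the average of its four green neighbors (bilinear interpolation).

    Minimal sketch of CFA interpolation; assumes (r, c) is an interior
    pixel so all four neighbors exist.
    """
    return (mosaic[r - 1][c] + mosaic[r + 1][c]
            + mosaic[r][c - 1] + mosaic[r][c + 1]) / 4.0
```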
The YUV format is based on the characteristic that the eye is more sensitive to luminance than to chrominance. Y represents the luminance and U and V represent the chrominance. The YUV data may have formats such as YUV422, YUV420, and YUV411, which differ in the sampling ratio of the chrominance components relative to the luminance. For example, YUV422 means that Y, U, and V are sampled in the ratio 4:2:2.
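The 4:2:2 sampling can be sketched as follows, assuming the standard BT.601 luma/chroma equations (the source does not specify which conversion matrix the processor uses, so this is an illustrative choice):

```python
def luma(r, g, b):
    # BT.601 luminance: eye-weighted sum of R, G, B
    return 0.299 * r + 0.587 * g + 0.114 * b

def rgb_to_yuv422(row):
    """Pack a row of (R, G, B) pixels into YUV422 samples.

    Every pixel keeps its own Y, while one U and one V are shared by
    each horizontal pair of pixels, giving the 4:2:2 sampling ratio.
    """
    packed = []
    for (r0, g0, b0), (r1, g1, b1) in zip(row[0::2], row[1::2]):
        y0, y1 = luma(r0, g0, b0), luma(r1, g1, b1)
        # shared chroma, computed from the pair's average color
        y_avg = (y0 + y1) / 2
        u = 0.492 * ((b0 + b1) / 2 - y_avg)
        v = 0.877 * ((r0 + r1) / 2 - y_avg)
        packed.append((y0, u, y1, v))
    return packed
```

For a neutral gray pixel the chrominance terms vanish, which is why luminance alone carries most of the perceptually important detail.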
The camera control processor 20 reduces the resolution of the YUV data transmitted from the image signal processor 15 to an E×F resolution corresponding to the size of the display module 30. The camera control processor 20 then converts the YUV data into RGB data and transmits the RGB data to the display module 30 so as to display the image data thereon.
The taken image having the A×B resolution is reduced by a reduction conversion for display on the display module 30. In the course of this conversion, great distortion may occur when the reduced image is displayed on the display module 30.
In the past, technical solutions have been studied for correcting the physical properties of the image sensor and the image distortion occurring at the time of converting data.
Specifically, attention has mainly been paid to the physical distortion (due to lenses, mechanisms, and the like) occurring at the time of manufacturing the camera module 10 using the image sensor 13, and to the image distortion occurring at the time of converting data (converting the raw data into the YUV data). The luminance signal Y is used in correcting the image distortion in the YUV data output from the image signal processor 15. This correction may be effective in approximating the taken image, but there is a problem in that a distorted image is still actually displayed on the display module 30.
Specifically, in most cases, the resolution of the display module at the time of taking an image with an SXGA (1.3 M, 1280×1024) image sensor is of a VGA (640×480) class or less. Accordingly, at the time of previewing the taken image, the camera module outputs the image at a 640×480 or 800×600 resolution, not at the SXGA resolution. At this time, the image distortion occurs.
FIG. 3 is a diagram illustrating an example of an image reducing conversion and FIG. 4 is a diagram illustrating a distortion phenomenon occurring in the reduced image.
When the input image is reduced to ¼ of its size in each dimension in the course of the image reducing conversion, 4×4 pixel data is converted into 1×1 pixel data. Referring to FIG. 3, it is assumed that both the 4×4 pixel data and the 1×1 pixel data have the YUV422 format.
In this case, Y1′, U1′, V1′, and Y2′ of the 1×1 pixel data are as follows:
Y1′ = (Y1 + … + Y16)/16
U1′ = (U1 + … + U16)/16
V1′ = (V1 + … + V16)/16
Y2′ = (Y17 + … + Y32)/16
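The averaging above, applied to a sample plane, can be sketched as follows. This is a minimal illustration of the reduction step only, assuming the plane dimensions are multiples of the block size; the function name is hypothetical.

```python
import numpy as np

def reduce_by_block_average(plane, block=4):
    """Reduce a sample plane by averaging each block x block tile,
    as in Y1' = (Y1 + ... + Y16)/16.

    Assumes plane dimensions are exact multiples of the block size.
    """
    h, w = plane.shape
    # split into (rows of tiles, tile rows, cols of tiles, tile cols)
    tiles = plane.reshape(h // block, block, w // block, block)
    return tiles.mean(axis=(1, 3))
```

Averaging an entire block into one value discards the fine gradations within it, which is one way such a reduction can introduce visible artifacts in the displayed image.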
The pixel data of the image converted by the image reducing conversion differs from the raw data of the taken image. Accordingly, there is a problem in that contour-like distortion, as shown in FIG. 4, finally occurs.