Conventionally, to prevent unauthorized copying and alteration of images, research on embedding specific information in an image has been conducted extensively. This technology is called digital watermarking. For example, in a digital image of, e.g., a photograph or picture, additional information such as the name of the copyright holder and the permitted uses is embedded. Recently, a technology is being standardized by which additional information is embedded in an original image so as to be visually inconspicuous, and the image is circulated across a network such as the Internet.
Another technology being studied makes it possible to specify, from a printed product such as a paper sheet bearing an image, additional information such as the type and machine number of the printing device that printed the image. These technologies are used to prevent forgery of, e.g., paper money, stamps, and securities, which has become easier with the improved image quality of image forming apparatuses such as copying machines and printers.
For example, Japanese Patent Laid-Open No. 7-123244 has proposed a technology which embeds additional information in high-frequency regions of a color difference component and saturation component in an image to which humans have low visual sensitivity.
Unfortunately, it is very difficult for the conventional methods as described above to embed sound information and other large-volume information in an image such that these pieces of information are unobtrusive when the image is printed.
As a means for solving this problem, therefore, in Japanese Patent Laid-Open No. 2001-148778 the present applicant has proposed a method which uses the texture generated by error diffusion to artificially form combinations of quantized values that are not generated by normal pseudo halftoning, and embeds the resulting code in an image. In this method, only the shape of the texture changes microscopically, so the image quality visually remains almost unchanged from the original image. Also, different types of signals can be multiplexed very easily by changing the quantization threshold in error diffusion.
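The threshold-modulation idea can be outlined as follows. This is only an illustrative sketch, not the method of the cited publication: the use of Floyd-Steinberg error-diffusion weights and the particular threshold pair `t_on`/`t_off` are assumptions chosen for the example.

```python
import numpy as np

def embed_bit_error_diffusion(img, bit, t_on=96, t_off=160):
    """Binarize `img` (2-D array of gray levels 0-255) by Floyd-Steinberg
    error diffusion, selecting the quantization threshold according to `bit`.
    Shifting the threshold changes only the microscopic shape of the
    halftone texture; the average density is preserved by the diffused error."""
    h, w = img.shape
    work = img.astype(float).copy()
    out = np.zeros((h, w), dtype=np.uint8)
    threshold = t_on if bit else t_off  # hypothetical threshold pair
    for y in range(h):
        for x in range(w):
            old = work[y, x]
            new = 255 if old >= threshold else 0
            out[y, x] = new
            err = old - new
            # Floyd-Steinberg error distribution to unprocessed neighbors
            if x + 1 < w:
                work[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    work[y + 1, x - 1] += err * 3 / 16
                work[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    work[y + 1, x + 1] += err * 1 / 16
    return out
```

Because the quantization error is fed back to neighboring pixels, the average density of the output stays close to that of the input for either threshold; only the microscopic texture differs, which is what keeps the embedded signal visually inconspicuous.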
A conventional image processing system which prints an arbitrary image by embedding additional information in the image and extracts the embedded additional information from the printed image will be explained below. FIG. 19 is a block diagram showing the arrangement of a conventional image processing apparatus which embeds additional information in an arbitrary image and outputs the printed image. Referring to FIG. 19, an input terminal 191 inputs given multi-level image information, and an input terminal 192 inputs additional information to be embedded in this image information. This additional information can be the copyright, photographing date/time, photographing location, photographer, and the like pertaining to the input image information from the input terminal 191, or can be sound information, text document information, and the like irrelevant to the image information.
An additional information multiplexer 193 embeds the input additional information from the input terminal 192 into the input image information from the input terminal 191, such that the additional information is visually inconspicuous. That is, this additional information multiplexer 193 segments the input image into given N-pixel square blocks and embeds the additional information in each block.
The image information in which the additional information is embedded by the additional information multiplexer 193 is printed on a printing medium by a printer 194. Note that this printer 194 is, e.g., an inkjet printer or laser printer capable of expressing a tone by using pseudo halftoning.
FIG. 20 is a block diagram showing the arrangement of a conventional image processing apparatus which extracts the embedded additional information from the output printed image from the image processing apparatus shown in FIG. 19. Referring to FIG. 20, the image information printed on the printing medium is read and converted into image data by an image scanner 201. This image data is input to an additional information separator 202.
The additional information separator 202 uses a known image processing method to detect an image region in which the additional information is embedded. A representative method of this detection is to detect the boundary between a non-image region and an image region by a density difference. After thus detecting the image region, the additional information separator 202 separates the embedded additional information from the region. This separated additional information is output from an output terminal 203.
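The density-difference detection described above can be sketched as follows (an illustrative sketch only; the paper-white level and the dark-pixel margin are assumed values):

```python
import numpy as np

def detect_image_region(scan, paper_white=250, margin=5):
    """Locate the bounding box of the printed image region in a scanned
    page by a density difference: a row or column is taken to belong to
    the image region if more than `margin` of its pixels are darker than
    the paper-white level. Returns (top, bottom, left, right) or None."""
    dark = scan < paper_white                     # True where ink is present
    rows = np.where(dark.sum(axis=1) > margin)[0]
    cols = np.where(dark.sum(axis=0) > margin)[0]
    if rows.size == 0 or cols.size == 0:
        return None                               # no image region found
    return rows[0], rows[-1], cols[0], cols[-1]
```

Note that a boundary row whose color matches the paper white contributes no dark pixels, so a test of this kind cannot detect such a boundary.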
Unfortunately, the above-mentioned conventional method has the following problems.
First, the method of detecting an image region by a density difference in an image cannot detect the boundary of an image region from some image information input from the input terminal 191. FIG. 21 shows an example of a printed image in which the boundary of an image region is unclear. As shown in FIG. 21, in a printed image in which the upper portion of an image region has substantially the same color as the paper color (e.g., white) of a printing medium, the aforementioned method of detecting an image region by a density difference cannot detect the upper boundary line.
Even in an image whose upper boundary line is obscure in this way, the additional information is inconspicuously embedded in the image information, so the image region must be input correctly. When trimming the printed image on an image scanner, however, the user cannot tell what range to trim.
In addition, the conventional method segments an input image into N-pixel square blocks and multiplexes additional information in each block. Therefore, the additional information separator 202 must detect the coordinates of each block with an error of no more than about a few pixels. If this error increases, the detection accuracy of the additional information significantly lowers, and this makes accurate restoration of the additional information difficult.
Furthermore, image data read by the image scanner 201 is influenced by various error factors. For example, even when a printed image which is set without any skew is read by the image scanner 201, error factors such as expansion or contraction of the printing medium and distortion caused by the driving system or optical system of the image scanner 201 are produced.
Accordingly, Japanese Patent Laid-Open No. 2-35869 has been proposed as a method of calculating the skew of an image read by an image scanner or facsimile apparatus. In this invention, a reference line is drawn beforehand in a predetermined portion of an original to be read, parallel to an edge of the original. When the original is read, the skew of this reference line is detected and corrected.
Also, Japanese Patent Laid-Open No. 5-207241 has been proposed as a method of detecting the skew and the expansion or contraction of an original and the degree of unevenness of paper feed. In this invention, reference marks are printed beforehand in two predetermined portions of an original to be read. When the original is read, the positions of these two reference marks are detected, and the distance between them is measured. In this manner, the skew and expansion or contraction of the original and the degree of unevenness of paper feed are detected.
These conventionally proposed methods as described above can correct a skew to a certain extent. However, they cannot estimate a specific position in an image region with an error of about a few pixels or less.
In the above-mentioned inventions, input image information is segmented into square blocks each having a certain size (e.g., N×N pixels). Additional information is embedded in the image by microscopically changing the texture in each block in accordance with the bit code of the additional information, and output as a printed image. This additional information can be restored by reading the printed image by an optical reader such as an image scanner, converting the read image into a digital image, and analyzing the frequency component of the texture of the image in each square block of N×N pixels.
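The per-block frequency analysis mentioned above might be sketched, in highly simplified form, as follows. The DFT coefficient index `freq` and the decision rule are assumptions for illustration; the actual frequency to examine depends on the texture generated at embedding time.

```python
import numpy as np

def extract_bit_from_block(block, freq=(0, 4)):
    """Illustrative decoder: classify an N x N halftone block by the
    magnitude of one 2-D DFT coefficient of its texture. A bit is judged
    present when that coefficient clearly dominates the average spectrum."""
    centered = block - block.mean()           # remove the DC (tone) component
    spectrum = np.abs(np.fft.fft2(centered))
    return 1 if spectrum[freq] > spectrum.mean() * 2 else 0
```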
Also, when an image in which additional information is embedded by the method as described above is printed and this printed image is read by using an optical reader such as an image scanner, nonlinear distortion is produced in the read image information owing to parts such as a lens and driving system. This distortion conventionally makes it impossible to correctly estimate the position of a square block in which the additional information is embedded in the image, by using only the four apex coordinates of an image region.
The present applicant, therefore, has proposed a method of adding a reference frame when an image is printed. This reference frame is formed around an image region and has a width of one pixel or more. On this frame, gaps of a few pixels are formed at predetermined intervals as image correction marks. After a printed image is read by an optical reader such as an image scanner, these image correction marks are detected from the reference frame, and the position of the read image and the like are corrected. After that, the additional information is restored. This corrects nonlinear distortion generated when an image is read by an optical reader such as an image scanner.
An image processing system previously proposed by the present applicant will be explained below. This image processing system includes an image processing apparatus for embedding additional information in an image and printing the image by adding the reference frame described above, and an image processing apparatus for restoring the embedded additional information from the printed image. FIG. 38 is a block diagram showing the arrangement of the previously proposed image processing apparatus which embeds additional information in an image and prints the image by adding the reference frame.
Referring to FIG. 38, image information D7 input from an input terminal 181 is converted into image information D8 having a size of H (vertical)×W (horizontal) pixels which is the resolution of printing by an image forming unit 183. The values of H and W are fixed to predetermined values beforehand or take values indicated by
      W = 2Qw
      H = 2Qh

(where w and h are the numbers of pixels in the horizontal and vertical directions, respectively, of the image information D7, and Q is a constant larger than the maximum value of read errors of an optical reader such as an image scanner or of errors produced by expansion or contraction of a printing medium; more specifically, a constant which is an integral multiple of N, the dimension of one side of a square block segmented in the image information D7) in order for the image processing apparatus which extracts additional information from a printed image to accurately specify these values. Note that any known method such as nearest neighbor interpolation or linear interpolation can be used for the conversion of image information in the image forming unit 183. The converted image information D8 is input to an additional information multiplexer 184.
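As a minimal sketch, the size conversion amounts to the following (the function name and the example values are hypothetical):

```python
def target_print_size(w, h, Q):
    """Return (W, H) = (2*Q*w, 2*Q*h). Choosing Q larger than the
    worst-case read error lets the extracting apparatus recover w and h
    exactly; Q is also taken as an integral multiple of the block size N."""
    return 2 * Q * w, 2 * Q * h
```

For example, with w = h = 20 and Q = 50, the printed image is 2,000 x 2,000 pixels.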
This additional information multiplexer 184 embeds in the image information D8 additional information x2 input from an input terminal 182. Image information D9 in which the additional information x2 is thus embedded by the additional information multiplexer 184 is supplied to a reference frame addition unit 185 where information concerning a reference frame for use in image correction is added to the image information. The obtained image information is output as image information D10. This image information D10 is printed on, e.g., a paper sheet as a printing medium by a printer 186, thereby obtaining a printed image 187.
FIG. 39 is a block diagram showing the arrangement of the previously proposed image processing apparatus for reading a printed image and extracting additional information. Referring to FIG. 39, the printed image 187 printed by the image processing apparatus shown in FIG. 38 is read by an image scanner 391 to obtain image information D21. This image information D21 is input to a block position detector 392. The block position detector 392 first obtains the coordinates of the four apexes of an image region which is a rectangular region in the image information D21, and then calculates a dimension W′ in the horizontal direction and a dimension H′ in the vertical direction of the image region from the distances between these apexes. Owing to optical distortion and the like during reading, the dimensions W′ and H′ of the image region actually read by the image scanner 391 contain errors a and b as indicated by
      W′ = 2Qw ± a
      H′ = 2Qh ± b
Since, however, Q is so set as to be larger than the maximum values of these errors a and b, W′ and H′ can be converted into W and H, respectively, by quantizing W′ and H′ by 2Q.
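This recovery by quantization can be sketched as follows (a minimal sketch; the function name is illustrative):

```python
def recover_input_size(W_read, H_read, Q):
    """Quantize the measured dimensions W', H' to the nearest multiple of
    2Q. Since the read errors a and b satisfy |a|, |b| < Q, rounding to
    the nearest multiple of 2Q recovers W and H exactly."""
    step = 2 * Q
    return round(W_read / step) * step, round(H_read / step) * step
```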
In addition, on the basis of the calculated W and H, the block position detector 392 detects the positions of the image correction marks formed on the reference frame, thereby specifying the position of a block in which the additional information x2 is embedded. An additional information separator 393 separates and restores this additional information x2 on the basis of the specified block position information. The restored additional information x2 is output from an output terminal 394.
Unfortunately, the above-mentioned conventional method has the limitation that the image dimensions W and H after size conversion must be integral multiples of 2Q. Also, the constant Q must be larger than the maximum value of read errors caused by optical distortion generated during reading by the image scanner or the like or caused by expansion or contraction of the printing medium.
For example, assume that an image having a size of 2,000 × 2,000 pixels after conversion is printed by a 600-dpi printer and read by a consumer image scanner having the same resolution of 600 dpi, and the additional information is restored from the read image. If the size of the image read by the image scanner contains errors of up to 50 pixels, the image is read as an image within the range of 1,950 to 2,050 pixels in both the vertical and horizontal directions. At this accuracy, the size of the image information to be printed can be adjusted only in steps of 100 pixels. That is, when a 600-dpi printer is used, the image size can be adjusted only in increments of about 4 mm. This significantly impairs the user's ease of layout editing during printing.
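The step size in this example can be checked with simple arithmetic (600 dpi and a 50-pixel maximum error are the figures from the example above):

```python
dpi = 600                       # printer and scanner resolution in the example
max_error_px = 50               # maximum read error Q in pixels
step_px = 2 * max_error_px      # allowed image sizes are multiples of 2Q
step_mm = step_px / dpi * 25.4  # one adjustment step in millimeters (~4.2 mm)
```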