Automatic recognition and identification of objects, and the embedding of information in objects for this purpose, are well-known problems in electronic imaging. Applications include industrial inspection and quality control, shipping and transportation, and security and counterfeit detection. The latter category, also referred to as object authentication, is particularly important to any industry dealing with valuable or precision-made objects.
Common methods of embedding information that can be used to confirm the authenticity of an object involve signing, tagging, or otherwise marking the objects of interest. For automated identification, a simple 1D or 2D (such as Quick Response or QR) barcode may be attached to, stamped on, or engraved into the object. In another example, radio frequency identification (RFID) tags may be attached to or embedded in the objects. In both cases, the added tags are encoded with identifying information that is transmitted to the recipients by some other means and can be decoded by well-known techniques to verify the identity of the sender. This is an important distinction: strictly speaking, the above-mentioned techniques do not verify the authenticity of the object itself. The tags could in principle be placed on inauthentic objects, or be forged by anyone familiar with barcode or RFID technology, especially because the presence of such tags is easily detectable. Moreover, most tagging techniques require at least temporary modification of the object, and the tags may be perceived as extraneous additions that alter the object's appearance and aesthetics, which may be undesirable given the size or value of the object.
For objects such as printed documents, banknotes, checks, etc., watermarking techniques can be used to embed authenticating information. This approach requires special watermarked paper, which increases cost. Because the watermarks themselves are visible, they can become a target for counterfeiting attempts. Additional disadvantages of this approach are that watermarks carry a very limited amount of information and cannot be changed quickly.
Digital watermarking and steganography overcome the visibility and rigidity issues of embedded information: the embedded information can be decoded from the electronic file itself, or after scanning the document, using a computer program. However, digital watermarking and steganography have several limitations. These methods apply only to digital images and digital data, so the information must be extracted by scanning printed documents or analyzing electronic files; they cannot be used on other physical objects. The amount of embedded information is typically limited to 32 or 64 bits. Extracting the information requires sophisticated computational algorithms running on fast computers. Furthermore, if the digital images or documents are resized or otherwise edited before printing, the embedded information can be distorted and consequently become inaccessible. Digital watermarking and steganography methods are discussed in the literature and in textbooks, for example in Digital Watermarking and Steganography by Cox, Miller, Bloom, Fridrich, and Kalker, Morgan Kaufmann, MA, 2008.
Other methods of identification and authentication rely on precise optical characterization of the object. Many non-contact, non-destructive methods have been suggested for optical scanning and recognition of three-dimensional (3D) objects. In particular, methods relating to the present invention employ patterned illumination that is projected onto the object; the unique interaction of the illumination pattern with the object is then analyzed to determine the nature of the object present. In one example, “Three-dimensional object recognition by Fourier transform profilometry” by Esteve-Taboada et al., Appl. Opt. 38, 4760-4765 (1999), describes a method for recognizing three-dimensional objects that combines the techniques of Fourier transform profilometry (FTP) and the joint transform correlator (JTC). In the FTP method, a periodic grating is projected onto a surface or object using a projector, and its image (the reference image) is detected by a camera. The recorded image is subjected to Fourier analysis to determine the depth profile of the object.
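The core of the FTP analysis step can be illustrated numerically. The following sketch, with purely illustrative values, shows how a surface-induced phase shift on a projected carrier fringe is recovered from the Fourier component at the carrier frequency; it is a minimal 1D analogue, not the full profilometry pipeline.

```python
import cmath
import math

# Minimal 1D sketch of the Fourier step in fringe-projection profilometry:
# a projected carrier cos(2*pi*k*n/N + phi) is phase-shifted by the surface,
# and the phase is recovered from the DFT bin at the carrier frequency k.
N = 64          # samples along one scan line (illustrative)
k = 8           # carrier frequency, cycles per line (illustrative)
phi = 0.7       # surface-induced phase shift to recover (radians)

signal = [math.cos(2 * math.pi * k * n / N + phi) for n in range(N)]

# Evaluate the single DFT bin at the carrier frequency.
bin_k = sum(signal[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N))

recovered_phi = cmath.phase(bin_k)
print(round(recovered_phi, 6))  # close to 0.7
```

In the full method this phase, unwrapped across the image, is converted to a depth profile via the known projector-camera geometry.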
In the JTC setup, the reference image is sent to a spatial light modulator (SLM) placed inside a Fourier optical processor. The optical Fourier transform of the content displayed on the SLM appears at the output plane of the processor, where it is detected by a second digital camera. For recognition, the reference image is presented on the SLM, while the object to be tested is placed into the same FTP setup used to record the reference image. The image of the test object is then also sent to the SLM inside the JTC, alongside the reference image. As a result, the JTC outputs the cross-correlation of the reference object and the test object, which can be analyzed for recognition purposes by detecting correlation peaks at the output. Although useful, this technique has serious shortcomings when considered for the authentication problem: increased processing speed comes at the expense of increased hardware complexity; possession of the reference object is required; the properties of the object shape are not taken into account in designing the best projected pattern, which is restricted to a periodic grating; the method is not extensible to colored objects; the method cannot easily be extended to visual (human) authentication; and the method is not easily extended to a sequence of objects.
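What the JTC produces optically is, in essence, the cross-correlation of the reference and test images. The toy sketch below computes that same cross-correlation digitally for two small 1D binary patterns (illustrative values) and locates the correlation peak, whose position and height indicate a match and its offset.

```python
# Digital sketch of the quantity a joint transform correlator produces
# optically: the cross-correlation of a reference and a test pattern,
# with a peak whose location and height indicate a match.
reference = [0, 1, 1, 0, 1, 0, 0, 0]
test      = [0, 0, 0, 1, 1, 0, 1, 0]  # same pattern shifted right by 2

def cross_correlation(a, b):
    """Full linear cross-correlation of two equal-length sequences."""
    n = len(a)
    return [sum(a[i] * b[i + lag] for i in range(n) if 0 <= i + lag < n)
            for lag in range(-(n - 1), n)]

corr = cross_correlation(reference, test)
lags = list(range(-(len(reference) - 1), len(reference)))
peak_lag = lags[corr.index(max(corr))]
print(peak_lag)  # 2: the test pattern matches the reference shifted by 2
```

In the optical system this correlation appears as bright peaks at the output plane, so recognition reduces to peak detection rather than digital computation.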
U.S. Publication No. 2003/0231788 (Yuhkin et al.) describes techniques for high-speed observation and recognition of an object that is within, or passes through, a designated area, using 3D image data and a variety of 3D image capture techniques. The image capture techniques can include, but are not limited to, structured illumination. The method is based on the generation of feature vectors, which must be compared against a database for recognition.
A method termed inverse fringe projection is described by Bothe et al. in the paper “Compact 3D-Camera” (Proc. SPIE vol. 4778, 48-59, 2002). The image of the patterned illumination reflected from the object is recorded by a digital camera. This image is mathematically inverted such that, when projected back at the object from the position of capture, it reflects from the object to re-create the original illumination pattern at the site of the original projector, provided that the original object is present in its original position and orientation. The authors describe the use of such a system in manufacturing defect detection and quality control. Unlike previous methods, the inverse fringe projection method takes into account information about the shape of the reference object. The authors also describe its use in compensating for the distortions of a projected image when projecting onto non-standard surfaces, such as brick walls or corners in public spaces.
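The inversion idea can be illustrated with a deliberately simplified 1D model: treat the object as a bijective map from projector pixels to image pixels, pre-warp the pattern through that map, and check that reflection from the same object reproduces the regular grid while a different object does not. The warp maps and pattern below are illustrative assumptions, not the authors' formulation.

```python
# Toy 1D illustration of inverse fringe projection. The object maps each
# projected pixel p to image pixel warp[p] (a bijection in this sketch).
# Pre-warping the pattern through the forward map makes its reflection
# from the *same* object reproduce the regular grid; a different object
# does not, which is what flags a non-matching object.
N = 16
grid = [1 if n in (0, 1, 4, 9) else 0 for n in range(N)]  # aperiodic pattern
warp = [(3 * p) % N for p in range(N)]                    # toy object warp

def reflect(pattern, w):
    """Image formed when `pattern` is projected onto the object with map w."""
    image = [0] * N
    for p in range(N):
        image[w[p]] = pattern[p]
    return image

# Inverse pattern: sample the target grid through the forward warp.
inverse = [grid[warp[p]] for p in range(N)]

assert reflect(inverse, warp) == grid     # original object: pattern restored
other = [(5 * p) % N for p in range(N)]   # a different object
print(reflect(inverse, other) == grid)    # False: no match
```

The real method performs the analogous inversion on a dense 2D fringe image using the calibrated projector-camera geometry rather than an explicit pixel permutation.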
A similar technique is described in “Inverse Moiré” by Jacques Harthong and Axel Becker, SPIE Vol. 3098, 1997, as a method of moiré metrology in which the shape of the object is measured by projecting a specific grid, the inverse moiré, computed from knowledge of the object shape. This allows small deformations from a known mean shape to be analyzed with simple fringe processing. It contrasts with the standard moiré metrology approach, wherein a pattern of parallel straight lines is projected onto the object surface and the resulting pattern is analyzed using well-known, though complex, fringe analysis techniques.
Structured light patterns are widely used for shape reconstruction, as described, for example, in “A state of the art in structured light patterns for surface profilometry” by Salvi, Fernandez, Pribanic, and Llado, Pattern Recognition 43 (2010) 2666-2680. Shape reconstruction using structured light is considered one of the most reliable techniques for recovering object surfaces. To accomplish this goal, a calibrated projector-camera pair is used: a light pattern is projected onto the scene and imaged by the camera. Correspondences between the projected and recovered patterns are found and used to extract 3D surface information. The projected pattern creates an illusion of texture on the surface of the object, thereby increasing the number of correspondences. The patterns are chosen so as to uniquely codify each pixel position in the image and, consequently, on the object surface.
A variety of structured light patterns enabling discrete and continuous codification for still and moving objects have been proposed. The attributes of the patterns employed for 3D surface reconstruction include the number of projected patterns; the pixel depth, which refers to the color and luminance levels of the projected pattern; the periodicity of the pattern set; and others. Typical patterns consist of stripes (black-and-white or colored), sinusoidal gratings, luminance gradients with periodic or fixed spatial or temporal structure, and others. Importantly, in all these cases the patterns are used to obtain correspondences between object surface locations and pixels in the captured images; the depth map of the object surface is subsequently reconstructed from the pattern deformations using ray tracing and triangulation techniques. Unlike inverse fringe projection techniques, these patterns are not adapted to reflect the 3D properties of a particular static or moving object, but are used as a means to reconstruct depth maps of object surfaces.
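Once a correspondence has been decoded, the triangulation step is simple geometry. The following sketch assumes an idealized 2D pinhole model (camera at the origin, projector offset by a baseline, both looking along +z, all values illustrative) and intersects the camera and projector rays for one decoded correspondence.

```python
# Minimal 2D triangulation sketch for one decoded structured-light
# correspondence: camera pixel x_c paired with projector column x_p.
# Idealized pinhole geometry; camera at (0, 0), projector at (baseline, 0).
def triangulate(x_c, x_p, f_cam, f_proj, baseline):
    """Intersect the camera ray (x_c/f_cam, 1) from the origin with the
    projector ray (x_p/f_proj, 1) from (baseline, 0). Returns (x, z)."""
    denom = x_c / f_cam - x_p / f_proj
    if denom == 0:
        raise ValueError("parallel rays: no intersection")
    z = baseline / denom
    return (z * x_c / f_cam, z)

# Synthetic check: a surface point at (0.5, 2.0) with f = 1 and baseline = 1
# projects to x_c = 0.25 on the camera and x_p = -0.25 on the projector.
x, z = triangulate(0.25, -0.25, 1.0, 1.0, 1.0)
print(x, z)  # 0.5 2.0
```

Repeating this intersection over every codified pixel yields the depth map described above; real systems additionally handle lens distortion and full 3D calibration.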
While these approaches take 3D surface information into consideration when designing projection patterns, they cannot be directly applied to authentication problems based on encoding information using the object surface, that is, embedding information into a surface and retrieving it as a means of authentication for a variety of products, including printed documents. Therefore, there is a need for methods to authenticate an object based on an automatic, noninvasive examination of its features and embedded information that can be concealed and modified as required.