This invention relates to systems and methods for authenticating products and, in particular, to systems and methods that use visual characteristics for verification.
Historically, the protection of items (a term by which we mean all physical objects of the real world) has been based on technologies that use features that are either rare in nature or difficult to duplicate, copy or clone. Authentication is the process of verifying specific overt, covert or forensic features added to an item in order to confirm that the item is genuine. Examples of such protection technologies are numerous and include special magnetic taggants [1], invisible inks, crystals or powders with infrared (IR) or ultraviolet (UV) properties [2], optically variable materials, holograms, physical paper watermarks [3], etc. The verification of item authenticity relies either on direct visual inspection for the presence of the added feature or on special devices (detectors) that are usually proprietary and are not intended for use in the public domain, for fear of disclosing the secrets behind the underlying technology and physical phenomena.
Among the major risk factors, the most important are threats related to the counterfeiting of items, the refilling of original ones, tampering, illegal trading, the production of look-alike items, illegal franchising, or any combination of the above.
The identification of items refers to the assignment of a special index to each item that can be used for its tracking and tracing. The assigned index is encoded and stored on the item in printed form as a visible barcode [4], an invisible digital watermark [5] or an overprinted sparse set of (usually yellow) dots [6], or in a specially designed storage device such as a magnetic stripe, an electronic smart card or an RFID tag [7]. Obviously, information stored in this way can be read and copied by any party, even if the data are encrypted.
The security drawbacks of these approaches in authentication and identification applications are well known. Although the above techniques are mostly proprietary and have been kept secret by security printing houses for years, they are still the most widely used. Moreover, some of these techniques are quite expensive for mass usage. Conversely, even the use of the most advanced cryptographic techniques in the above identification protocol does not help much, since the data can easily be copied without the need to be decrypted.
Additionally, new storage devices (electronic chips, RFID tags) are still quite expensive for large-scale applications. Sometimes it is also impossible to embed such a device into the item structure, or its presence is not acceptable for various legal, commercial, marketing, ecological and technical reasons.
Moreover, another security drawback of both technologies is their adhesive nature: the protection mechanisms considered above are added to the item as independent objects or features, sometimes changing the properties, look, design and value of the item. This has very serious complications that were not considered in early protection systems. First, the added feature has nothing to do with the actual item and its unique features and physical properties. Secondly, all these protection features can be reproduced relatively easily by modern means.
At the same time, it has been known for years that all objects and humans are unique owing to their possession of special features that are difficult to clone or copy. These unclonable features are the random microstructures of physical object surfaces and, for humans, biometrics (fingerprints, iris, etc.). The unclonable features are formed by nature and naturally integrated into the items. Despite their many advantages and their non-adhesive character in the above sense, unclonable features have only recently become a subject of intensive theoretical investigation, mostly thanks to the progress achieved in the design of cheap, high-resolution imaging devices.
For the sake of generality, we define physical unclonable features (also known as fingerprints in some contexts) as unique features carried by objects, products or documents. The main properties of such features are: (a) they can be extracted and evaluated in a simple way, but (b) they are hard to characterize and (c) in practice they cannot be copied (cloned). Unclonable features are based on the randomness created by nature, which is present in practically all physical structures observed under coherent or incoherent excitation (light) in transmissive or reflective modes. Sometimes this randomness can be man-made. Examples of unclonable features include the microstructures of paper, metal, plastic, fibers, speckle patterns, wood, organic materials, crystals and gemstones, complex molecular structures, etc. Accordingly, the main application of unclonable features is anti-counterfeiting for identification and authentication purposes.
Although the robustness-invariance aspects of identification/authentication problems have received a lot of attention, especially in computer vision, the issue of security remains an open and little-studied problem. This aspect will potentially have a great impact on security applications such as content, object and person authentication and identification, tamper evidence, synchronization, forensic analysis, and brand and art protection.
The design of efficient identification/authentication techniques based on unclonable features is a very challenging problem that must address the compromise between various conflicting requirements, which cover:
- robustness to distortions, or reliability, i.e., the ability of the identification/authentication function to produce accurate results under the legitimate distortions applied to the same data, including both signal-processing and desynchronization transformations;
- security, i.e., the inability of an attacker to reproduce the physical unclonable feature or to trick the identification/authentication process using leaked information about the codebook, decision rule, etc.; this also includes the one-way (non-invertibility) property, similar to that of hashing functions, i.e., that it is computationally expensive to find the original data given an index and a codebook construction, and the collision-free property, which means that, given an input and an identification/authentication function, it is computationally hard to find a second image that produces the same result outside the region of allowable distortions;
- complexity, i.e., the ability to perform the identification/authentication with the lowest achievable computational complexity without considerable loss of accuracy;
- memory storage, i.e., the memory needed to store the codebook or features used for the classification;
- universality, i.e., the practical aspects of optimal identification/authentication construction under the lack of statistics about the input source distribution and channel distortions, including geometrical desynchronization.
The above requirements are quite close to those of a robust perceptual one-way hash function [8,9]. However, within the scope of this invention we advocate a different, more general approach, in which robust perceptual hashing can be considered a particular case. This approach is based on secure, low-complexity multiple hypothesis testing in a secret domain defined by a key. It is also efficient in terms of memory storage requirements and universal in terms of priors about the source distribution.
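A minimal sketch of this idea, in Python with hypothetical names, casts identification as a multiple hypothesis test carried out in a secret domain defined by a key-seeded random projection (the Gaussian projection and the dimensions are illustrative assumptions, not a description of the invention itself):

```python
import numpy as np

def secret_projection(key: int, dim_in: int, dim_out: int) -> np.ndarray:
    """Key-seeded random projection defining the secret domain
    (Gaussian matrix chosen purely for illustration)."""
    rng = np.random.default_rng(key)
    return rng.standard_normal((dim_out, dim_in)) / np.sqrt(dim_out)

def identify(probe: np.ndarray, codebook: np.ndarray, key: int) -> int:
    """Multiple hypothesis test: return the index of the codebook
    entry closest to the probe in the secret projected domain."""
    W = secret_projection(key, probe.size, 32)
    p = W @ probe
    dists = np.linalg.norm(codebook @ W.T - p, axis=1)
    return int(np.argmin(dists))
```

Without knowledge of the key, an attacker does not know the domain in which the decision is taken, while the legitimate verifier needs to store only the codebook of enrolled features.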
Microstructures and Randomness
The possibility of identifying and authenticating products, documents, objects and articles by analyzing the difficult-to-duplicate, heterogeneous/inhomogeneous or “random” microstructure associated with the genuine item has been investigated in various contexts. Microstructures have a random character and are unique to each item. The random pattern of a microstructure is largely determined by the physical properties of the material (paper, metal, plastic, etc.) and the method of data acquisition. The methods of acquisition should be understood in the broad sense and include optical, acoustic, mechanical, electromagnetic and other principles. However, within the scope of this invention we focus on optical techniques, owing to the wide availability of cheap, high-performance cameras and their presence in most scanners, mobile phones, PDAs and webcams, which makes acquisition cheap and available to virtually any person. The optical techniques used for the acquisition of microstructures can be divided into groups depending on the use of coherent or incoherent light, reflected or transmitted light, 2D (planar) or 3D (volumetric) imaging, and the spectral band of the light (visible, IR or UV). In fact, it is possible to use any device able to reproduce the randomness.
Coherent light acquisition assumes the presence of a synchronized light source, which is generally achieved using lasers. It allows the formation of a so-called speckle pattern from the microstructure. However, the need for lasers requires the design of special devices, which seriously restricts their use in portable equipment. That is why we consider only the incoherent regime, which is achieved by observing the object under normal daylight conditions.
Reflected light imaging is typically used in simple setups, in which the object surface reflects the incident light and the imaging device registers the reflected light. Transmitted light imaging refers to the case in which the incident light is registered by the imaging device on the opposite side of the item. Technologically, both modes can be used in practice. However, transmitted light imaging can be used only with optically transparent or semi-transparent items, or using special wavelengths such as IR or x-ray.
3D acquisition is generally more informative than the 2D case. However, it requires more complex equipment, including coherent sources of excitation, as well as the transparency of items. Additionally, 3D imaging needs more memory for storage and more transmission bandwidth. For portable communication devices, MMS messaging is already a commonly supported standard. Therefore, to enable the fast introduction and distribution of the proposed technology, we focus on 2D imaging. It should be pointed out, however, that the proposed methods can also be successfully used with existing 3D imaging technologies.
Finally, the spectrum in which the microstructure image is acquired is also quite broad and includes the visible, IR, UV, millimeter and x-ray bands. In principle, modern CCD and CMOS arrays are tuned to have the best sensitivity in the visible band. However, some arrays simultaneously have acceptable sensitivity in the IR band. Therefore, both the visible and IR bands can be used at the same time.
Examples of natural randomness are numerous, and we consider the most typical and indicative ones. J. Brosow and E. Furugard described the use of random imperfections in or on the base materials of objects for authentication purposes [10]. A device that converts the imperfections associated with the object into a binary code is also described. A similar possibility of using a distinguishing physical or chemical characteristic of an article was disclosed in [11]. As the distinguishing characteristic, that invention proposed the micro-topography of the article surface. Multiview scanning under different illumination angles was described. The micro-topography characteristics were observed in the reflected and transmitted modes for different wavelengths and different registration methods for various materials, including metal, wood, organic materials, living cells, crystals and gemstones, etc. The targeted application concerned article authentication, where the feature vector was extracted from the image with the mean subtracted, or from images obtained under different viewing conditions. Redundancy reduction was applied to produce about 900 features, which are asymmetrically encrypted and encoded for reproduction in the form of a barcode on the article surface. At the authentication stage, the features extracted from the surface of the article to be authenticated are compared with those extracted from the printed barcode, and the decision about authenticity is deduced from a cross-correlation score. The disclosed method relies on intrinsic synchronization due to precise article placement and graphical designs. The authentication can be performed using a typical scanner modified to ensure illumination at different angles.
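The decision rule described above, comparing features read from the surface against features decoded from the printed barcode by means of a cross-correlation score, can be sketched as follows (the threshold and the 900-element feature length are illustrative assumptions):

```python
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation score between two feature vectors."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.mean(a * b))

def authenticate(surface_feats: np.ndarray,
                 barcode_feats: np.ndarray,
                 threshold: float = 0.7) -> bool:
    """Declare the article genuine if the cross-correlation score
    between the freshly extracted and the decoded features exceeds
    a (hypothetical) threshold."""
    return ncc(surface_feats, barcode_feats) >= threshold
```

A genuine article yields a score close to 1 even under moderate acquisition noise, while an unrelated surface yields a score near 0.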
A similar approach was also disclosed in an invention of Thales [12] and in a series of patents from Ingenia Technology [13-16], where coherent radiation and a microscope were used for microstructure registration in the visible, IR and UV bands. Besides a similar approach to article authentication, the above inventions also describe the identification of items, mostly referring to paper and cardboard articles. Similarly to the previous approach, about 500 features are extracted from the microstructure image using various techniques, including image downsampling, to form a digital signature that is stored in the database together with the entire image for further visual inspection. The identification is performed by an exhaustive search over the entire database using the cross-correlation of the extracted signatures with those stored in the database. As in the previous invention, the features are extracted without imposing any security constraints. One more invention describing the coherent acquisition of microstructure images was suggested by De La Rue [17]. A similar invention based on optical microscope acquisition was proposed in [18], which proposes acquiring an image of a small area of the article to be protected, such as a painting, sculpture, stamp, gem or specific document. The forensic analysis of added features, such as the grain structure of toner particles, was described in [19].
A technique for article authentication and identification was also described in the inventions patented by the Escher Group [20-26]. The last two inventions refer to three-dimensional structures, while the rest of the applications describe planar (two-dimensional) ones. The microstructure of the article surface is considered a unique link between the article and the database that can be used for various security applications, including tracking and tracing as well as document management systems. As in the previously considered inventions, the decisions about authenticity and identification are based on a cross-correlation score obtained from features deduced from the microstructure data using a dedicated scanning device. An added template and graphical design are used for synchronization. In the paper [27] by the same authors, a performance analysis was carried out under Gaussian assumptions about the statistics of the microstructures and noise, which is also far from realistic. For large databases, normalized cross-correlation is, moreover, not a feasible approach.
A further extension of these approaches was accomplished in the inventions of Fuji Xerox Co. and Alpvision, which mostly disclose article identification. The Fuji Xerox invention [28] describes the architecture and means for document verification based on non-reproducible features extracted and stored on a secure server at the enrollment stage. At the verification stage, the extracted features are matched with those stored on the server using normalized cross-correlation as the match score. The feature information is considered to be at least information indicating a scattering state of an image-forming material, for both reflected and transmitted light. A similar invention by Alpvision [29] investigates the microstructures of different surfaces at various scanning resolutions in incoherent light, with explicit synchronization for a given matching metric, which is again the cross-correlation score. The matching is applied either to compressed or downsampled images of the microstructures to accelerate the matching process. Additionally, in contrast to the previous inventions, where matching was performed over the entire database as an exhaustive search, the Alpvision invention advocates a tree-based search with tree clustering based on the downsampled images. A particular method of rotation compensation prior to cross-correlation is presented, based on angular unwarping of the image spectrum. The memory storage requirements remain quite high for the considered scanning resolutions, and the number of cross-correlations is too large, even in the considered tree-search approach, for a large number of entries. The storage of the scanned images in direct, non-protected compressed form also raises serious security concerns. A similar tree search based on hierarchical multiresolution data clustering was proposed in an application of the David Sarnoff Research Center, Inc. [30], and cross-correlation matching was advocated in [31].
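The idea of a clustered search over downsampled microstructure images can be illustrated by the following simplified two-level sketch, where a flat k-means-style clustering stands in for the tree structures described in the inventions; all names and parameters are hypothetical:

```python
import numpy as np

def downsample(img: np.ndarray, factor: int = 4) -> np.ndarray:
    """Coarse representation: block-average the microstructure image."""
    h, w = img.shape
    return img[:h // factor * factor, :w // factor * factor] \
        .reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def build_index(images, n_clusters: int = 4, seed: int = 0):
    """Two-level index: cluster the entries by their downsampled images
    (a simple stand-in for the tree clustering described above)."""
    coarse = np.array([downsample(im).ravel() for im in images])
    rng = np.random.default_rng(seed)
    centroids = coarse[rng.choice(len(coarse), n_clusters, replace=False)]
    for _ in range(10):  # a few Lloyd iterations
        labels = np.argmin(((coarse[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
        for k in range(n_clusters):
            if (labels == k).any():
                centroids[k] = coarse[labels == k].mean(axis=0)
    # final assignment against the final centroids
    labels = np.argmin(((coarse[:, None] - centroids[None]) ** 2).sum(-1), axis=1)
    return coarse, labels, centroids

def search(index, images, probe: np.ndarray) -> int:
    """Visit only the closest cluster, then rank its members by correlation,
    instead of cross-correlating against the entire database."""
    coarse, labels, centroids = index
    q = downsample(probe).ravel()
    k = int(np.argmin(((centroids - q) ** 2).sum(-1)))
    members = np.where(labels == k)[0]
    scores = [np.corrcoef(images[i].ravel(), probe.ravel())[0, 1] for i in members]
    return int(members[int(np.argmax(scores))])
```

Only one cluster is cross-correlated at full resolution, which reduces the number of comparisons but, as noted above, still scales poorly for very large databases.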
Keeping in mind the security concerns, techniques based on physical one-way functions represent a reasonable alternative to the above approaches. Physical one-way functions are a sort of physical analog of cryptosystems and attempt to mimic cryptographic hashing functions. For example, P. S. Ravikanth [32] describes such a one-way function for the authentication of three-dimensional probes in coherent light. The speckle image is decomposed into a multiresolution representation using the Gabor transform, from which robust features are extracted to form a 2400-bit hash.
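A toy version of such a Gabor-based robust hash might look as follows; this is a sketch of the general idea only, and the filter bank, block size and resulting bit length are illustrative assumptions that do not reproduce the 2400-bit construction of [32]:

```python
import numpy as np

def gabor_kernel(freq: float, theta: float, size: int = 9, sigma: float = 2.5):
    """Real-valued Gabor kernel at the given spatial frequency and orientation."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)

def conv2_same(img: np.ndarray, ker: np.ndarray) -> np.ndarray:
    """2-D linear convolution via FFT, cropped to the input size."""
    fh, fw = img.shape[0] + ker.shape[0] - 1, img.shape[1] + ker.shape[1] - 1
    full = np.fft.irfft2(np.fft.rfft2(img, s=(fh, fw)) *
                         np.fft.rfft2(ker, s=(fh, fw)), s=(fh, fw))
    kh, kw = ker.shape
    return full[kh // 2:kh // 2 + img.shape[0], kw // 2:kw // 2 + img.shape[1]]

def robust_hash(img: np.ndarray, n_orient: int = 4,
                freq: float = 0.25, block: int = 8) -> np.ndarray:
    """Binarize block-averaged Gabor response magnitudes against their
    median, yielding a fixed-length bit string."""
    bits = []
    for i in range(n_orient):
        resp = np.abs(conv2_same(img, gabor_kernel(freq, np.pi * i / n_orient)))
        h, w = resp.shape
        coarse = resp[:h // block * block, :w // block * block] \
            .reshape(h // block, block, w // block, block).mean(axis=(1, 3))
        bits.append((coarse > np.median(coarse)).ravel())
    return np.concatenate(bits)
```

Two acquisitions of the same surface differ in only a few bits (measured by Hamming distance), while unrelated surfaces disagree in roughly half of the bits, which is the behavior expected of a robust hash.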
Although well known and well regarded under laboratory conditions, natural-randomness identification/authentication has unfortunately not become a common technique in practice. Among the main reasons that have seriously restricted its usage is the need for a special acquisition device. Under these circumstances, all the advantages of unclonable microstructures have been reduced to almost proprietary usage. While acceptable for laboratory examinations, this is not a solution for ordinary people, for whom identification and authentication should be performed quickly, at any time, without special devices, extra training or cost. Additionally, the need for secure storage of the database and the disclosure of the publicly extracted features used for identification/authentication raise serious concerns about the security aspects of such systems.