The present invention relates to a method for the optical identification of objects in motion.
Throughout the following description and the following claims, the expression “optical identification” is used to indicate the acquisition and reading of coded information of an object (for example distance, volume, overall dimensions, or object identification data), for example through the acquisition and processing of a light signal diffused by the object itself. The term “coded information” is preferably used to indicate the whole of the identification data contained in an optical code. The term “optical code” is used to indicate any graphical representation having the function of storing said coded information.
A particular example of optical code consists of the linear or two-dimensional codes, wherein the information is coded through suitable combinations of elements with a predetermined shape, for example square, rectangular or hexagonal, of dark colour (usually black), separated by clear elements (spaces, usually white), such as barcodes, stacked codes, two-dimensional codes in general, colour codes, etc. The term “optical code” further comprises, more generally, other graphical patterns with an information coding function, including clear printed characters (letters, numbers, etc.) and special patterns (such as stamps, logos, signatures, fingerprints, etc.). The term “optical code” also comprises graphical representations which are detectable not only in the field of visible light but also in the wavelength range between infrared and ultraviolet.
For the sake of making the following explanation easier, explicit reference to linear and two-dimensional codes shall be made hereinafter.
Systems for conveying and sorting packs, luggage and, more generally, objects are commonly used in the field of transport and logistics. In these systems, the objects are placed on a moving conveyor belt and sorted on the basis of the reading of an optical code printed on a label associated with each object.
In the past, when there were only linear codes, the optical codes were read by scanning a laser light beam, emitted by a dedicated laser reader, across the optical code.
With the advent of two-dimensional codes, the use of digital cameras, typically using CCD/CMOS sensors, has become widespread. Such cameras allow greater flexibility of use. Indeed, they are capable of reading both traditional linear codes and two-dimensional codes, as well as other types of codes, besides offering additional functions such as OCR (optical character recognition).
A problem in object conveying and sorting systems is that of distinguishing objects that may even be very close to each other, so as to associate each object with the content of the corresponding optical code.
In those systems using laser readers, the problem of the correct association between object and respective code is solved, for example, by a system of the type described in EP 0 851 376 and EP 1 363 228. In such a system, the laser reader that performs the scanning for reading the optical code also measures the distance and the angular position of the optical code within the scan, thus providing the polar coordinates of the optical code with respect to the reader. From these data, the position of the optical code in space with respect to a fixed reference system is obtained, the position of the reader with respect to such fixed reference system being known. A photocell barrier or other external sensor provides an object presence signal. The temporal advance of the object after its entry into the field of view of the laser reader is obtained from a further signal provided by an encoder associated with the conveyor belt. From these data, the position of the object along the advancing direction is obtained. The association of the code with the respective object is made by comparing the positions of the optical code and of the objects along said advancing direction.
The Applicant has noted that the problem of the proper code-object association occurs in a particularly critical manner in systems using digital cameras. Indeed, digital cameras have a two-dimensional field of view that is typically much wider than that of laser readers. Accordingly, a condition wherein there are multiple objects and multiple optical codes within an image acquired by a digital camera may frequently occur.
The Applicant has noted that in systems that use digital cameras situations may occur in which, in order to carry out the proper association between optical code and respective object, it is necessary to have information about the distance of the objects with respect to the cameras.
In order to better understand this aspect, let us consider for example the situation shown in the annexed FIGS. 1A and 1B.
Such figures schematize a case wherein a camera T detects the presence of two objects K and K−1 within its field of view V and reads an optical code C at the inlet end of the field of view V with reference to the advancing direction A of objects K and K−1. FIG. 1A shows a possible situation wherein object K is taller than object K−1, so much so as to at least partly hide object K−1 right at the inlet end of the field of view V. FIG. 1B, on the other hand, shows a possible situation wherein object K has such a height as not to hide object K−1. The comparison between the two figures shows that information about the height of objects K and K−1 is necessary for determining which of the two objects K and K−1 should be associated with the optical code C. Indeed, in the case of FIG. 1A, the optical code C should be associated with object K since, even if object K−1 had an optical code at the inlet end of the field of view V, it would not be visible to camera T, being hidden by object K. On the contrary, in the case of FIG. 1B, the optical code C should be associated with object K−1.
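The occlusion test underlying this comparison may be sketched as follows. This is a minimal geometric sketch, not part of the system described: all coordinates, units (metres, in a vertical plane containing the camera and the advancing direction) and the function name are illustrative assumptions.

```python
def is_occluded(cam_x, cam_h, occ_x, occ_h, pt_x, pt_h):
    """True if the point (pt_x, pt_h) is hidden from a camera at
    (cam_x, cam_h) by an occluding object whose front face is at occ_x
    and whose top is at height occ_h."""
    # Parametric position of the occluder's vertical plane along the
    # sight line from the camera to the point.
    t = (occ_x - cam_x) / (pt_x - cam_x)
    line_h = cam_h + t * (pt_h - cam_h)
    # Hidden if the sight line passes below the occluder's top edge.
    return line_h < occ_h

# FIG. 1A-like case: camera 3 m above the belt; object K (front face
# 1 m along the axis, 2 m tall) hides the top corner of K-1 (2 m along
# the axis, 0.5 m tall), so a code read there should go to K.
# FIG. 1B-like case: a shorter K (1 m tall) does not hide it.
```

With the illustrative numbers above, `is_occluded(0.0, 3.0, 1.0, 2.0, 2.0, 0.5)` is true (FIG. 1A) while lowering K's height to 1.0 makes it false (FIG. 1B).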
The Applicant has found that a very simple and effective way for implementing a technique for measuring the distance in a system that uses digital cameras is to use a height detector (such as for example the one used in EP 0 851 376 and EP 1 363 228), comprising for example a photocell barrier.
An exemplary embodiment of such a system is shown in FIG. 2. This system, globally indicated with reference numeral 10, comprises a conveyor belt 1 which moves a plurality of objects 3 along a direction A with respect to a camera 2, each object 3 being provided with an optical code (not visible in the figure). Camera 2 frames a field of view 4 shaped as a pyramid. A processing unit 5, which is connected to the camera 2, is capable of decoding the optical codes associated with objects 3 when these are within the field of view 4. The processing unit 5 further receives a signal of arrival of objects 3, and their respective heights, from a presence/height sensor 6 arranged upstream of the camera 2 with reference to the feeding direction A of objects 3. Based also on the knowledge of the feeding speed, which is either assumed to be substantially constant or measured in real time by an encoder 7 associated with the conveyor belt 1, the processing unit 5 is capable of synchronising the image acquisition by camera 2 with the instant at which each detected object 3 travels through the field of view 4. It is therefore possible to associate an optical code having a predetermined position along the feeding direction A at a certain instant with an object 3 having the same position at the same instant, such association being made on the basis of the height signal provided by the presence/height sensor 6.
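The synchronisation just described may be sketched as follows for the two cases (constant belt speed, or speed derived from the encoder). All names and values are illustrative assumptions, not part of system 10:

```python
def trigger_delay_s(sensor_to_fov_m, belt_speed_m_s):
    # Constant-speed case: delay between the presence signal from
    # sensor 6 and the instant the object enters the field of view 4.
    return sensor_to_fov_m / belt_speed_m_s

def travelled_m(encoder_counts, metres_per_count):
    # Variable-speed case: displacement accumulated since the presence
    # signal, from the incremental counter of encoder 7.
    return encoder_counts * metres_per_count
```

For example, with the sensor 1.5 m upstream of the field of view and a belt speed of 0.5 m/s, the acquisition would be triggered 3 s after the presence signal.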
Throughout the present description and the following claims, the term “height” is used to refer to the distance from the camera through which the presence of objects and/or coded information is detected. The term “height” shall therefore comprise both a distance along a vertical direction (or along a direction having at least one component in the vertical direction), when said camera is arranged above the conveyor belt on which the objects are arranged so that the vertical projection thereof falls onto the conveyor belt, and a distance along a horizontal direction (or a direction having at least one component in the horizontal direction), when said camera is arranged on the side of, in front of or behind the conveyor belt, or above the latter but displaced laterally, forward or backward, so that the vertical projection thereof does not fall onto the conveyor belt. For this reason, the term “height” and the more generic term “distance” shall be used without distinction.
The information about object presence and object height may be provided by specific sensors (respectively presence sensor and height sensor) or by the height sensor only, which is in fact capable of also acting as a presence sensor.
In an alternative embodiment to that described above, the distance information is provided by a distance measuring device integrated with the camera and connected thereto at a geometrically defined position. In this case it is possible to use, for example, a laser reader of the type described in EP 0 851 376 and EP 1 363 228. As an alternative, a laser pointer may be used, the laser pointer being of the static beam type capable of measuring the flight time or the phase shift between the emitted and the reflected laser signal, such as for example the laser distance measuring device S80 marketed by Datalogic Automation.
According to a further alternative embodiment, the distance information is provided by a light pattern projector, for example a laser projector capable of producing a structured light beam, for example a pair of converging or diverging light figures or lines. The object distance may be obtained from the distance between said two lines or figures, measured in pixels on the image acquired by the camera, by using a suitable look-up table (obtained empirically or from a formula implementing the geometrical model of the deformation to which the light beam is subjected) stored in the processing unit. This operation may be carried out at the same time as the image acquisition by the camera.
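A look-up table of this kind may be used as sketched below. The calibration values are purely illustrative assumptions (an empirically obtained table would be used in practice); intermediate pixel gaps are resolved by linear interpolation:

```python
import bisect

# Hypothetical calibration: apparent pixel gap between the two
# projected lines vs. object-to-camera distance in mm.
GAPS_PX = [120, 160, 220, 300, 400]    # ascending, as bisect requires
DISTS_MM = [1300, 1100, 900, 700, 500]  # corresponding distances

def distance_from_gap(gap_px):
    # Locate the bracketing calibration points and interpolate linearly.
    i = min(max(bisect.bisect_left(GAPS_PX, gap_px), 1), len(GAPS_PX) - 1)
    g0, g1 = GAPS_PX[i - 1], GAPS_PX[i]
    d0, d1 = DISTS_MM[i - 1], DISTS_MM[i]
    return d0 + (d1 - d0) * (gap_px - g0) / (g1 - g0)
```

With the table above, a measured gap of 190 pixels would map to a distance of 1000 mm, halfway between the 160 px and 220 px calibration points.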
The projector may also be a single one for multiple cameras, provided that the position and the shape of the pattern projected at the various distances are known for the field of view of each camera, for example by using said look-up table.
A projector of the type described above may also allow information about the shape, position and footprint of the objects travelling through the field of view to be detected directly on the acquired image, by analysing the deformation of a dedicated reference reticule.
A further alternative embodiment provides for the use of a stereo camera, that is a particular type of camera provided with two or more lenses and a separate image sensor for each lens. Such camera is capable of simulating the human binocular vision and thus capturing three-dimensional images (this process is known as stereo photography). Stereo cameras may actually be used for producing stereoscopic views or three-dimensional images for movies or for producing images containing the object distance information (range imaging).
The Applicant has noted that while all the solutions described above are suitable for allowing a proper code-object association, they provide for the distance information to be obtained through the use of an additional device to be associated with the camera, or to be mounted within the camera. This clearly involves an increase in costs and a complication of the installation operations.
The Applicant has therefore considered the problem of finding further solutions adapted to implement, in systems that use digital cameras, a distance measurement technique that would not require the use of additional devices, so as to carry out a proper and effective code-object association without an increase in costs and/or installation burdens.
The Applicant has found that a solution to the above problem could be provided by the use of a TOF camera (where TOF is the acronym of “time-of-flight”) having such an optical resolution as to allow a reliable reading of the optical codes. In fact, TOF cameras are capable of providing an object distance information on the basis of the time elapsed between the emission of a light pulse and the reception of the signal backscattered by the object.
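The distance computation performed by a TOF camera reduces to the round-trip relation below; a minimal sketch (names are illustrative):

```python
C_M_S = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(elapsed_s):
    # The light pulse covers the camera-object distance twice
    # (emission outward, backscatter return), hence the factor 2.
    return C_M_S * elapsed_s / 2
```

An elapsed time of about 6.7 ns thus corresponds to an object roughly 1 m away, which illustrates the timing resolution such cameras require.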
However, the Applicant has noted that, to date, TOF cameras do not allow a reliable reading of optical codes due to their reduced optical resolution. Such a solution can therefore be implemented only when TOF cameras with an optical resolution sufficient for the purpose become available.
Another solution found by the Applicant provides for the application of a logo of known dimensions on the conveyor belt and the application of an identical logo on the objects. By comparing the apparent dimensions of the two logos in the images acquired by the camera with the actual dimensions of said logos, it is possible to deduce the distance between object and camera, without in this case requiring the magnification ratio of the camera to be known.
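Since the two logos are identical and the belt logo lies at a known camera-to-belt distance, the apparent sizes scale inversely with distance and the magnification cancels out. A minimal sketch under that pinhole-model assumption (names and values illustrative):

```python
def object_distance(belt_distance_mm, belt_logo_px, object_logo_px):
    # Identical logos: apparent size in pixels is inversely proportional
    # to distance, so the belt logo serves as the reference measurement.
    return belt_distance_mm * belt_logo_px / object_logo_px
```

For example, if the belt logo (2000 mm from the camera) spans 50 pixels and the object logo spans 100 pixels, the object surface is 1000 mm from the camera.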
The Applicant wanted to find a further way for solving the problems related to the implementation, in systems that use digital cameras, of a distance measurement technique which should not require the use of additional devices, and found, as a further valid solution, the invention described hereinafter.
Such invention relates to a method for the optical identification of objects in motion, comprising the following steps:
- acquiring, by at least one camera having, at least during such an acquisition, a respective predetermined magnification ratio, at least one image of a predetermined detection area, said at least one image containing at least one portion of at least one object travelling through the detection area along a predetermined advancing direction;
- determining the position of said at least one portion of said at least one object along the predetermined advancing direction with respect to a predetermined reference system;
- detecting, in said at least one image or in at least one subsequent image containing said at least one portion of at least one object, at least one coded information travelling through the detection area along the predetermined advancing direction, said detection being carried out, respectively, by said at least one camera or by a distinct camera having a predetermined relative position with respect to said at least one camera and, at least during the acquisition of said at least one image or of said at least one subsequent image, a respective predetermined magnification ratio;
- reading said at least one coded information;
- determining the position of at least one portion of said at least one coded information along the predetermined advancing direction within the image;
- detecting, by said at least one camera or said at least one distinct camera, at least one reference physical dimension belonging to at least one surface portion of said at least one object or of said at least one coded information;
- determining the distance of said at least one surface portion from said at least one camera and, if provided, from said at least one distinct camera on the basis of said at least one reference physical dimension and of the magnification ratio of the camera by which said at least one reference physical dimension has been detected;
- determining the position of said at least one portion of coded information along the predetermined advancing direction with respect to the predetermined reference system on the basis of said distance and of the position of said at least one portion of said at least one coded information along the predetermined advancing direction within the image;
- associating said at least one coded information with a respective object travelling through the detection area when said at least one coded information is detected, said association being made on the basis of said position of said at least one portion of said at least one object with respect to the predetermined reference system and of said position of said at least one portion of said at least one coded information with respect to the predetermined reference system.
Advantageously, the Applicant has found that the method of the present invention allows carrying out, in a system for the optical identification of objects in motion through digital cameras, the correct code-object association on the basis of a distance reading that only results from the analysis of the image acquired by the camera, that is without the need of using additional devices other than a conventional camera.
In particular, referring for simplicity of explanation to the case in which the optical identification system comprises a single camera, the Applicant has noted that, a reference physical dimension being known or determined (as described hereinafter) on a surface of the object or of the coded information, and the magnification ratio of the imaging sensor mounted in the camera being known, it is possible to determine the distance of said surface from said camera. By using this distance information in combination with the position of the coded information, along the advancing direction of the objects, within the image acquired by said camera, it is possible to determine the position of the coded information along the advancing direction with respect to a fixed reference system. It is therefore possible to proceed with assigning said coded information to the object that, at the time of the acquisition of the image containing the coded information, is in a corresponding position.
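The distance determination from a known physical dimension and the magnification ratio may be sketched as follows, assuming a pinhole/thin-lens model with the magnification expressed through focal length and sensor pixel pitch (all names and values are illustrative assumptions):

```python
def distance_mm(real_size_mm, size_px, focal_mm, pixel_pitch_mm):
    # Pinhole relation: image size = focal * real size / distance,
    # with the image size expressed as pixel count * pixel pitch.
    return focal_mm * real_size_mm / (size_px * pixel_pitch_mm)
```

For instance, a 100 mm reference dimension imaged over 320 pixels, with a 16 mm lens and a 5 µm pixel pitch, places the surface 1000 mm from the camera.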
If the optical identification system comprises more than one camera, once the relative position of these cameras in the fixed reference system and the respective magnification ratios are known, it is possible to determine the distance of the object from all the above cameras through the measurement technique described above.
According to the present invention, the image by which the presence of the coded information is detected may also differ from the image by which the object travelling is detected. Moreover, the camera by which the image containing the coded information is acquired may also be different from the one by which the image containing the travelling object is acquired.
If the detection of the travelling object and of the coded information is carried out through distinct cameras, all these cameras have a respective magnification ratio that may be constant or variable over time; in the latter case it is sufficient to know the value of said magnification ratio at the time of the acquisition of the respective image. It is therefore possible to determine the position of what is framed by the camera with respect to said fixed reference system.
If the magnification ratio of the camera(s) is variable, as in the case of an autofocus/zoom system, the value of such magnification ratio is determined by a proper processing unit of the identification system discussed herein (which may or may not be incorporated within the camera, but is in any case associated with it) on the basis of the focusing position of the autofocus/zoom system at the time of the acquisition.
According to the present invention, if the objects that travel through the detection area are arranged so as to be spaced from each other along said advancing direction, said reference physical dimension may be defined without distinction on the object or on the coded information. In this case, therefore, the determination of the object distance from the camera does not necessarily require the previous detection of a coded information.
On the other hand, if the objects are at least partly arranged side by side with reference to said advancing direction, said reference physical dimension is defined on the coded information.
The detection of said reference physical dimension may also take place before the object or coded information enters the detection area framed by the camera(s) of the identification system discussed herein.
In a first preferred embodiment of the method of the present invention, which may be carried out if the objects that travel through the detection area are arranged so as to be spaced from each other with reference to said advancing direction, the step of detecting said at least one reference physical dimension comprises the steps of:
- determining the displacement of said at least one object along said advancing direction;
- determining the maximum size of said at least one object along said advancing direction on the basis of said displacement.
In this case, therefore, the reference physical dimension is directly taken on the object and advantageously corresponds exactly to the maximum size of the object along the advancing direction.
If the advancing speed of the object is constant, said displacement may be determined using a presence sensor (for example a photocell). In that case, such displacement is obtained from the time interval between the moment at which the sensor indicates the beginning of the object's entry into an observation area and the moment at which the presence sensor indicates the end of the object's entry into said observation area. Such observation area may precede, coincide with, or at least partly overlap the detection area framed by the camera(s) of the optical identification system discussed herein.
If the advancing speed of the object is not constant, said displacement may be determined using, besides a presence sensor, an encoder provided with a proper incremental counter. In that case, the number of unit steps of the encoder is counted by the incremental counter starting from the moment at which the presence sensor indicates the passage of the object. Each counter increase in fact corresponds to a physical displacement of the object along the advancing direction.
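The two measurement variants above may be sketched as follows (function names and values are illustrative assumptions):

```python
def length_constant_speed_mm(t_entry_s, t_exit_s, speed_mm_s):
    # Constant belt speed: object size along the advancing direction
    # from the presence sensor's entry/exit instants.
    return (t_exit_s - t_entry_s) * speed_mm_s

def length_from_encoder_mm(count_entry, count_exit, mm_per_step):
    # Variable speed: encoder counts latched by the incremental counter
    # on the same two presence-sensor events.
    return (count_exit - count_entry) * mm_per_step
```

Either way, the result is the maximum size of the object along the advancing direction, which serves as the reference physical dimension of the first preferred embodiment.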
Preferably, the step of determining the maximum size of said at least one object along said advancing direction comprises the following steps:
- determining the position of a first significant contrast variation along said advancing direction within said at least one image;
- determining the number of pixels of said at least one image of said at least one camera occupied by said at least one object on the basis of the position of said first significant contrast variation.
Advantageously, the position of said first significant contrast variation may be determined on the basis of the object entry signal provided by said presence sensor, optionally combined with the signal provided by said encoder (if the object advancing speed is not constant).
In any case, once the number of pixels of the image occupied by the object has been determined, it is possible to determine, through the magnification ratio of the camera, the distance at which the object is with respect to the camera itself.
More preferably, the images are acquired so that they entirely contain said at least one object. In that case, the step of determining the maximum size of said at least one object along said advancing direction comprises the following steps:
- determining the position of a first significant contrast variation and of a last significant contrast variation along said advancing direction within said at least one image;
- determining the number of pixels of said at least one image occupied by said at least one object on the basis of the positions of said first significant contrast variation and said last significant contrast variation.
In this way, the determination of the maximum size of the object along the advancing direction is simpler and more immediate to carry out.
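The detection of the first and last significant contrast variations along a scan line can be sketched as follows (a simplified one-dimensional sketch; the threshold value and the representation of the scan line as a list of grey levels are illustrative assumptions):

```python
def occupied_pixels(scan_line, threshold):
    """Return (first, last, count): indices of the first and last
    significant contrast variations along a scan line, and the number
    of pixels between them; None if no variation exceeds the threshold."""
    edges = [i for i in range(1, len(scan_line))
             if abs(scan_line[i] - scan_line[i - 1]) >= threshold]
    if not edges:
        return None
    first, last = edges[0], edges[-1]
    return first, last, last - first
```

For a bright belt with a dark object spanning pixels 5 to 14, this yields edges at indices 5 and 15 and a 10-pixel extent, from which the distance follows via the magnification ratio as described above.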
In a different embodiment of the method of the present invention, said at least one reference physical dimension is defined by a known dimension of a logo or different graphical element applied onto the object.
In a second preferred embodiment of the method of the present invention, which does not necessarily require the objects that travel through the detection area to be arranged so as to be spaced from each other with reference to the advancing direction, the detection of said at least one reference physical dimension comprises the steps of determining the physical dimension of said at least one coded information along at least one characteristic direction, said at least one coded information having a predetermined optical resolution.
In this case, therefore, the reference physical dimension is taken on the coded information and advantageously corresponds to the length of the coded information along at least one predetermined direction.
Throughout the present description and the following claims, the expression “predetermined optical resolution” is used to indicate an optical resolution of known extent, given either by a single value corresponding to an expected resolution or by a range of values whose tolerance is such as to cause a positioning error of the coded information smaller than the minimum admissible distance between two adjacent objects.
As shall appear more clearly from the following description, in preferred embodiments of the method of the present invention said coded information is an optical code having a predetermined optical resolution and/or belonging to a predetermined symbology. In that case, said characteristic direction is the direction along which the elements (or information characters) of the optical code follow one another.
Advantageously, the Applicant has observed that by previously setting the value (or range of values) of said optical resolution and the magnification ratio of the imaging sensor mounted in the camera(s) it is possible to determine the distance of the coded information from the camera(s) on the basis of the comparison between the actual known dimensions and the dimensions in the image acquired by the camera(s).
Preferably, said predetermined optical resolution, if variable, varies by ±15%, more preferably by ±5% with respect to a predetermined reference value.
Preferably, the step of determining the physical dimension of said at least one coded information comprises the following steps:
- determining the position of a first significant contrast variation along said at least one characteristic direction;
- determining the number of pixels of said at least one image of said at least one camera or of said at least one distinct camera occupied by said coded information starting from the position of said first significant contrast variation and on the basis of said predetermined optical resolution.
Advantageously, by counting the number of pixels of the image occupied by said coded information it is possible to determine, through the magnification ratio of the camera, the distance at which the coded information is with respect to the camera itself.
More preferably, the images are acquired so that they entirely contain said at least one coded information. In that case, the step of determining the physical dimension of said at least one coded information preferably comprises the following steps:
- determining the position of a first significant contrast variation and of a last significant contrast variation along said at least one characteristic direction;
- determining the number of pixels of said at least one image of said at least one camera or of said at least one distinct camera occupied by said coded information starting from the positions of said first significant contrast variation and of said last significant contrast variation and on the basis of said predetermined optical resolution.
In this way, the determination of the physical dimension of the coded information along the advancing direction is simpler and more immediate to carry out.
Preferably, the step of determining the physical dimension of said at least one coded information comprises the step of measuring at least one element of said optical code along said at least one characteristic direction.
In particular, if said at least one coded information is a linear optical code (that is, a code whose elements extend along a single direction), the step of determining a physical dimension of said at least one coded information comprises the step of determining the dimension of said optical code along said single direction. On the other hand, if said at least one coded information is an optical code whose elements extend along at least two predetermined orthogonal directions (for example a two-dimensional code or other types of code), the step of determining a physical dimension of said at least one coded information comprises the step of determining the dimension of said optical code along at least one of said at least two predetermined orthogonal directions.
Advantageously, the Applicant has noted that the types and the optical resolutions of the optical codes used in object sorting systems are known and in any case limited in number. Therefore, the physical dimension of the optical code along a predetermined direction (in the case of linear codes) or along two predetermined orthogonal directions (in the case of two-dimensional codes or other types of code) is substantially constant (if the number of elements of the optical code is constant) or in any case a function of the number of information characters (if such number is variable). The physical dimensions of the optical code are therefore either known in advance (if the number of elements of the optical code is constant) or may be determined upon reading of the optical code itself (if the number of elements is variable). It is therefore possible to determine the physical dimension of the code along the advancing direction through the measurement of at least one element of such optical code along a predetermined characteristic direction.
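As an illustration of a symbology with a constant element count, an EAN-13 or UPC-A symbol is 95 modules wide (guard patterns included, quiet zones excluded), so its physical width follows directly from the module size implied by the predetermined optical resolution. The sketch below assumes this fixed-count case:

```python
# Module counts fixed by the symbology (EAN-13 and UPC-A are 95 modules
# wide, guard patterns included and quiet zones excluded).
MODULES_PER_SYMBOL = {"EAN-13": 95, "UPC-A": 95}

def code_width_mm(symbology, module_mm):
    # Physical width of the bar/space sequence: module count times the
    # module size given by the predetermined optical resolution.
    return MODULES_PER_SYMBOL[symbology] * module_mm
```

With a nominal 0.33 mm module, an EAN-13 symbol is therefore about 31.35 mm wide, a value that can be preset on the system independently of printing quality.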
In any case, an analysis of the type described above is especially advantageous since the physical dimensions of the optical code or part thereof do not depend on the printing quality and can therefore be previously set on the system.
If the reference physical dimension is taken on the optical code, the above-mentioned contrast variations are defined by the first and/or last element of the code along a predetermined direction, or by the opposite sides of an optional frame within which the code elements lie. If such a frame exists, it may be detected and measured once the coded information has been read and the characteristic direction of the optical code has been determined (through suitable software). The physical dimensions of the optical code may therefore be obtained from the distance between the first element and the last element of the optical code along a respective direction, or from the distance between the two quiet zones often provided before the first element and after the last element of the optical code along a predetermined direction, or also from the dimension of one element in a direction orthogonal to the development direction of the code.
Throughout the present description and the following claims, the expression “quiet zone” is used to indicate a white portion of predetermined width arranged before the beginning and after the end of the sequence of code elements and usually extended starting from such end elements up to the edge of the frame that includes said sequence of elements.
Preferably, said optical code belongs to a predetermined symbology and said at least one reference physical dimension is determined on the basis of said predetermined symbology.
The Applicant has advantageously observed that the number of characters of an optical code may be uniquely determined, or reduced to a limited number of possibilities, by the particular type of symbology used. Once the symbology is known, it is therefore possible to determine the physical dimension of the optical code.
In preferred embodiments of the present invention, the step of determining the position of the first significant contrast variation along said at least one predetermined direction is repeated several times along parallel paths. Such operation is advantageous for validating the results obtained.
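By way of illustration, the validation over parallel paths may be sketched as follows; the threshold, tolerance and majority criterion are assumptions for the example only:

```python
from statistics import median

def first_edge(row, threshold=60):
    """Index of the first significant contrast variation along one
    scan line, or None if no variation exceeds the threshold."""
    for i in range(1, len(row)):
        if abs(int(row[i]) - int(row[i - 1])) >= threshold:
            return i
    return None

def validated_first_edge(rows, threshold=60, tolerance=2):
    """Repeat the search along parallel scan lines and accept the result
    only if a majority of the positions agree within `tolerance` pixels."""
    positions = [p for p in (first_edge(r, threshold) for r in rows)
                 if p is not None]
    if not positions:
        return None
    m = median(positions)
    agreeing = [p for p in positions if abs(p - m) <= tolerance]
    if len(agreeing) >= len(positions) // 2 + 1:
        return round(m)
    return None
```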
Said step preferably further comprises travelling along said at least one predetermined direction in both opposite senses.
Preferably, the step of determining the position of said at least one portion of said at least one object comprises the steps of:
- detecting the instant at which said at least one object travels through a predetermined detection position;
- determining the displacement of said at least one object along the advancing direction.
Preferably, the step of determining the displacement of said at least one object comprises the steps of:
- comparing the position of said at least one portion of said at least one image in at least two acquisitions;
- calculating the displacement of said at least one object on the basis of said comparison.
The two above-mentioned acquisitions need not necessarily be consecutive.
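Purely as an illustrative sketch, the displacement (and, given timestamped acquisitions, the advancing speed) may be computed as follows; the pixel-to-millimetre scale factor is an assumed input for the example:

```python
def displacement_mm(pos1_px, pos2_px, mm_per_pixel):
    """Object displacement between two acquisitions (not necessarily
    consecutive), from the position of the same image portion."""
    return (pos2_px - pos1_px) * mm_per_pixel

def belt_speed(pos1_px, pos2_px, t1_s, t2_s, mm_per_pixel):
    """Advancing speed (mm/s) estimated from two timestamped acquisitions."""
    return displacement_mm(pos1_px, pos2_px, mm_per_pixel) / (t2_s - t1_s)
```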
As an alternative, a conventional encoder associated with the conveyor belt may be used, as in the system shown in FIG. 2. Of course, in those cases where the conveyor belt moves at a constant speed, it is not necessary to determine the displacement of the object along the advancing direction, since such displacement is known in advance.
In a preferred embodiment thereof, the method of the present invention further comprises the step of detecting the distance of said at least one object by a specific distance measuring device distinct from the camera.
Such solution is particularly advantageous in those cases where there is uncertainty in the code-object association due to the fact that the coded information is in the proximity of the object edges and/or objects arranged very close to each other travel through the field of view of the camera (such as, for example, in the case shown in FIGS. 1A and 1B). In that case, the code-object association is made on the basis of the comparison between the object distance information coming from said distance measuring device and the distance of the coded information from the camera. Said distance measuring device therefore acts as an auxiliary device for reducing the uncertainty in the code-object association.
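As an illustrative sketch only, such a distance-based association may be expressed as a nearest-match search; the tolerance value is an assumption for the example:

```python
def associate_code(code_distance_mm, object_distances_mm, tolerance_mm=30):
    """Return the index of the object whose measured distance best matches
    the distance of the coded information, or None if no object falls
    within the tolerance."""
    candidates = [(abs(d - code_distance_mm), i)
                  for i, d in enumerate(object_distances_mm)
                  if abs(d - code_distance_mm) <= tolerance_mm]
    if not candidates:
        return None
    candidates.sort()
    return candidates[0][1]
```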
In particularly preferred embodiments thereof, the method of the present invention further comprises the following steps:
- determining the position of said at least one object along two orthogonal directions within said at least one image;
- determining the footprint of said at least one object in a plane with respect to said predetermined reference system on the basis of the position of said at least one object along said two orthogonal directions and of said distance.
Even more preferably, the method of the present invention further comprises the step of determining the volume of said at least one object on the basis of said footprint and of said distance.
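By way of illustration, the footprint and volume computation may be sketched under a pinhole-camera model with the camera framing the object from above; the focal length in pixels, the camera mounting height and the box-shaped-object assumption are all hypothetical inputs for the example:

```python
def footprint_mm(width_px, length_px, distance_mm, focal_px):
    """Footprint of the top face under a pinhole-camera model:
    physical extent = pixel extent * distance / focal length (pixels)."""
    scale = distance_mm / focal_px
    return width_px * scale, length_px * scale

def volume_mm3(width_px, length_px, distance_mm, camera_height_mm, focal_px):
    """Volume assuming a box-shaped object on the belt: footprint area
    times object height (camera mounting height minus measured distance
    to the top face)."""
    w, l = footprint_mm(width_px, length_px, distance_mm, focal_px)
    height = camera_height_mm - distance_mm
    return w * l * height
```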
In this way, advantageously, all the object dimensions are determined and thus, the position of the object in the fixed reference system is uniquely determined. Accordingly, once the position of each camera with respect to the other ones and to the fixed reference system is known, also the relative position between object and any other cameras of the optical identification system is uniquely determined. Each camera may therefore communicate the information acquired to the other cameras, thus allowing the problem of the code-object association to be solved in any case, even in those cases in which this problem cannot be solved with a single camera and requires the intervention of the other cameras.
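The sharing of information among cameras relies on expressing positions in the common fixed reference system. A minimal sketch of such a transformation, assuming (for this example only) that each camera's pose is known as a 3x3 rotation matrix R and a position vector t:

```python
def camera_to_fixed(point_cam, R, t):
    """Transform a 3-D point from a camera's own frame into the fixed
    reference system: p_fixed = R * p_cam + t."""
    x, y, z = point_cam
    return tuple(R[i][0] * x + R[i][1] * y + R[i][2] * z + t[i]
                 for i in range(3))
```

A point expressed this way by one camera can be compared directly with observations from any other camera whose pose in the fixed system is known.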
In a particularly preferred embodiment of the present invention, at least two distinct cameras are used, wherein at least one camera is positioned so as to frame said at least one object from above the object and at least another camera is positioned so as to frame said at least one object from one of the sides thereof. Preferably, at least one of said cameras is positioned on at least one side of the conveyor belt on which the objects are arranged, so as to frame at least one between the right and left side faces of the objects. However, alternative embodiments are not excluded wherein said camera or at least one further camera is positioned in front or back position with respect to said conveyor belt, so as to frame at least one between the front and back faces of the objects.
In particularly preferred embodiments of the present invention, said at least one camera and, if present, said at least one distinct camera, is arranged so that its optical axis is perpendicular to said advancing direction.
In that case, preferably, said at least one image is acquired by said at least one camera when said at least one object is at said optical axis.
Advantageously, the choice of the particular position described above for the image acquisition allows reducing the possibility of error in the code-object association in those situations in which the objects are arranged as shown in FIGS. 1A and 1B, thanks to the minimisation of any relative darkening effect among the objects and to the maximisation of the probability that the framed object is the one of interest. Moreover, any possible error caused by perspective deformations is prevented.
Preferably, the method of the present invention further comprises the steps of:
- determining the distance of a plurality of points of said at least one surface portion;
- determining a possible rotation angle between the optical axis of said at least one camera and the surface portion on the basis of the distance of said plurality of points.
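As an illustrative sketch, with only two measured points a known baseline apart on the surface, the rotation angle may be derived from the difference of their distances; the two-point simplification is an assumption for this example:

```python
import math

def surface_tilt_deg(d1_mm, d2_mm, baseline_mm):
    """Rotation angle (degrees) of the surface with respect to a plane
    perpendicular to the optical axis, from the distances of two points
    separated by a known baseline on that surface."""
    return math.degrees(math.atan2(d2_mm - d1_mm, baseline_mm))
```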
It is therefore possible to determine the aforementioned reference physical dimension also in the presence of objects or coded information which are rotated about the optical axis of the camera and with the camera arranged at the side, front or back of the conveyor belt, in those cases wherein the coded information is on a face of the object other than the upper or lower one.
Preferably, the distance of said plurality of points is determined on the basis of the analysis of a perspective deformation of said at least one coded information.
In this way it is possible to determine the aforementioned reference physical dimension also in the presence of coded information arranged in a rotated position on one of the object faces.
More preferably, said possible rotation angle is determined on the basis of the analysis of said footprint.