Optical codes, such as one- or two-dimensional barcodes, are used in many applications, for example to identify objects, and/or their contents, being moved on a conveyor. The data acquired from these codes can be used within a tracking system to assure that, for example, proper manufacturing steps are applied to the object, or that the object is routed to a desired destination. Before these functions can be performed, however, it is first necessary to acquire an image from which the barcode can be identified, to identify the barcode, and to assign the located barcode to the correct object on the conveyor.
Various types of devices for reading optical codes may be used to acquire images of objects moving on a conveyor. Scanning devices, for example, may comprise an illumination beam that repeatedly sweeps across the conveyor surface and reflects back to an optical sensor. The beam may originate from a coherent light source (such as a laser or laser diode) or a non-coherent light source (such as a light-emitting diode (LED)), but in any event, the optical sensor collects the light reflected from the conveyor and the objects thereon and outputs an analog waveform, representative of the reflected light, that may be digitized. As these data accumulate, and as the conveyor continues to move past the scanner, the collected data constitute an image of the conveyor and the objects it carries. Moreover, because the light beam scans the conveyor, each bit of data from the reflected light can be associated with an angular position of the beam, and characteristics of the beam itself can be used to determine the distance between the scanner and the reflection point. Angle and distance define a vector, and therefore each point in the resulting image can be associated with dimensional features of the conveyor and the conveyed objects. In particular, the height of each point in the image above the conveyor surface can be determined.
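The angle-plus-distance geometry described above can be sketched as follows. This is a hypothetical illustration only: the scanner is assumed to be mounted at a known height directly above the belt, with the beam angle measured from vertical; the mounting height and coordinate convention are not taken from the source.

```python
import math

# Assumed mounting height of the scanner above the conveyor surface, in
# meters (illustrative value, not from the source document).
SCANNER_HEIGHT = 2.0

def reflection_point(beam_angle_rad, distance):
    """Convert one (angle, distance) sample into conveyor coordinates.

    Returns (lateral offset across the belt, height above the belt).
    """
    # Horizontal offset from the point directly beneath the scanner.
    x = distance * math.sin(beam_angle_rad)
    # Vertical drop from the scanner down to the reflection point.
    drop = distance * math.cos(beam_angle_rad)
    height = SCANNER_HEIGHT - drop
    return x, height

# A sample reflected straight down (angle 0) from 2.0 m away lies on the
# belt surface itself: zero lateral offset, zero height.
x, h = reflection_point(0.0, 2.0)
```

Each scan sample thus yields a point in space, and accumulating such points over successive sweeps produces the dimensioned image the paragraph describes.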
Optical code readers may also comprise camera devices that may in turn comprise an array of optical detectors and a uniform light source, such as an LED, used to illuminate the conveyor surface. The optical detectors may be charge-coupled devices (CCDs), complementary metal-oxide semiconductor (CMOS) devices, or other suitable devices, and they may be implemented in a one-dimensional or two-dimensional array. In operation, when light from the light source reflects from the conveyor surface or from the surface of an object carried by the conveyor, the array detects the reflected light to capture an image. Systems with one-dimensional arrays capture sequential linear cross-sections that collect to form a two-dimensional image, whereas two-dimensional arrays repeatedly capture two-dimensional images that change incrementally as the conveyor moves past the camera.
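The accumulation of sequential linear captures into a two-dimensional image can be illustrated with a minimal sketch; the function and data names are hypothetical, and each "line" stands in for one cross-section captured as the conveyor advances one step.

```python
# Minimal sketch (names and sizes are illustrative): successive
# one-dimensional captures are appended as rows of a growing 2-D image.

def accumulate_lines(line_source, num_lines):
    """Collect successive linear scans into a 2-D image (list of rows)."""
    image = []
    for _ in range(num_lines):
        # One linear cross-section per increment of conveyor travel.
        image.append(line_source())
    return image

# Stand-in line source that returns a fixed 4-pixel cross-section.
lines = iter([[0, 1, 1, 0]] * 3)
image = accumulate_lines(lambda: next(lines), 3)
# image is now a 3-row by 4-column two-dimensional array of pixel values
```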
Once the image is acquired, it may be processed to determine whether any optical codes are present in the image and, if so, to locate such codes. Various techniques for locating optical codes in images are known and should be understood in this art. A first general step in known routines is to identify any portions of the image that correspond to objects upon which the presence of optical codes would be expected. A known location technique first examines the image to locate object edges or corners; if sufficient edge or corner information exists and passes certain criteria, that area of the image is determined to correspond to an object. Once an object portion of the image is identified, after processing the entire two-dimensional image, a routine may be implemented to locate optical codes, if any, that may be present within that image portion. In general, such methods are based on previously known characteristics of the codes, such as geometric configurations and the presence of consistent light/dark transition areas. Location techniques therefore tend to include methods for identifying these predictable characteristics, and for traditional one-dimensional barcodes often include edge detection routines. As noted, these types of analytical techniques should be understood in this art, and the specific technique is not, in and of itself, part of the present invention.
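As a hedged illustration of the kind of light/dark-transition heuristic alluded to above (this is a generic sketch, not the patented technique, and the thresholds are assumed values): an image row containing many regular transitions between light and dark pixels is a candidate one-dimensional barcode region.

```python
# Generic edge-counting heuristic for candidate 1-D barcode regions.
# Threshold values below are illustrative assumptions.

def count_transitions(row, threshold=128):
    """Count light/dark transitions along one image row."""
    binary = [1 if p >= threshold else 0 for p in row]
    return sum(1 for a, b in zip(binary, binary[1:]) if a != b)

def looks_like_barcode(row, min_transitions=8):
    """Flag rows with enough transitions to suggest barcode bars."""
    return count_transitions(row) >= min_transitions

row = [255, 0, 255, 0, 255, 0, 255, 0, 255]  # alternating bars
# count_transitions(row) == 8, so the row is flagged as a candidate
```

Real routines operate on two-dimensional regions and apply further geometric checks, but the underlying reliance on predictable light/dark structure is the same.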
Once the object and the code are identified in the image, it is then necessary to identify the corresponding object in the conveyor tracking system so that the code data can be associated with the object. At the point in the process described above, the reader may have acquired an image that may include an optical code, and as described in more detail below, the reader or a controlling system has information describing at least part of the optical code's position in space relative to the reader. Assume, for example, that the code's position in the reader's reference space is known. Assume, also, that the reader's position in a reference space defined with respect to the conveyor is known. Under these conditions, a processor in the reader (or another system processor to which the reader outputs the image, the image's position in the reader space, and a time reference indicating when the image was acquired) can translate the code's position in the reader's space to a position in the conveyor's space. The time reference can be any reference that normalizes events, for example with respect to a clock or to the conveyor's movement, so that such events can be compared. Such references, for example time stamps or tachometer signals that describe the conveyor's position, should be understood. If this processor, or another system processor to which the first processor outputs the optical code information, knows (a) the optical code's position in the conveyor's space, (b) the time reference corresponding to the image in which the code was acquired, (c) the positions (in the conveyor space) of objects being carried by the conveyor, and (d) time references corresponding to the objects' positions, the processor can identify the object on the conveyor to which the optical code corresponds.
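The translation from the reader's reference space to the conveyor's reference space can be sketched as a rigid-body transform. The reader's pose values and the coordinate convention below (x along the conveyor's travel direction, yaw measured between the two x-axes) are illustrative assumptions, not values from the source.

```python
import math

# Assumed pose of the reader in conveyor space (illustrative values).
READER_POS = (3.0, 0.5)          # reader origin in conveyor space, meters
READER_YAW = math.radians(90.0)  # reader x-axis relative to conveyor x-axis

def reader_to_conveyor(px, py):
    """Rigid-body transform: rotate by the reader's yaw, then translate."""
    cx = READER_POS[0] + px * math.cos(READER_YAW) - py * math.sin(READER_YAW)
    cy = READER_POS[1] + px * math.sin(READER_YAW) + py * math.cos(READER_YAW)
    return cx, cy

# A code found at (1, 0) in reader space maps to (3.0, 1.5) in
# conveyor space under the assumed pose.
cx, cy = reader_to_conveyor(1.0, 0.0)
```

Combined with a time reference (for example, a tachometer count latched when the image was captured), this position can then be compared against the recorded positions of tracked objects.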
A problem that can occur with such techniques, however, is that they can require significant processing time. Thus, as the desired conveyor speed increases, and therefore as the system's need to quickly process images to locate optical codes correspondingly increases, the rate at which the system can process images to locate optical codes can become a limiting factor for the overall system speed.
One way to reduce processing time is to eliminate the need to locate the object's boundaries in the image, thus requiring the processor to search the image only for the optical code. In non-singulated systems (i.e. conveyor systems that operate under the assumption that objects on the conveyor can overlap in the conveyor's direction of travel), this generally requires that the conveyor system have sufficient dimensioning capability to determine a three-dimensional profile (in conveyor space) of the objects, that the system know the position of objects (e.g. by use of a photoeye at known distances from both the dimensioning device and the reader, in conjunction with a tachometer or timer), that the system know the positions of the reader and the dimensioning device in the conveyor space, and that the reader provide sufficient information about the optical code's position that the code can be correlated to those profiles. A known system utilizes a three-dimensional dimensioner disposed at a known position in the conveyor space, along with a reader having a one-dimensional optical sensor (e.g. comprising a linear CCD array) and that is disposed so that the reader's linear field of view extends transversely to the conveyor's direction, across the conveyor, at a known distance from the dimensioner (in the conveyor's travel direction). As the dimensioner accumulates three-dimensional profiles, the reader repeatedly sends its linear image data to the processor. Because of the speed at which the one-dimensional images must be acquired, the camera's depth of field is relatively short, and the system therefore utilizes an autofocusing lens, which can be controlled in response to the height data from the dimensioner. The one-dimensional images from the scanner accumulate to form a two-dimensional image, which the processor analyzes as a whole to locate optical codes.
Following that, based on (a) timing, (b) the reader's and the dimensioner's known positions in the conveyor space, (c) dimensioner information, and (d) the optical code's location in the overall image, the dimensioner processor can associate the optical code from the reader with a given object detected by the dimensioner. As should be understood in the art, height detection can be particularly important where one-dimensional sensors are used. Given the need to repeatedly acquire image data for accumulation, readers using one-dimensional sensors use short exposure times, which in turn can result in an open diaphragm and relatively short depth of field. Thus, upstream height information can be used to determine the proper position for an autofocusing lens.
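The use of upstream height data to set the autofocus lens can be sketched with the thin-lens equation: the camera-to-surface distance follows from the measured object height, and the equation yields the image distance the lens must provide. The mounting height and focal length are illustrative assumptions.

```python
# Assumed geometry (illustrative values, not from the source).
CAMERA_HEIGHT_MM = 2000.0  # camera mounted above the belt
FOCAL_LENGTH_MM = 50.0     # lens focal length

def image_distance_for(object_height_mm):
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for d_i."""
    d_o = CAMERA_HEIGHT_MM - object_height_mm  # camera-to-top-surface distance
    return 1.0 / (1.0 / FOCAL_LENGTH_MM - 1.0 / d_o)

# A taller box sits closer to the camera, so the required image distance
# (and hence the focus setting) increases.
flat = image_distance_for(0.0)    # belt surface, d_o = 2000 mm
tall = image_distance_for(500.0)  # 500 mm box,   d_o = 1500 mm
```

In practice the computed setting would be mapped to a lens actuator position, but the height-to-focus dependence shown here is the point of feeding dimensioner data forward to the reader.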
The need for three-dimensional profiling can be eliminated in singulated systems, i.e. conveyor systems that operate under the assumption that objects on the conveyor do not overlap in the conveyor's travel direction, because detection of the presence of an object on the conveyor establishes that the object is the only object on the conveyor during that time reference. For instance, assume a conveyor system has a photoeye directed across the conveyor, a tachometer that outputs pulses that correspond to the conveyor's speed and travel distance, and an optical code reader comprising a laser scanning device. The photoeye and the tachometer may output to the reader, or to another system processor that also receives the reader data. When the leading edge of an object breaks the photoeye's detection zone, the processor detects the event, creates a system record, and associates a tachometer count with a data item for the object's leading edge. That count increments with each subsequent tachometer pulse, so that as the leading edge moves with the conveyor further from the photoeye position, the record indicates, in terms of tachometer pulses, the distance between the object's leading edge and the photoeye. The processor similarly creates a data item in the record for the trailing edge, also accumulating tachometer pulses. Thus, at any time, the distances of the object's leading and trailing edges define the object's position on the conveyor with respect to the photoeye. A light curtain may still be used with one-dimensional CCD sensors, however, where needed to provide input data to adjust an auto-focusing lens for the reader.
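The leading/trailing-edge record described above can be sketched as follows; the class and field names are hypothetical, with tachometer counts latched when each edge breaks the photoeye and edge distances expressed in pulses.

```python
# Hypothetical tracking record for one object in a singulated system.

class ObjectRecord:
    def __init__(self, lead_tach):
        self.lead_tach = lead_tach  # count latched when the leading edge
                                    # breaks the photoeye's detection zone
        self.trail_tach = None      # latched when the trailing edge clears it

    def extent(self, current_tach):
        """Distances (in tachometer pulses) of both edges from the photoeye."""
        lead_dist = current_tach - self.lead_tach
        trail_dist = (current_tach - self.trail_tach
                      if self.trail_tach is not None else 0)
        return lead_dist, trail_dist

rec = ObjectRecord(lead_tach=100)  # leading edge breaks the photoeye
rec.trail_tach = 140               # trailing edge clears it 40 pulses later
lead, trail = rec.extent(current_tach=200)
# the leading edge is 100 pulses downstream of the photoeye, the
# trailing edge 60 pulses downstream
```

Multiplying pulse counts by the conveyor travel per pulse converts these distances to physical units, defining the object's position on the belt at any time.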
As noted above, a reader sensor's field of view may be angled with respect to the direction between the sensor and the conveyor belt. Thus, while the system knows the distances of all leading/trailing edge pairs of objects on the conveyor from the photoeye, the system cannot be sure that an optical code detected in a given field of view corresponds to a given object within the field of view unless the objects are spaced sufficiently far apart that it is not possible for two objects to simultaneously be within the same field of view. That is, knowledge of an optical code's position in the camera's image does not allow direct correlation of the code with an object within the field of view if more than one object in the field of view could be in the same position as the code. Thus, in singulated systems it is known to maintain at least a minimum separation between objects, based on an assumption of a maximum height of objects on the conveyor.
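The dependence of the required separation on the assumed maximum height can be sketched geometrically. Under a simplifying assumption (not from the source) that a code is seen along a ray inclined at a fixed angle from vertical, the code could lie either on the belt or on top of a box of unknown height up to the maximum, shifting its possible along-belt position by up to the height times the tangent of that angle.

```python
import math

def min_separation(max_height, view_angle_deg):
    """Worst-case along-belt ambiguity for a ray at the given angle
    from vertical: max_height * tan(angle)."""
    return max_height * math.tan(math.radians(view_angle_deg))

# With an assumed 1 m maximum box height and a 30-degree viewing angle,
# objects must be kept roughly 0.58 m apart to avoid ambiguity in
# assigning a code to an object.
gap = min_separation(1.0, 30.0)
```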
WO 2012/117283A1, the entire disclosure of which is incorporated by reference herein, discloses a system in which a reference dimension of the object (e.g. its length) is known or in which indicia of a predetermined size are placed on the objects, so that detection of the reference dimension in the image can be compared to the known actual reference dimension to thereby infer the object's distance from the camera. This distance can be translated to conveyor system space, and thereby correlated with a particular object, without need to directly measure the object's height.
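The inference underlying that approach can be sketched under a simple pinhole-camera assumption: an indicium of known physical size appears smaller in the image the farther the object is from the camera, so its apparent size yields the distance. The focal length and pixel pitch below are illustrative assumptions.

```python
# Assumed camera parameters (illustrative values, not from the source).
FOCAL_LENGTH_MM = 16.0
PIXEL_PITCH_MM = 0.005  # 5-micron pixels

def distance_from_apparent_size(known_size_mm, apparent_size_px):
    """Pinhole model: distance = focal_length * real_size / imaged_size."""
    imaged_size_mm = apparent_size_px * PIXEL_PITCH_MM
    return FOCAL_LENGTH_MM * known_size_mm / imaged_size_mm

# A 100 mm reference indicium imaged across 1600 pixels (8 mm on the
# sensor) places the object 200 mm from the camera.
d = distance_from_apparent_size(100.0, 1600.0)
```

The resulting camera-to-object distance can then be translated into conveyor space exactly as a directly measured height would be.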
U.S. application Ser. No. 13/872,031, the entirety of which is incorporated by reference herein for all purposes, discloses the use of a dimensioner in conjunction with scanner and imager data capture devices. It is known to detect information about the distance between an object and a two-dimensional imager using distance-measuring devices integrated or associated with the imager. It was also known that a light pattern projector may be used to project a pair of converging or diverging light figures or lines, with the object distance being obtained by measuring the distance in pixels between the lines or figures in the image acquired by the imager. Further, it was known to use a stereo camera, i.e., a camera having two or more lenses and a separate image sensor for each lens, to obtain a three-dimensional image.
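The stereo-camera approach noted above relies on the classic disparity relation: the same point viewed through two lenses separated by a known baseline shifts between the two images by an amount inversely proportional to its distance. The parameter values below are illustrative assumptions.

```python
# Assumed stereo parameters (illustrative values, not from the source).
FOCAL_LENGTH_PX = 800.0  # focal length expressed in pixel units
BASELINE_MM = 60.0       # separation between the two lenses

def depth_from_disparity(disparity_px):
    """Classic stereo relation: depth = focal_length * baseline / disparity."""
    return FOCAL_LENGTH_PX * BASELINE_MM / disparity_px

# A 24-pixel disparity corresponds to a depth of 2000 mm under the
# assumed parameters.
z = depth_from_disparity(24.0)
```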