Images are commonly thought of as two-dimensional representations of three-dimensional objects. An image may be defined by a two-dimensional grid, where each element of the grid is referred to as a pixel. Associated with each pixel is an intensity. The magnitude of pixel intensity is often expressed in terms of luminance. In a digital image, each pixel may be assigned a luminance value; in an 8-bit image, that value ranges from 0 to 255. Dark pixels have a low intensity, while light pixels have a high intensity. FIG. 1a illustrates an image where the intensity increases along the horizontal axis (from left to right). FIG. 1b is a graph of the intensity distribution of the image along the x-axis. This graph illustrates a change from low (black) to high (white) intensity at approximately pixel 24 (i.e., column 24). Although not shown, the intensity distribution along the y-axis would be constant.
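The grid-of-pixels model can be sketched in a few lines of Python. The ramp below is a hypothetical stand-in for the image of FIG. 1a; the dimensions and function names are illustrative, not taken from the disclosure:

```python
# A minimal sketch of an 8-bit digital image as a two-dimensional grid of
# pixel luminance values (0 = black, 255 = white). Intensity increases
# linearly from left to right along the horizontal (x) axis.
WIDTH, HEIGHT = 48, 8  # illustrative dimensions

def ramp_image(width=WIDTH, height=HEIGHT):
    """Return a row-major grid whose luminance rises linearly with x."""
    return [[round(x * 255 / (width - 1)) for x in range(width)]
            for y in range(height)]

image = ramp_image()
# Every row has the same intensity distribution along the x-axis,
# so the distribution along the y-axis is constant.
assert all(row == image[0] for row in image)
assert image[0][0] == 0 and image[0][-1] == 255
```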
Edge detection is an important part of image processing. Edge detection involves identifying the lines or boundaries of objects in an image. Thus, referring back to FIG. 1a, image processing devices use edge detection to identify a section along the horizontal line where a low intensity, dark region ends and a high intensity, white region begins.
Presently, one method for identifying edges is to utilize the first order derivative (i.e., the gradient) of the intensity distribution function of an image. Thus, referring to FIG. 2, the edge is at the pixels where there is a significant change (i.e., exceeding a threshold value) in the intensity from low to high. These edges are identified by searching the first order derivative expression for local directional maxima. Edge information obtained from the first order derivative of the intensity distribution function includes both a location and an orientation, also referred to as the angle of the edge.
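As a rough sketch of the first order derivative method, the gradient of a one-dimensional intensity profile (one image row) can be approximated by central differences, with edge pixels taken as those whose gradient magnitude exceeds a threshold. The profile and threshold values below are hypothetical; in one dimension, the sign of the gradient supplies the orientation (dark-to-light versus light-to-dark):

```python
def gradient(profile):
    """Central-difference approximation of the first order derivative."""
    g = [0.0] * len(profile)
    for i in range(1, len(profile) - 1):
        g[i] = (profile[i + 1] - profile[i - 1]) / 2.0
    return g

def first_order_edges(profile, threshold):
    """Pixels where the gradient magnitude exceeds the threshold."""
    return [i for i, v in enumerate(gradient(profile)) if abs(v) > threshold]

# Hypothetical blurred dark-to-light step (one image row).
profile = [0] * 20 + [64, 192] + [255] * 20
print(first_order_edges(profile, threshold=10.0))  # -> [19, 20, 21, 22]
```

Note that the blurred step yields a run of four adjacent edge pixels rather than a single one, which previews the thick-edge problem discussed next.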
Relying on the first order derivative to identify edges has several problems. In particular, utilizing the first order derivative tends to produce thick edges. For many reasons, some of which are beyond the scope of this disclosure, thick edges increase the complexity of downstream software that constructs contours (i.e., edge pixels assembled to define a line or curve), and are undesirable when attempting to identify objects in an image. For example, it is extremely difficult, and sometimes impossible, to find the center of a thick edge.
One solution is to apply edge-thinning post-processing to the first order derivative method described above. However, post-processing requires extra hardware, and the images cannot be processed in “real time”; that is, the pixels of the image cannot be processed as the image is captured by, for example, a camera. To use edge-thinning, the entire edge map must first be stored and then processed using the edge-thinning techniques.
Another option is to use the second order derivative of the intensity distribution function of the image to identify an edge. The edges are identified by solving the second order derivative expression for its zero-crossing values. FIG. 3 illustrates the zero-crossings of the second order derivative of the intensity distribution function, which demarcate the edges in the image illustrated in FIG. 1a. One benefit of this method is that the second order derivative produces a thin edge, in contrast to the first order derivative. For example, FIG. 2 illustrates an edge produced by the first order derivative stretching approximately from pixel 21 to pixel 29 in the horizontal direction.
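A sketch of the zero-crossing method on the same kind of hypothetical one-dimensional profile (values illustrative): the discrete second order derivative is computed, and an edge is reported wherever it changes sign.

```python
def second_derivative(profile):
    """Discrete second order derivative: f(i-1) - 2*f(i) + f(i+1)."""
    d2 = [0.0] * len(profile)
    for i in range(1, len(profile) - 1):
        d2[i] = profile[i - 1] - 2.0 * profile[i] + profile[i + 1]
    return d2

def zero_crossing_edges(profile):
    """Indices where the second derivative changes sign (thin edges).
    The simple sign-product test ignores exact-zero plateaus, which a
    robust detector would need to handle."""
    d2 = second_derivative(profile)
    return [i for i in range(1, len(d2)) if d2[i - 1] * d2[i] < 0]

profile = [0] * 20 + [64, 192] + [255] * 20
print(zero_crossing_edges(profile))  # -> [21]: a single-pixel, thin edge
```

The same blurred step that produced a four-pixel-thick edge under the first order derivative yields a single zero-crossing here.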
One problem with the second order derivative is that the zero-crossings are directionless. In other words, unlike the first order derivative, the zero-crossing values carry no inherent edge orientation or angle information. One solution is to combine the thin edges identified by the second order derivative with the direction information identified by the first order derivative. This is, of course, not a desirable solution because it requires significant extra processing. And because edge detection is often a pre-processing step when processing an image, additional processing is a drag on system resources. Another option is to further process the second order derivative using complex partial derivatives (e.g., a third order derivative). Again, this is an expensive post-processing step that reduces the practical applications of such an edge detection method. Furthermore, it requires a second post-processing step to finally determine edge orientation based on the direction of intensity change that results from taking the partial derivatives.
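The combination described above can be sketched as follows. In one dimension, the "orientation" reduces to the sign of the first order derivative at each zero-crossing; the extra gradient pass is precisely the additional processing the passage calls undesirable. All names and values here are illustrative:

```python
def combined_edges(profile):
    """Thin edge locations (zero-crossings of the second order derivative)
    paired with orientation recovered from the first order derivative."""
    n = len(profile)
    # First order derivative (central differences) supplies orientation.
    g = [0.0] + [(profile[i + 1] - profile[i - 1]) / 2.0
                 for i in range(1, n - 1)] + [0.0]
    # Second order derivative supplies thin edge locations.
    d2 = [0.0] + [profile[i - 1] - 2.0 * profile[i] + profile[i + 1]
                  for i in range(1, n - 1)] + [0.0]
    edges = []
    for i in range(1, n):
        if d2[i - 1] * d2[i] < 0:  # zero-crossing marks a thin edge
            direction = "dark-to-light" if g[i] > 0 else "light-to-dark"
            edges.append((i, direction))
    return edges

profile = [0] * 20 + [64, 192] + [255] * 20
print(combined_edges(profile))  # -> [(21, 'dark-to-light')]
```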
Accordingly, there is a need for an edge detection system that determines angle and orientation information from a second order derivative of an intensity distribution function without expensive post-processing.