Some image processing applications generate edges that define objects in an image. An edge is a boundary between two dissimilar regions in an image. In some cases, replacing the pixels of an image with the edges of the objects in the image may reduce the amount of data needed to represent those objects. Such a replacement may be useful in applications such as computer vision.
Conventional approaches to detecting edges of objects in images include determining, at each pixel, first derivatives of the image brightness in both the horizontal and vertical directions, as well as second derivatives. The second derivatives may indicate the location of an edge, while the first derivatives may indicate the direction of an edge. In some implementations, the image may first be smoothed using a smoothing filter to reduce artifacts in the detected edges caused by noise in the image.
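The conventional derivative-based approach described above can be sketched as follows. This is an illustrative example only, not a description of any particular implementation: it assumes a Gaussian-like smoothing kernel, Sobel operators for the first derivatives, and a discrete Laplacian for the second derivatives, all of which are common but not mandated choices. The function names (`filter2d`, `detect_edges`) are hypothetical.

```python
import numpy as np

def filter2d(img, kernel):
    """Apply a small kernel to an image by sliding-window correlation,
    with edge-replicated padding so the output matches the input size."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)), mode="edge")
    out = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Smoothing filter: reduces noise before differentiation.
GAUSS = np.array([[1, 2, 1],
                  [2, 4, 2],
                  [1, 2, 1]], dtype=float) / 16.0

# Sobel operators: first derivatives in the horizontal and vertical
# directions, which indicate the direction (and strength) of an edge.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

# Laplacian: sum of second derivatives, whose zero crossings
# indicate the location of an edge.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def detect_edges(img):
    smoothed = filter2d(img, GAUSS)
    gx = filter2d(smoothed, SOBEL_X)
    gy = filter2d(smoothed, SOBEL_Y)
    magnitude = np.hypot(gx, gy)    # first-derivative edge strength
    direction = np.arctan2(gy, gx)  # first-derivative edge direction
    laplacian = filter2d(smoothed, LAPLACIAN)  # second derivatives
    return magnitude, direction, laplacian
```

Applied to an image containing a vertical step between two regions of different brightness, the gradient magnitude peaks at the boundary and the Laplacian changes sign across it, consistent with the roles of the first and second derivatives described above.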
The above-described conventional approaches to detecting edges of objects in images are inaccurate when there are significant amounts of noise in an image. For example, in the presence of noise, the conventional approaches may produce edges that have poor connectivity and accuracy and/or false edges, i.e., edges that are not actually part of any object in the image.
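The susceptibility to noise can be illustrated with a minimal sketch: applying a discrete Laplacian (second derivatives) to a featureless image corrupted only by additive noise still yields many zero crossings, each of which a derivative-based detector would report as an edge location even though the image contains no objects. The kernel and noise model here are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# A featureless image (uniform brightness, no objects and hence no
# true edges), corrupted by additive noise.
flat = np.full((32, 32), 0.5)
noisy = flat + rng.normal(scale=0.1, size=flat.shape)

# Discrete Laplacian (sum of second derivatives) via neighbor sums.
lap = (np.roll(noisy, 1, 0) + np.roll(noisy, -1, 0)
       + np.roll(noisy, 1, 1) + np.roll(noisy, -1, 1) - 4 * noisy)

# Count horizontal zero crossings of the second derivative; a
# derivative-based detector would report each as an edge location,
# so every one of them is a false edge caused purely by noise.
crossings = np.count_nonzero(lap[:, :-1] * lap[:, 1:] < 0)
print(crossings)
```

Because the noise dominates the (nonexistent) image structure, the count of zero crossings is large, illustrating the false edges described above.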