Edge detection has remained a fundamental task in computer vision since the early 1970s. The detection of edges is a critical preprocessing step for a variety of tasks, including object recognition, segmentation, and active contours. Traditional approaches to edge detection compute color gradient magnitudes followed by non-maximal suppression. Unfortunately, many visually salient edges, such as texture edges and illusory contours, do not correspond to color gradients. State-of-the-art approaches to edge detection use a variety of features as input, including brightness, color, and texture gradients computed over multiple scales. For top accuracy, globalization based on spectral clustering may also be performed.
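The traditional pipeline above can be sketched as follows. This is a minimal illustration, not any specific published detector: it uses central differences on a single channel, whereas real systems smooth the image first and pool gradients over color channels and scales.

```python
import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient magnitude and orientation via central differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def non_max_suppression(mag, theta):
    """Keep a pixel only if its magnitude is a local maximum along the
    gradient direction (quantized to 0/45/90/135 degrees)."""
    h, w = mag.shape
    out = np.zeros_like(mag)
    angle = np.rad2deg(theta) % 180
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:      # horizontal gradient
                n1, n2 = mag[i, j - 1], mag[i, j + 1]
            elif a < 67.5:                  # 45-degree gradient
                n1, n2 = mag[i - 1, j + 1], mag[i + 1, j - 1]
            elif a < 112.5:                 # vertical gradient
                n1, n2 = mag[i - 1, j], mag[i + 1, j]
            else:                           # 135-degree gradient
                n1, n2 = mag[i - 1, j - 1], mag[i + 1, j + 1]
            if mag[i, j] >= n1 and mag[i, j] >= n2:
                out[i, j] = mag[i, j]
    return out

# A vertical step edge: after suppression, response concentrates at the boundary.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
mag, theta = gradient_magnitude(img)
edges = non_max_suppression(mag, theta)
```

As the paragraph notes, a texture boundary between two regions of equal mean color produces no response in such a pipeline, which is what motivates the richer feature sets that follow.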
Since visually salient edges correspond to a variety of visual phenomena, finding a unified approach to edge detection is difficult. Motivated by this observation, a number of approaches have explored the use of learning techniques for edge detection. Each of these approaches takes an image patch and computes the likelihood that its center pixel contains an edge. These independent per-pixel edge predictions may then be combined using global reasoning.
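The patch-based formulation can be sketched as below. The scoring function here is a deliberate stand-in (local standard deviation, i.e. contrast); the learning-based approaches discussed replace it with a trained classifier applied to the patch's features, and a global-reasoning stage would then operate on the resulting likelihood map.

```python
import numpy as np

def score_patch(patch):
    # Stand-in for a learned classifier: treat high local contrast as
    # evidence that the center pixel lies on an edge.
    return float(patch.std())

def edge_likelihood_map(img, radius=2):
    """Score every pixel by extracting the (2*radius+1)^2 patch around it
    and estimating the edge likelihood of the CENTER pixel only."""
    h, w = img.shape
    out = np.zeros((h, w))
    padded = np.pad(img, radius, mode="reflect")
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            out[i, j] = score_patch(patch)
    return out

# Same vertical step edge: likelihood peaks near the boundary column.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
probs = edge_likelihood_map(img)
```

Each pixel is scored independently of the others, which is exactly why a separate globalization step can help: it enforces consistency among predictions that the per-patch classifier makes in isolation.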