In mathematical terms, an object in an image may be defined as a homogeneous agglomerate of pixels within which any variation among pixels, in terms of both intensity and colour, is attributable to noise only. An edge of an object is defined as a zone of contact separating that object from a different object. In digital images, a part of an edge may be a zone comprising several pixels. The more pixels such a zone covers, the less well defined the corresponding edge becomes.
Manipulation of objects in a digitized image is a tool commonly used in present-day image processing systems. In such systems, identification of an accurate edge of an object in an image may be necessary, e.g. for object selection operations.
In current edge-detection algorithms, gradient values for the pixels of objects are a key calculation. Current edge-detection processes may produce a sub-image of an image in which edges appear as thick “lines”: a central core of higher-gradient pixels surrounded on both sides by lower-gradient pixels. However, a well-defined central high-gradient element is found only for sharper, more distinct edges. As such, for the purpose of reducing an edge zone to a line (where a line in a raster configuration is represented by a continuous, uninterrupted sequence of single pixels), comparing the gradient values of pixels in an edge zone against a threshold may not provide satisfactory results.
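The thresholding limitation described above can be illustrated with a minimal sketch. The code below is hypothetical (not taken from any particular edge-detection system): it builds a one-dimensional intensity profile containing a gradual (blurred) edge between two homogeneous regions, computes gradient magnitudes by central differences, and thresholds them. The resulting mask marks the entire multi-pixel transition zone, not a single-pixel edge line, and the gradient profile shows the central higher-gradient core flanked by lower-gradient pixels.

```python
import numpy as np

# Hypothetical 1-D intensity profile: two homogeneous regions (10 and 110)
# joined by a gradual, blurred transition of several pixels.
profile = np.array([10, 10, 10, 30, 60, 90, 110, 110, 110], dtype=float)

# Gradient magnitude via central differences.
grad = np.abs(np.gradient(profile))

# Thresholding the gradient marks every pixel of the transition zone,
# producing a thick "line" rather than a single-pixel edge.
threshold = 5.0
edge_mask = grad > threshold

print("gradients: ", grad)
print("edge mask: ", edge_mask)
print("edge width:", int(edge_mask.sum()), "pixels")
```

Note that the gradient peaks at the centre of the transition and falls off on both sides, so raising the threshold thins the detected zone but risks breaking the edge where no distinct high-gradient core exists.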
There is a need for a system and method for detecting edges of objects that improves upon the prior art.