Many applications that employ image processing rely on evaluating information about the content of a digital image or video. To provide this information, many techniques for automatically extracting content information have been developed. A common problem is finding primitive shapes, for example lines, circles, ellipses or curves, in an image frame. For example, many Advanced Driver Assistance Systems (ADAS), i.e. systems that support the driver of a vehicle for increased driving safety, such as systems for lane departure warning, collision warning, automatic parking or traffic sign recognition, require reliable and fast detection of shapes such as lines or circles. The detection of eyes is another example application of shape detection in digital image frames, for example in portraits or in systems for detecting vehicle driver drowsiness.
Due to the amount of image data and the unknown variety of information contained in it, the data may be difficult to process. One approach to reducing the data is feature extraction, i.e. a transformation of the image data into a reduced representation set of features. The extracted features should contain the information relevant to the desired task, for example detecting a shape such as a line or a circle based on this reduced representation instead of the original data.
An example of a feature extraction technique is the Hough transform. The algorithm for computing the Hough transform comprises a voting procedure carried out in a parameter space, from which object candidates are obtained as local maxima.
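The voting procedure can be illustrated with the classic Hough transform for lines, where each edge point votes for all lines passing through it in a (rho, theta) parameter space. The following is a minimal sketch assuming NumPy; the function name `hough_lines` and its parameters are illustrative, not taken from the source:

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180):
    """Classic Hough voting for lines: each edge point (x, y) votes for
    every line rho = x*cos(theta) + y*sin(theta) passing through it.
    Local maxima of the accumulator are line candidates."""
    diag = int(np.ceil(np.hypot(*shape)))          # bound on |rho|
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    # Accumulator over (rho, theta); rho is offset by diag to stay non-negative.
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    for x, y in edge_points:
        rhos = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        acc[rhos + diag, np.arange(n_theta)] += 1  # one vote per theta column
    return acc, thetas, diag
```

For six collinear points on the horizontal line y = 3, all six votes coincide in the accumulator cell for theta = 90 degrees and rho = 3, which then holds the global maximum of six votes.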
The Hough transform for circles (HTFC) is a method for detecting circular shapes in a digital image and may therefore be applied, for example, to detect eyes in portrait images or traffic signs in video-based driver assistance systems.
HTFC requires a pre-processing step for a digital gray-value input image I(x,y): the gradient of the image intensity function is calculated for each pixel (picture element), and the image I(x,y) is replaced by a directional gradient image containing gradient vectors g(x,y). Gradients may be calculated using two-dimensional gradient filter operators.
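This pre-processing step can be sketched as follows, using simple central differences as a minimal stand-in for a two-dimensional gradient operator such as Sobel (NumPy assumed; the function name `gradient_image` is illustrative):

```python
import numpy as np

def gradient_image(I):
    """Replace a gray-value image I(x, y) by its directional gradient
    field g(x, y), computed here with central differences."""
    gy, gx = np.gradient(I.astype(float))  # derivatives along y and x
    return gx, gy

# Tiny synthetic image: intensity increases linearly from left to right,
# so the gradient points in the +x direction everywhere.
I = np.tile(np.arange(8, dtype=float), (8, 1))
gx, gy = gradient_image(I)
```

In a real system the central differences would typically be replaced by a smoothing gradient operator (e.g. Sobel) to suppress noise before voting.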
The first step of the HTFC itself is the generation of 2-dimensional histograms of circle-center probability scores. Typically, one histogram has to be generated for every radius of interest.
For a single radius r, the following applies: since the gradient g points towards the center of the circle (or in the opposite direction), at every pixel position (x,y) the histogram entries at (x,y)+r·g/|g| and (x,y)−r·g/|g| are increased, i.e. for HTFC every gradient direction pixel value votes for two circle centers. If the gradient vectors belong to a circle, scores accumulate at its center, and the maxima in the histogram designate circle centers. In a second step, the histogram is searched for the maximum entries.
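The voting step for a single radius can be sketched as follows (a minimal illustration assuming NumPy; the function name `htfc_vote` and the magnitude threshold are hypothetical, not from the source):

```python
import numpy as np

def htfc_vote(gx, gy, r, shape, mag_thresh=1e-6):
    """Accumulate circle-center votes for one radius r: every pixel with a
    significant gradient g votes at the two points (x,y) +/- r*g/|g|."""
    H = np.zeros(shape, dtype=np.int32)
    ys, xs = np.nonzero(np.hypot(gx, gy) > mag_thresh)
    for y, x in zip(ys, xs):
        mag = np.hypot(gx[y, x], gy[y, x])
        nx, ny = gx[y, x] / mag, gy[y, x] / mag  # unit gradient direction
        for s in (+1, -1):                        # two votes per pixel
            cx = int(round(x + s * r * nx))
            cy = int(round(y + s * r * ny))
            if 0 <= cx < shape[1] and 0 <= cy < shape[0]:
                H[cy, cx] += 1
    return H
```

For gradient vectors sampled on an actual circle, one of the two votes per pixel lands at the circle center, so the center cell accumulates the histogram maximum while the opposite votes scatter over a ring of radius 2r.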
The implementation of the Hough transform, and especially of the HTFC, typically requires considerable buffer space, since each calculated two-dimensional histogram usually has the same dimensions as the image. For HTFC, one 2D histogram per circle radius of interest is required. The calculation of the HTFC for an image frame may also require considerable processing time, since the sequential read-modify-write accesses may limit the speed of operation.
Usually, the processing of a complete image frame is performed in two passes. In the first pass, the effect of each single input pixel on a 2r×2r region of the histogram is rendered; the histogram entries reach their final value only when all relevant pixels have been processed. After the histogram calculation is finished, the maxima in the histogram have to be searched in a second pass.
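The second pass can be sketched as a scan of the finished histogram for local maxima above a vote threshold (NumPy assumed; the function name `find_circle_centers` and the `min_votes` parameter are illustrative):

```python
import numpy as np

def find_circle_centers(H, min_votes):
    """Second pass: report every histogram cell that exceeds min_votes
    and is the maximum of its 3x3 neighborhood (plateaus may yield
    adjacent duplicate peaks in this simple version)."""
    peaks = []
    Hp = np.pad(H, 1, mode="constant")  # zero border simplifies edge handling
    for y in range(H.shape[0]):
        for x in range(H.shape[1]):
            v = H[y, x]
            # Hp[y:y+3, x:x+3] is the 3x3 neighborhood of (y, x) in H.
            if v >= min_votes and v == Hp[y:y + 3, x:x + 3].max():
                peaks.append((x, y, v))
    return peaks
```

This pass only starts after the first pass has completed, since any histogram cell may still change until the last pixel has voted.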