Pixel-level image processing systems generally must rely on noisy sensor data to extract spatial features of detected images. Unfortunately, extracting spatial features usually requires operations that tend to amplify high-frequency noise. A commonly used technique for minimizing noise is image smoothing, performed before any spatial operations. Image smoothing, however, can cause object characteristics to spread beyond the actual object boundaries, blurring or distorting the image. As a result, image smoothing often yields poor performance in the recognition and classification of detected images.
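This trade-off can be illustrated with a minimal one-dimensional sketch (not part of the original disclosure; all names and parameters here are hypothetical). A finite-difference operator, a simple spatial feature extractor, responds strongly to sample-to-sample noise, while a moving-average smoother suppresses that noise at the cost of spreading a step edge over roughly the filter width:

```python
# Hypothetical 1-D illustration: a step edge plus noise. Differentiation
# amplifies the noise; box smoothing suppresses it but blurs the edge.
import random

random.seed(0)

# Step edge: ~0 for the first half, ~10 for the second, plus small noise.
signal = [(0.0 if i < 10 else 10.0) + random.uniform(-0.5, 0.5)
          for i in range(20)]

def gradient(s):
    """First difference -- a simple high-frequency spatial operator."""
    return [s[i + 1] - s[i] for i in range(len(s) - 1)]

def box_smooth(s, radius=2):
    """Moving average over a (2 * radius + 1)-sample window."""
    out = []
    for i in range(len(s)):
        window = s[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

raw_grad = gradient(signal)
smooth_grad = gradient(box_smooth(signal))

# Away from the edge, the raw gradient is dominated by noise; the
# pre-smoothed gradient is much flatter there ...
noise_raw = max(abs(g) for g in raw_grad[:5])
noise_smooth = max(abs(g) for g in smooth_grad[:5])

# ... but the edge response, a single sharp spike in the raw gradient,
# is spread across several samples after smoothing.
edge_width_raw = sum(1 for g in raw_grad if abs(g) > 1.0)
edge_width_smooth = sum(1 for g in smooth_grad if abs(g) > 1.0)
```

Here `noise_smooth` comes out well below `noise_raw`, while `edge_width_smooth` exceeds `edge_width_raw`: the smoothing that suppressed the noise also spread the object boundary, which is exactly the blurring described above.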
One method of improving the results of image smoothing is to identify image discontinuities detected by the sensor and use the discontinuities as boundaries for the image smoothing operations. This approach has been used in the prior art for computations in early vision modules. Image edges have been used, for example, to establish boundaries for depth or motion computations.
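A crude sketch of this boundary-limited smoothing, again in one dimension and not drawn from the original disclosure, is to detect discontinuities by thresholding the local difference, partition the signal at those points, and smooth each segment independently so that values never bleed across a detected boundary (here each segment is simply replaced by its mean; a practical system would use a windowed smoother clipped at the boundary):

```python
# Hypothetical edge-bounded smoothing of a 1-D signal: detected
# discontinuities delimit segments, and each segment is smoothed
# independently so the edge itself is not blurred.

def detect_edges(s, threshold):
    """Indices i such that |s[i] - s[i-1]| exceeds the threshold."""
    return [i + 1 for i in range(len(s) - 1)
            if abs(s[i + 1] - s[i]) > threshold]

def bounded_smooth(s, threshold):
    """Smooth each segment between detected edges; crude smoother: mean."""
    bounds = [0] + detect_edges(s, threshold) + [len(s)]
    out = []
    for start, stop in zip(bounds, bounds[1:]):
        segment = s[start:stop]
        mean = sum(segment) / len(segment)
        out.extend(mean for _ in segment)
    return out

# Noisy step edge: smoothing flattens the noise within each region,
# but the jump between regions stays one sample wide.
noisy = [0.0, 1.0, 0.0, 1.0, 10.0, 11.0, 10.0, 11.0]
smoothed = bounded_smooth(noisy, threshold=5.0)
```

With this input the detected boundary sits between samples 3 and 4, so `smoothed` is flat on each side of it while the full step height is preserved at the boundary, in contrast to unbounded smoothing, which would spread the step across neighboring samples.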
It is believed that human vision relies on parallel neural processing of generally noisy and ambiguous information from each of a multiplicity of sensors. Normal human visual perception, however, is neither noisy nor ambiguous. This result is believed to be achieved by a neural process that fuses the different descriptions, such as edge and depth information, derived from one or more sensors. Emulation of this fusion process is being investigated as a means of improving the image recognition performance of electronic image processing systems.