In the field of machine vision, extensive work has been carried out in researching the interpretation of images acquired from an image source such as a video or other electronic camera. In many hitherto known machine vision systems, operation has concentrated on the processing of images acquired from a source without any interaction between the processor and the source. Thus, the system has operated in an essentially passive manner, with source parameters such as camera aperture being adjusted independently of the processing on the basis of, say, the average light intensity of a scene.
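The passive exposure rule described above can be sketched as follows. This is a minimal illustration, not any particular camera's control law; the target value and function names are invented for the example. The point is that the correction is driven by mean scene brightness alone, with no feedback from the processing stage.

```python
# Hypothetical sketch of a passive exposure rule: the source corrects
# itself from the mean brightness of the scene, independently of what
# the image-processing stage actually needs.

TARGET_MEAN = 128.0  # assumed mid-grey target for an 8-bit image


def passive_gain(pixels):
    """Return the multiplicative gain that would bring the mean
    intensity of `pixels` to TARGET_MEAN."""
    mean = sum(pixels) / len(pixels)
    return TARGET_MEAN / mean if mean > 0 else 1.0


# A dim scene and a bright scene receive opposite corrections,
# each chosen without reference to the downstream interpretation.
dim = [20, 30, 25, 25]          # mean 25   -> gain > 1 (open aperture)
bright = [240, 250, 235, 251]   # mean ~244 -> gain < 1 (close aperture)
```

Under such a rule, two images of the same scene taken under different illumination are normalised to the same average, yet may differ greatly in the detail they preserve.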
In many circumstances, adequate information can be obtained from a passive source by moving the viewing position and acquiring further images representing different views of a scene. Indeed, multiple views of a scene are always required if three-dimensional information about the scene is to be obtained. However, in adopting an essentially passive approach to image acquisition, information about a scene may be lost in an acquired image, creating many problems which must then be resolved by image processing in order to obtain valid information from the image.
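The need for multiple views can be made concrete with the standard pinhole-stereo relation, in which depth follows from the disparity between two rectified views as Z = f·B/d. This is a hedged sketch of that textbook relation, not part of the system described here; the focal length, baseline, and disparity values below are illustrative.

```python
# Illustrative stereo relation: a single view fixes only the ray through
# a scene point; the disparity between two views fixes its depth.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth in metres of a point seen at `disparity_px` pixels of
    horizontal offset between two rectified views whose optical
    centres are `baseline_m` metres apart."""
    if disparity_px <= 0:
        raise ValueError("point not matched in both views")
    return focal_px * baseline_m / disparity_px


# Example: 700 px focal length, 10 cm baseline, 35 px disparity.
z = depth_from_disparity(focal_px=700.0, baseline_m=0.1, disparity_px=35.0)
# z = 700 * 0.1 / 35 = 2.0 metres
```

The guard clause reflects the practical difficulty noted in the text: a feature lost in one of the acquired images yields no disparity, and hence no depth, however much processing is applied afterwards.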
A passive source will not necessarily provide optimal images for interpretation by the system. This is because, for example, scene illumination varies widely between that of a sunny day, when the scene will contain high contrasts, and that of a moonlit night. Typically, video image sources are arranged to respond to, say, the average illumination of a scene or to the illumination at a point in the scene, and will thus supply very different images representing the same scene under these two extremes of illumination. Similarly, poor images may be acquired where objects in a scene have widely varying reflectance. Some objects may have highly specular aspects (i.e. reflections of the light source illuminating the object) and others may be highly diffuse in nature (i.e. exhibiting substantially no specular reflections). Whilst most scenes and most objects fall between these extremes, there may still be problems with, for example, specular reflections being interpreted as edges where in fact there are none, and edges or other features not being detected when they do in fact exist.
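The false-edge problem described above can be demonstrated with a naive intensity-step edge detector. This is a deliberately simplified sketch with invented pixel values: a specular highlight produces intensity gradients as strong as a true object boundary, so a detector that thresholds the gradient alone cannot distinguish the two.

```python
# Simplified 1-D edge detector: report every position where the
# intensity step between adjacent pixels exceeds a threshold.

def edge_positions(row, threshold=50):
    """Indices i where |row[i] - row[i-1]| exceeds `threshold`."""
    return [i for i in range(1, len(row))
            if abs(row[i] - row[i - 1]) > threshold]


# A matte surface with one real boundary at index 6:
matte = [40, 42, 41, 43, 44, 42, 180, 181]

# The same surface with a specular highlight at indices 2-3: the
# highlight's rise and fall are reported as two spurious edges.
shiny = [40, 42, 250, 252, 44, 42, 180, 181]
```

Once such false edges are present in the acquired image, the processing stage has no way, from the intensities alone, of telling them apart from genuine boundaries; this is the difficulty which the purely software-based approach discussed below attempts, with limited success, to resolve.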
In order to overcome these difficulties, image processing software has been developed which makes various assumptions about the scene represented by an acquired image. One reason why this approach has been adopted is the belief, commonly held by those in the art of machine vision, that any viewing mechanism capable of being implemented can be simulated entirely in software. Whilst it is indeed possible to implement many aspects of observation by way of software applied to images acquired from an essentially passive source, this approach is limited in that it is difficult to remove false information, such as specular edges, or to restore missing information, such as undetected edges, in such an acquired image. This problem stems from the fact that once a poor image has been acquired, it is difficult to transform it into a good image from which scene information can be extracted.