Current techniques for determining such a set of matched attributes, such as matched pixels or matched objects, make use of 2-dimensional, hereafter abbreviated as 2D, image processing techniques to find these matched attributes between two or more images. This involves performing searches in the 2D domain to find corresponding pixels or pixel groups in these images. Known techniques are mostly based on block matching, which involves placing a fictitious block around a pixel in one of the images and searching the other images for the blocks that best correspond to it, using metrics that calculate a correspondence or matching score over the blocks around the pixels in the images.

These solutions are computationally intensive and are not robust when the images are generated by, for example, two cameras with a large baseline, meaning that there is a large distance between these cameras; such images or views will differ significantly. Most of the known methods furthermore require a certain overlap of the objects in the images between which a correspondence or match is sought. In addition, when correspondences are searched for at the object level rather than at the pixel level, state-of-the-art methods fail when the viewpoints are so different that the pixel content of these objects is totally different, even though the object itself is the same in both images. This can for instance be the case when one image displays a human head showing details of the face of a person, as the image is taken by a camera in front of this person, while another image displays the same human head but with details of the back of the head, such as the hair, as this image was taken by a camera at the back of this same person.
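As an illustration of the prior-art approach described above, the following is a minimal sketch of block matching with an exhaustive window search, using the sum of absolute differences (SAD) as the correspondence metric. The function name, block size, and search radius are illustrative assumptions, not part of any specific prior-art system; real implementations add refinements such as hierarchical search and subpixel interpolation.

```python
import numpy as np

def match_block(img_a, img_b, y, x, block=5, search=8):
    """Find in img_b the block best matching the block centred at (y, x) in img_a.

    Illustrative sketch only: exhaustive search with a SAD correspondence score.
    Returns the (y, x) centre of the best-matching block in img_b and its score.
    """
    h = block // 2
    ref = img_a[y - h:y + h + 1, x - h:x + h + 1].astype(np.int32)
    best_score, best_pos = None, None
    # Exhaustive search in a square window around the original position;
    # this is what makes block matching computationally intensive.
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cy, cx = y + dy, x + dx
            cand = img_b[cy - h:cy + h + 1, cx - h:cx + h + 1].astype(np.int32)
            if cand.shape != ref.shape:
                continue  # candidate block falls outside the image
            score = np.abs(ref - cand).sum()  # SAD correspondence score
            if best_score is None or score < best_score:
                best_score, best_pos = score, (cy, cx)
    return best_pos, best_score
```

Note how such a pixel-intensity comparison can only succeed when the two views show overlapping, similar-looking content; it has no way to match a face seen from the front against the same head seen from behind, which is exactly the failure mode discussed above.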
It is thus an object of the present invention to provide an improved method for determining matched attributes between a plurality of images, which solves the above-mentioned prior art problems.