Applications involving multiple views of the same scene, such as stereoscopic or 3-D imaging, or applications involving multiple versions of originally identical images, such as two different scans of the same film negative, suffer from geometric differences and color differences between corresponding images. Stereoscopic imaging, or real 3D, requires a minimum of two pictures simulating our two eyes, a left image and a right image. Geometric differences can be caused by parallax in the case of stereo images and by cropping, zoom, rotation or other geometric transforms in the case of film scans. Color differences are caused, for example, by non-calibrated cameras, non-calibrated film scanners, automatic exposure settings, automatic white balancing, or even physical light effects in the scene. Color difference compensation is often the first step in image or video signal processing of multiple-view or stereoscopic pictures, as subsequent steps, such as disparity estimation or data compression, benefit from reduced color differences.

One approach for the compensation of color differences between images is color mapping, also called tone mapping, which applies a color transformation. Color mapping has the task of remapping the color coordinates of an image such that they are suitable for further color signal processing, color signal transmission, or color reproduction. Color mapping typically starts with finding Geometric Feature Correspondences [1], in the following abbreviated GFCs, using methods such as the Scale Invariant Feature Transform [2], in the following abbreviated SIFT, or simply a normalized cross correlation [3]. GFCs are a list of pairs of corresponding feature points in multiple views, for example the left image and the right image. GFCs allow coping with the geometric differences between the left and right images.
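The correspondence search mentioned above can be illustrated with a minimal normalized cross-correlation matcher. The sketch below assumes grayscale images stored as NumPy arrays; the function names, patch size, and search window are illustrative choices, not details taken from the cited methods.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation of two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_feature(left, right, y, x, patch=3, search=5):
    """Find the point in `right` whose surrounding patch best matches
    the patch around (y, x) in `left`, scanning a small search window
    (illustrative parameters)."""
    p = left[y - patch:y + patch + 1, x - patch:x + patch + 1]
    best_score, best_pos = -2.0, (y, x)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            q = right[yy - patch:yy + patch + 1, xx - patch:xx + patch + 1]
            if q.shape != p.shape:  # skip candidates clipped at the border
                continue
            score = ncc(p, q)
            if score > best_score:
                best_score, best_pos = score, (yy, xx)
    return best_pos, best_score
```

Applied to every detected feature point in the left image, such a matcher yields the list of GFC pairs; in practice SIFT descriptor matching [2] is typically more robust to scale and rotation differences than this fixed-window correlation.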
As GFC computation is not free from errors, some of the corresponding feature points are wrong; such pairs are so-called outliers. Wrong corresponding feature points are not positioned on the same semantic image detail in the left image and in the right image. In a subsequent step, these outliers are usually removed from the GFCs. Color coordinates, such as R, G, and B for red, green, and blue, are then retrieved from the two images using the feature correspondences. In the following, these retrieved colors will be called Color Correspondences, abbreviated CCs. Finally, the CCs are used to fit a color mapping model. This outlier removal step is significant because, for example, if a GFC lies in a highly textured region, even a small error in the spatial position of the GFC can generate a large error in the CC. Therefore, improved outlier detection is desirable.
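The last two steps, discarding outlier correspondences and fitting a color mapping model to the remaining CCs, can be sketched as follows. A per-channel gain/offset model and a residual-based rejection rule are simple illustrative choices; the text does not prescribe a particular model or rejection criterion.

```python
import numpy as np

def fit_channel(src, dst):
    """Least-squares fit of dst ≈ gain * src + offset for one channel."""
    A = np.stack([src, np.ones(len(src))], axis=1)
    (gain, offset), *_ = np.linalg.lstsq(A, dst, rcond=None)
    return gain, offset

def fit_color_mapping(cc_src, cc_dst, k=3.0):
    """Fit per-channel gain/offset models to color correspondences
    (N x 3 arrays of source and destination colors), discarding
    outlier CCs whose residual under an initial fit exceeds k
    standard deviations (an illustrative rejection rule)."""
    models = []
    for ch in range(cc_src.shape[1]):
        s, d = cc_src[:, ch], cc_dst[:, ch]
        gain, offset = fit_channel(s, d)          # initial fit on all CCs
        resid = d - (gain * s + offset)
        keep = np.abs(resid) <= k * resid.std()   # reject large residuals
        gain, offset = fit_channel(s[keep], d[keep])  # refit on inliers
        models.append((gain, offset))
    return models
```

The refit on inliers shows why outlier removal matters: a handful of mismatched feature points would otherwise bias the gains and offsets of the entire mapping.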