Various image processing techniques exist for automatically separating foreground objects from background objects in an image. Generally, this involves generating foreground transparency masks known as alpha mattes.
An alpha matte is an image of the same size as the input image. Each pixel in the alpha matte has an alpha value, ranging from 0 to 1, that represents the transparency of the foreground object within the pixel region: “0” represents a pixel that is entirely part of the background, and “1” represents a pixel that is entirely part of the foreground. A particular type of alpha matte is a binary alpha matte, which has alpha values of only 0 or 1. A binary alpha matte distinguishes foreground pixels from background pixels, but does not allow for partial opacity in pixels that overlap both foreground and background regions of the scene.
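As an illustration of how alpha values are used, the standard alpha-blending equation composites a foreground colour F over a background colour B as C = alpha * F + (1 - alpha) * B. The following is a minimal sketch in Python with NumPy; the function name and the toy 2x2 image are illustrative, not part of any system described here:

```python
import numpy as np

def composite(foreground, background, alpha):
    """Blend foreground over background; alpha values lie in [0, 1]."""
    alpha = alpha[..., np.newaxis]  # broadcast per-pixel alpha over colour channels
    return alpha * foreground + (1.0 - alpha) * background

fg = np.full((2, 2, 3), 200.0)   # uniform foreground colour
bg = np.full((2, 2, 3), 50.0)    # uniform background colour
# Binary alpha values on the left column, partial opacity on the right.
matte = np.array([[1.0, 0.5],
                  [0.0, 0.5]])

out = composite(fg, bg, matte)
# A pixel with alpha 1 takes the foreground colour (200), alpha 0 the
# background colour (50), and alpha 0.5 an equal mix (125).
```

A binary alpha matte is simply the special case in which every entry of `matte` is exactly 0 or 1, so no mixed pixels occur.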
Existing techniques for generating alpha mattes are designed to work with foreground objects that are physically separate from the background. The foreground object must be positioned sufficiently far in front of the background that background and foreground lighting can be treated as independent.
Japanese patent application publication nos. 04037383, 11073491 and European patent application publication no. 1909493 describe conventional systems whereby a background planar object and a foreground object (of any shape), which are located at different distances from an imaging device, are discriminated from each other by illuminating the scene with different radiation frequencies. In all the systems described in the aforementioned documents, the foreground object must be positioned sufficiently far in front of the background that background and foreground lighting can be treated as independent, thereby allowing the foreground portions to be distinguished from the background portions. Such systems cannot readily discriminate between foreground objects that are located near to, and occlude, one another in the foreground part of the scene.
Outfit visualization tools are used by online clothing retailers to help shoppers see how specified combinations of garments might look on a real human body. Outfit visualization typically works by compositing one or more garment sprites onto an underlying image of a human body using alpha blending. A body image might be generated, for example, by projecting a 3D human body model into the desired viewpoint. Suitable garment sprites could then be obtained by (i) photographing the garments on a mannequin that has the same shape as the body model, and (ii) processing the resulting images to compute alpha mattes. Key challenges are:
1. to compute alpha mattes from garment photographs with a minimum of (costly) user intervention; and
2. to compensate for any misalignment between the mannequin and the body model.
Alignment problems can arise because of slight variations in mannequin pose introduced by assembly and disassembly of the mannequin during dressing, because of variation in camera position, and because the mannequin can flex over time (under its own weight or that of heavy garments).
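The compositing step described above can be sketched as repeated "over" blending of garment sprites onto the body image, innermost garment first. This is an illustrative sketch under assumed conventions (floating-point images, one alpha channel per sprite), not the pipeline of any particular retailer; all names are hypothetical:

```python
import numpy as np

def over(sprite_rgb, sprite_alpha, base):
    """Alpha-blend one sprite over a base image: a*sprite + (1-a)*base."""
    a = sprite_alpha[..., np.newaxis]
    return a * sprite_rgb + (1.0 - a) * base

def render_outfit(body_image, garment_sprites):
    """Composite garment sprites onto a body image in dressing order
    (innermost garment first), as in the visualization scheme above."""
    image = body_image
    for rgb, alpha in garment_sprites:
        image = over(rgb, alpha, image)
    return image

# Toy 1x1 images: a body pixel, an opaque shirt, a half-transparent jacket.
body = np.array([[[100.0, 80.0, 60.0]]])
shirt = (np.array([[[200.0, 200.0, 200.0]]]), np.array([[1.0]]))
jacket = (np.array([[[0.0, 0.0, 0.0]]]), np.array([[0.5]]))
outfit = render_outfit(body, [shirt, jacket])
# The shirt fully covers the body pixel (200); the dark half-transparent
# jacket then halves each channel to 100.
```

Misalignment compensation (the second challenge above) would sit before this step, warping or shifting each sprite into registration with the body image; it is omitted here.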
None of the existing techniques for generating alpha mattes is directly applicable to the problem of segmenting garment sprites from images of garments dressed on a mannequin.
In the context of garment imaging, each pixel in an image must be associated with an opacity (or alpha) value in the range 0 to 1 because certain garments are translucent when worn on a mannequin. Hence, it is desirable to obtain opacity information for use during alpha blending, to give improved realism when compositing semi-transparent garments, especially in the vicinity of segmentation boundaries.
In the context of garment photography, the mannequin itself is deemed to be an unwanted background portion of the scene for the purposes of generating garment alpha mattes. However, the mannequin is not physically separate from the garment and, from the optical perspective of the imaging device, it is therefore part of the foreground when the garment is being imaged.
In the explanation which follows, all references to the “background” are references to portions of a scene which are located at greater distances from an imaging device than foreground portions, so that background and foreground lighting can be treated as independent. This involves completely different considerations from those concerning the discrimination of wanted and unwanted portions of the scene foreground, which is an aspect of the present application. The most popular alpha matting strategies are discussed below.
Constant Colour Matting
Here the foreground object is photographed against a backdrop with a known, constant colour, ideally one that is in some sense distinct from the colours in the foreground object. The alpha value at each pixel is then computed by measuring the amount of background colour that shows through the foreground object (this technique is also known as blue screen, green screen, or chromakey matting). Well-known limitations of this approach include:
1. the backdrop colour can introduce an unnatural colour cast to the foreground object (colour spill); and
2. if the foreground colours are insufficiently distinct from the background colour (e.g. because of colour spill or shadows cast on the background by the foreground object), it may be difficult to segment the foreground sprite cleanly.
In principle, it would be possible to extend this idea to the garment matting problem by using a mannequin with the same colour as the backdrop. However, the lack of physical separation between the mannequin and the garment exacerbates the problems of colour spill and background shadow to such an extent that it is often impossible to obtain an alpha matte that fully separates the foreground and background layers. Furthermore, garments come in a variety of colours, which complicates the task of choosing a single background colour that is sufficiently different from that of the mannequin for segmentation to succeed.
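A simple form of the constant-colour computation can be sketched as follows: alpha is estimated from each pixel's colour distance to the known backdrop colour, with a soft transition band. This is only an illustrative sketch; the function name and the `tolerance`/`softness` thresholds are assumptions, and production chromakeyers use considerably more sophisticated models (which is precisely why colour spill and shadows cause the problems listed above):

```python
import numpy as np

def constant_colour_alpha(image, key_colour, tolerance=60.0, softness=40.0):
    """Estimate per-pixel alpha from the colour distance to the backdrop.
    Pixels within `tolerance` of the key colour are treated as background
    (alpha 0); pixels beyond tolerance + softness as foreground (alpha 1);
    pixels in between receive fractional alpha."""
    dist = np.linalg.norm(image.astype(float) - np.asarray(key_colour, float),
                          axis=-1)
    return np.clip((dist - tolerance) / softness, 0.0, 1.0)

# Toy 1x2 image: a pure green backdrop pixel and a red foreground pixel.
img = np.array([[[0.0, 255.0, 0.0], [255.0, 0.0, 0.0]]])
alpha = constant_colour_alpha(img, key_colour=(0.0, 255.0, 0.0))
# The green pixel receives alpha 0; the red pixel, far from the key
# colour, receives alpha 1.
```

Colour spill shrinks the distance between foreground pixels and the key colour, which in this formulation drags their alpha toward 0 and produces exactly the clean-segmentation failure described in limitation 2.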
Multi-Film Matting
Here, a foreground object is painted with a special paint, typically one that is transparent to visible light but which fluoresces strongly at a particular wavelength in response to UV illumination. The scene is then photographed with a camera sensitive to the wavelength at which the paint fluoresces, as well as with another camera that is sensitive to visible light. The image obtained by the first camera can be used directly as an alpha matte. This technique cannot be applied to garment matting, since it is neither possible nor desirable to dye garments with an appropriate fluorescent dye.
Triangulation Matting
Here, the foreground object is photographed two or more times against backdrops of different, known colours. The alpha matte is then determined by measuring the colour change at each pixel (more opaque foreground pixels exhibit less colour change).
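The colour-change measurement can be written down explicitly. Under the standard compositing model C = F + (1 - alpha) * B, two photographs C1, C2 of the same foreground against known backdrops B1, B2 satisfy C1 - C2 = (1 - alpha)(B1 - B2), so alpha can be recovered per pixel by projecting the observed colour change onto the backdrop colour difference. A minimal sketch in Python with NumPy (names illustrative) for the two-backdrop case:

```python
import numpy as np

def triangulation_alpha(c1, c2, b1, b2):
    """Per-pixel alpha from two photographs c1, c2 taken against known
    backdrops b1, b2. From C = F + (1 - alpha) * B it follows that
    c1 - c2 = (1 - alpha) * (b1 - b2); projecting onto (b1 - b2) gives
    alpha = 1 - (c1 - c2).(b1 - b2) / |b1 - b2|^2."""
    num = np.sum((c1 - c2) * (b1 - b2), axis=-1)
    den = np.sum((b1 - b2) ** 2, axis=-1)
    return 1.0 - num / den

# Toy 1x1 pixel: a half-transparent foreground over blue, then red.
b1 = np.array([[[0.0, 0.0, 255.0]]])
b2 = np.array([[[255.0, 0.0, 0.0]]])
f = np.array([[[100.0, 100.0, 100.0]]])   # premultiplied foreground term
c1 = f + 0.5 * b1                          # photo against backdrop 1
c2 = f + 0.5 * b2                          # photo against backdrop 2
alpha = triangulation_alpha(c1, c2, b1, b2)
# The recovered alpha at this pixel is 0.5.
```

An opaque pixel gives c1 == c2 and hence alpha 1; a fully transparent pixel changes colour exactly as the backdrop does, giving alpha 0. This is also why the technique needs physically separated, differently coloured backdrops, which the mannequin scenario cannot provide.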
Generating a three-dimensional body model of a subject, either from a limited set of body measurements taken from the subject or from a two-dimensional image of the subject, is also key to providing an accurate fit of a garment to the subject. This also permits an accurate visualisation of the garment on an image of the subject. There is currently no accurate process or system for generating a body shape and its corresponding surface geometry.