The use of masks is prevalent in image manipulation. For a digital image, a user defines a mask and the application splits the image into layers, one layer being the area within the mask and one layer being the area outside of the mask. The remaining portion of each layer may then be filled with a solid color, such as black, so that the user can easily manipulate the remaining image in the layer. Traditionally, generating a mask for an image is a tedious process of manually selecting the portions of the image that are to be part of the mask. For example, a user may use a lasso tool in image software to define a portion of an image to be part of a mask. The lasso tool, though, requires the user to precisely draw the edges where the mask should exist. Applications also provide tools to select areas of similar color. For example, such a tool allows a user to select a pixel, and the tool then expands the selection to neighboring pixels whose colors are within a defined tolerance of the selected pixel.
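Such a color-similarity tool is conventionally implemented as a region-growing flood fill from the selected seed pixel. The sketch below is illustrative only: the function name `flood_select`, the grayscale image representation, and the scalar tolerance are assumptions for the example, not details of any particular application.

```python
from collections import deque

def flood_select(image, seed, tolerance):
    """Grow a selection from a seed pixel to 4-connected neighbors
    whose value is within `tolerance` of the seed pixel's value.

    `image` is a 2-D list of grayscale values; `seed` is (row, col).
    Returns the set of selected (row, col) coordinates."""
    rows, cols = len(image), len(image[0])
    seed_val = image[seed[0]][seed[1]]
    selected = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        # Visit the four orthogonal neighbors of the current pixel.
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in selected
                    and abs(image[nr][nc] - seed_val) <= tolerance):
                selected.add((nr, nc))
                queue.append((nr, nc))
    return selected

# A column of dissimilar pixels (value 200) splits the image, so the
# matching pixels in the last column are never reached from the seed:
image = [
    [10, 10, 200, 10],
    [10, 10, 200, 10],
]
flood_select(image, (0, 0), tolerance=5)
# → {(0, 0), (0, 1), (1, 0), (1, 1)}
```

Note that the pixels at column 3 match the seed color but remain unselected: the selection only grows through connected similar pixels, which is why such tools struggle with the fragmented regions discussed below.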
One problem with these tools is that they are unable to handle fragmented regions of an image. For example, in attempting to create a mask that includes the leaves of a tree in an image, the background of the image may show between the leaves, such that the user needs to select each leaf in the image in order to create the mask. Another problem is that the tools do not allow easy creation of a mask where many different colors exist. For example, a leopard has many spots and a face of a different color than the rest of the body. Therefore, a user needs to select all of the spots, the rest of the body, and any portions missed in order to select the entire leopard. Another problem is that the masks created by present tools typically have hard edges with little allowance for blending. Another problem is that present tools grow user selections in order to create a region; thus, whether an area of an image is included in the region depends directly on its proximity to the user selections.