One fundamental task in image understanding is to computationally determine the likelihood that regions (connected groups of pixel positions) or individual pixels of an image represent a specific material, a task also referred to as region classification or region labeling. This can be quite difficult, especially when the computer lacks contextual information about the image; for example, a white region in an image may or may not represent snow. Current techniques use features derived from the image to generate belief maps (e.g., the above-cited, commonly-assigned U.S. patent application Ser. No. 10/747,597 in the case of sky detection). Such techniques often produce incorrect results because it is difficult to design features that support robust classification, primarily because many materials share similar color and texture characteristics (e.g., snow and cloud).
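The feature-based belief-map approach described above can be illustrated with a minimal sketch. The function below is purely hypothetical (the thresholds, feature choices, and function name are not from the source): it scores each pixel of a grayscale image for "snow" from two features of the kind the text mentions, brightness (color) and local variance (texture), and returns a belief map in [0, 1]. As noted in the text, such features are inherently ambiguous, since clouds share similar values.

```python
import numpy as np

def snow_belief_map(image, brightness_thresh=0.8, texture_thresh=0.01):
    """Toy per-pixel belief map for 'snow' from a grayscale image in [0, 1].

    Belief is high where pixels are bright and locally smooth. Thresholds
    are illustrative, not taken from any cited method.
    """
    h, w = image.shape

    # Texture feature: variance over a 3x3 neighborhood (low for snow).
    padded = np.pad(image, 1, mode="edge")
    windows = np.stack([padded[r:r + h, c:c + w]
                        for r in range(3) for c in range(3)])
    variance = windows.var(axis=0)

    # Soft-score each feature into [0, 1], then combine multiplicatively.
    bright_score = np.clip(
        (image - brightness_thresh) / (1 - brightness_thresh), 0, 1)
    smooth_score = np.clip(1 - variance / texture_thresh, 0, 1)
    return bright_score * smooth_score

# Usage: a synthetic image, bright in the top half and dark in the bottom.
img = np.zeros((8, 8))
img[:4, :] = 1.0
belief = snow_belief_map(img)
```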
In a related task, it is sometimes necessary to classify an entire image into one of a set of known categories. For example, in many cases it is useful to classify a digital image as either an outdoor image or an indoor image. Performing this task using the image data alone is also quite difficult and prone to error.
In U.S. Pat. No. 6,504,571, images have associated geographic capture information. Queries are converted to latitude and longitude queries. For example, a search for images captured on beaches causes the system to display a list of beaches. Places such as stadiums, parks, or lakes are processed in a similar manner. While this method does provide utility for finding images captured at specific places, it does not help determine the content of the images themselves, i.e., region labeling. For example, the system would not distinguish an image captured inside a house on a beach from an image, captured at the same place, of the ocean. Furthermore, other systems that associate location information with images, such as those of U.S. Pat. Nos. 5,506,644 and 5,247,356, similarly do not help determine the content of the image or the materials represented within the image. For example, such systems would not locate houses or sand within the image.
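The latitude/longitude query scheme described above can be sketched as follows. The gazetteer entries, field names, and radius are hypothetical and not taken from the cited patent; the sketch only illustrates the general mechanism of resolving a place-name query to coordinates and matching images by capture location. Note that, as the text observes, this matches where an image was captured, not what it depicts.

```python
import math

# Hypothetical gazetteer mapping place names to (lat, lon) in degrees.
GAZETTEER = {
    "waikiki beach": (21.2767, -157.8278),
    "yankee stadium": (40.8296, -73.9262),
}

def find_images(query, images, radius_km=2.0):
    """Return images whose capture location lies within radius_km of the
    place named in the query; a sketch of lat/long-based retrieval.

    `images` is a list of dicts with 'lat' and 'lon' capture metadata.
    """
    center = GAZETTEER.get(query.lower())
    if center is None:
        return []
    lat0, lon0 = center
    hits = []
    for img in images:
        # Equirectangular approximation of distance (adequate at small
        # radii); roughly 111.32 km per degree of latitude.
        dlat = (img["lat"] - lat0) * 111.32
        dlon = (img["lon"] - lon0) * 111.32 * math.cos(math.radians(lat0))
        if math.hypot(dlat, dlon) <= radius_km:
            hits.append(img)
    return hits

# Usage: one image captured near each place.
photos = [
    {"id": 1, "lat": 21.2770, "lon": -157.8280},
    {"id": 2, "lat": 40.8300, "lon": -73.9300},
]
beach_hits = find_images("Waikiki Beach", photos)
```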
There is therefore a need for a method of classifying regions of specific materials and objects in a digital image that exploits the geographic location information captured along with the image.