An increasing amount of data regarding the physical world is being captured by automated and semi-automated mechanisms. For example, specially designed motor vehicles are utilized to drive along roads and capture data of the surrounding physical world, such as images. As another example, aircraft are, likewise, utilized to capture data of the physical world they fly over. Because of the sheer quantity of data that can be captured, automated data analysis can be utilized to extract useful information from such data. One form of such useful information can be the identification of objects in a physical scene. Such objects can include humans, cars, buildings, signs, or portions thereof. For example, in a mapping context, it can be useful to automatically identify street signs, because such street signs can be utilized to verify the correctness of a digital map. As another example, it can be useful to identify humans in photographs before making such photographs public, in order to blur the faces of such humans, or to perform other image modification to protect the privacy of such humans.
Traditionally, the automated detection of objects in images required a two-step process. In a first step, the image data was analyzed to identify those portions of the image that were believed to be representations of physical objects from the physical scene depicted in the image. In a second step, additional information, in the form of geometric data, was utilized to double-check the identifications of the first step and remove false positives. The second step could not add back in objects that were not initially identified in the first step. Furthermore, the second step could introduce further inaccuracies by, for example, incorrectly removing objects that were correctly identified by the first step, or by incorrectly retaining identifications from the first step that were not correct and did not identify an actual, physical object from the physical scene depicted in the image.
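The traditional two-step process described above can be sketched as follows. This is a minimal illustrative sketch, not any particular system's implementation: the candidate structure, the fixed example detections, and the height-based plausibility rule are all assumptions introduced for illustration.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Candidate:
    """A hypothetical candidate object proposed by image-only analysis."""
    label: str       # object class proposed by the step-one detector
    height_m: float  # height above ground, derived from geometric data
    score: float     # detector confidence from step one


def step_one_detect(image_pixels) -> List[Candidate]:
    """Step 1: analyze image data alone to propose candidate objects.
    A stand-in returning fixed candidates for illustration."""
    return [
        Candidate("street_sign", height_m=2.5, score=0.9),   # actual sign
        Candidate("street_sign", height_m=25.0, score=0.6),  # false positive
    ]


def step_two_filter(candidates: List[Candidate],
                    plausible: Callable[[Candidate], bool]) -> List[Candidate]:
    """Step 2: double-check candidates against geometric data.
    Note this step can only REMOVE candidates; objects missed in
    step one can never be added back."""
    return [c for c in candidates if plausible(c)]


# Assumed geometric rule: street signs sit roughly 1-4 m above ground.
kept = step_two_filter(step_one_detect(None),
                       lambda c: 1.0 <= c.height_m <= 4.0)
```

The sketch makes the stated limitation concrete: the filtering step shrinks the candidate list, so a missed detection in step one is unrecoverable, and a poorly chosen geometric rule can itself remove correct identifications or retain incorrect ones.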