Currently available mapping systems allow the user to interact within a three-dimensional space to explore an area such as a streetscape. In simple implementations, a mapping system presents the user with a viewport containing a three-dimensional rendering of a given street built from generic graphics. These generic graphics may include three-dimensional polygon representations of buildings known to exist along the street and satellite imagery projected from above. The mapping system may render these generic graphics from large databases of satellite imagery.
However, because satellite imagery captured from above appears relatively low quality when viewed at the micro-level of a street, some current mapping systems supplement the generic graphics with user-generated imagery, for example two-dimensional photographs and three-dimensional panoramas. These photographs and panoramas provide high-quality imagery that may not be available from generic satellite databases, and may offer a level of image resolution that three-dimensional polygon representations of buildings cannot. Users may submit these photographs and panoramas to the mapping service voluntarily, for example through crowd-sourcing efforts, or the mapping service may commission a large-scale effort to capture street-level photographic and panoramic imagery.
More advanced mapping systems may project these user-generated two-dimensional photographs and three-dimensional panoramas onto the three-dimensional space, for example a streetscape, to better allow the user to explore that space. The user may select the user-generated imagery projected onto the three-dimensional space in order to interact with it independently of the three-dimensional space.
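To make the projection-and-selection idea concrete, the following is a minimal sketch, not drawn from any real mapping API: imagery items are anchored at positions in the three-dimensional scene, and a picked point is matched against them so the nearest item within a selection radius can be interacted with independently of the scene. All names (`PlacedImagery`, `pickImagery`, and so on) are illustrative assumptions.

```typescript
// Illustrative sketch: anchoring user-generated imagery in a 3D scene
// and selecting the item nearest to a picked point. Hypothetical names.

interface Vec3 { x: number; y: number; z: number; }

interface PlacedImagery {
  id: string;
  position: Vec3;      // where the imagery is anchored in the streetscape
  selectable: boolean; // whether the user may interact with it
}

// Return the selectable imagery item nearest to the picked 3D point,
// if any lies within the given selection radius.
function pickImagery(
  items: PlacedImagery[],
  point: Vec3,
  radius: number
): PlacedImagery | undefined {
  let best: PlacedImagery | undefined;
  let bestDist = radius;
  for (const item of items) {
    if (!item.selectable) continue;
    const d = Math.hypot(
      item.position.x - point.x,
      item.position.y - point.y,
      item.position.z - point.z
    );
    if (d <= bestDist) {
      best = item;
      bestDist = d;
    }
  }
  return best;
}

const items: PlacedImagery[] = [
  { id: "photo-a", position: { x: 0, y: 0, z: 0 }, selectable: true },
  { id: "pano-b", position: { x: 5, y: 0, z: 0 }, selectable: true },
];

const picked = pickImagery(items, { x: 4, y: 0, z: 0 }, 2);
```

In a production viewer the pick would typically be a ray cast from the camera through the cursor rather than a point-distance test, but the dispatch from "picked item" to "independent interaction" is the same.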
One issue that may arise when selecting two-dimensional or three-dimensional imagery is that the user may not know whether the imagery is in fact two-dimensional or three-dimensional, or how best to navigate it. For example, a user may navigate a user-generated two-dimensional photo by first selecting the photo in the three-dimensional space and then panning the photo in two dimensions. The user may alternatively navigate a user-generated three-dimensional panorama in the three-dimensional space by rotating the panorama in three dimensions. When the user selects user-generated imagery in the three-dimensional space, the user may not know which imagery is two-dimensional and which is three-dimensional. Thus, the user may have no indication of, or instruction for, navigating the selected imagery in two or three dimensions in the most efficient or intuitive manner possible.
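The pan-versus-rotate distinction described above can be sketched as a simple dispatch on the imagery type: the same drag gesture pans a two-dimensional photo but rotates a three-dimensional panorama. This is an illustrative sketch under assumed names (`Imagery`, `applyDrag`, the degrees-per-pixel factor), not the interface of any actual mapping system.

```typescript
// Illustrative sketch: one drag gesture, two navigation behaviors.
// A 2D photo pans in two dimensions; a 3D panorama rotates in three.

type ImageryType = "photo2d" | "panorama3d";

interface Imagery {
  type: ImageryType;
  panX: number;  // pan offsets, used by 2D photos
  panY: number;
  yaw: number;   // rotation angles in degrees, used by 3D panoramas
  pitch: number;
}

// Apply a drag of (dx, dy) pixels to the selected imagery.
function applyDrag(img: Imagery, dx: number, dy: number): Imagery {
  if (img.type === "photo2d") {
    // Two-dimensional photo: translate the view.
    return { ...img, panX: img.panX + dx, panY: img.panY + dy };
  }
  // Three-dimensional panorama: rotate the view about the viewer.
  // 0.1 degrees per pixel is an arbitrary illustrative sensitivity.
  return { ...img, yaw: img.yaw + dx * 0.1, pitch: img.pitch - dy * 0.1 };
}

const photo: Imagery = { type: "photo2d", panX: 0, panY: 0, yaw: 0, pitch: 0 };
const pano: Imagery = { type: "panorama3d", panX: 0, panY: 0, yaw: 0, pitch: 0 };

const pannedPhoto = applyDrag(photo, 10, 5);   // panX 10, panY 5
const rotatedPano = applyDrag(pano, 10, 5);    // yaw 1, pitch -0.5
```

The usability problem in the passage above is precisely that the user cannot tell, before dragging, which branch of such a dispatch the selected imagery will take.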
Some mapping systems may indicate, through a label or other obtrusive rendered indicator, that the user may navigate the user-generated imagery in two or three dimensions. However, these labels or rendered indicators typically obscure the image or distract the user from it. Alternatively, the label or rendered indicator may not be immediately apparent to the user, presenting the same issue as providing no indicator at all.