Users are increasingly utilizing electronic devices to obtain various types of information. For example, a user wanting to obtain information about an object can capture an image of the object and upload that image to an identification service for analysis. In certain situations, the object represented in the image will be compared against a set of stored images that include views of objects from a particular orientation. While some objects are relatively easy to match, others are not as straightforward. For example, an object such as a pair of boots might be imaged from several different orientations, many of which will not match the orientation of the stored image for that type or style of boot. Differences in orientation, size, and shape, among other such differences, can prevent accurate matches from being found for various images captured by a user. Conventional approaches typically require the user to be able to identify the item or know relevant information about the item, and then perform a search for the item based on that identity or relevant information. If the user incorrectly identifies the item, or does not know sufficient information about the item, then the search cannot be performed appropriately. Moreover, conventional approaches typically require the user to manually input the identity or the relevant information for the item into the computing device for the device to perform the search. These and other concerns can reduce the overall user experience associated with using computing devices to obtain information about items.
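The matching step described above can be illustrated with a minimal sketch. The code below is hypothetical and not taken from any particular identification service: it assumes each stored image has already been reduced to a feature vector, and it compares a query vector against the catalog using cosine similarity. A query captured from an orientation similar to a stored view scores highly, while a query from a very different orientation may fall below the match threshold, mirroring the failure mode described in the text. The function names, catalog entries, and threshold value are all illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def best_match(query_vec, catalog, threshold=0.9):
    """Return (item_id, score) for the catalog entry most similar to the
    query, or (None, score) if no entry clears the threshold (e.g., the
    query was captured from an orientation not represented in the
    stored views)."""
    best_id, best_score = None, -1.0
    for item_id, ref_vec in catalog.items():
        score = cosine_similarity(query_vec, ref_vec)
        if score > best_score:
            best_id, best_score = item_id, score
    if best_score >= threshold:
        return best_id, best_score
    return None, best_score

# Toy feature vectors; a real system would derive these from the images
# themselves (the values here are invented for illustration).
catalog = {
    "boot-style-A (side view)": [0.9, 0.1, 0.2],
    "boot-style-B (side view)": [0.1, 0.9, 0.3],
}

# A query imaged from a similar orientation matches a stored view...
print(best_match([0.88, 0.12, 0.21], catalog))
# ...while a query from a very different orientation may fail to match.
print(best_match([0.20, 0.20, 0.95], catalog))
```

In practice the fixed threshold is itself a weakness: as the passage notes, an image of the correct item taken from an unrepresented orientation can score below the threshold and be rejected, which is precisely the limitation that motivates storing or synthesizing multiple views per item.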