With advances in camera technology and the rise of camera-enabled mobile devices, users are capturing more digital images than ever. In fact, it is not uncommon for users to have tens, if not hundreds, of gigabytes of digital images stored on their computing devices and/or in the cloud. Because of the vast quantity of digital images, it is nearly impossible for users to manually sort and classify images. Accordingly, computing systems are required to analyze, classify, organize, and manage digital images so that users can locate particular images efficiently. For example, conventional systems utilize facial recognition and object detection to analyze an image and identify the people and/or objects portrayed within it. Such systems can then categorize digital images and/or search for images to provide to a user (e.g., in response to a search query), thus making a collection of digital images more manageable and useful for the user.
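To make this workflow concrete, the following is a minimal sketch in plain Python of how detected content might be used to index and search a collection of digital images. The `detect_labels` function here is a hypothetical stand-in for a real face/object detector (a production system would run a trained vision model); the image names and labels are illustrative only.

```python
from collections import defaultdict

def detect_labels(image_name):
    """Hypothetical stand-in for a face/object detector.

    A real system would run a vision model over the image pixels;
    here we return hard-coded labels for illustration.
    """
    fake_detections = {
        "beach.jpg": ["person", "ocean"],
        "party.jpg": ["person", "cake"],
        "sunset.jpg": ["ocean", "sky"],
    }
    return fake_detections.get(image_name, [])

def build_index(image_names):
    """Map each detected label to the set of images containing it."""
    index = defaultdict(set)
    for name in image_names:
        for label in detect_labels(name):
            index[label].add(name)
    return index

def search(index, query_label):
    """Return images whose detected content matches the query."""
    return sorted(index.get(query_label, set()))

index = build_index(["beach.jpg", "party.jpg", "sunset.jpg"])
print(search(index, "ocean"))  # → ['beach.jpg', 'sunset.jpg']
```

The inverted index built here is what lets a system answer a search query without re-analyzing every image at query time; each image is analyzed once when it is added to the collection.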
However, conventional processes and systems for analyzing and classifying digital images require enormous amounts of processing power and other computational resources. In particular, conventional systems typically require a collection of dedicated server devices to analyze digital images, detect the objects and/or faces within them, and then classify and index the digital images accordingly. Because of these computational requirements, client devices (e.g., mobile devices) are often unable to adequately perform digital image analysis on their own.
Thus, conventional digital image analysis systems suffer from a number of disadvantages.