Obtaining images via standalone cameras or via cameras integrated within devices such as mobile phones, tablets, or the like is very common. In some instances, the imaging device may be equipped with smart object tracking such that high quality images and videos of moving objects may be attained. Some implementations may allow for tracking a single object, while others may allow for tracking multiple objects in real time. In such multiple object tracking implementations, due to various limitations such as physical limitations of the optical devices and/or image processing units, a single object may be selected for 3A (i.e., auto focus, auto exposure, and auto white balance) adjustments during capture.
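The arrangement above can be sketched as follows: many objects may be tracked per frame, but the region fed to the 3A algorithms comes from only one selected target. This is a minimal illustrative sketch, not any real camera API; the `TrackedObject` structure, the `(x, y, width, height)` box format, and the function name `region_for_3a` are all assumptions introduced here.

```python
from dataclasses import dataclass

# Hypothetical sketch: several objects are tracked per frame, but the
# capture pipeline applies 3A (auto focus, auto exposure, auto white
# balance) using the region of only one selected target object.
# All names and the box format are illustrative assumptions.

@dataclass
class TrackedObject:
    object_id: int
    box: tuple  # (x, y, width, height) region within the frame

def region_for_3a(tracked, target_id):
    """Return the region the 3A algorithms should use: the box of the
    single selected target from among all tracked objects."""
    for obj in tracked:
        if obj.object_id == target_id:
            return obj.box
    # Target no longer tracked; caller may fall back to default metering.
    return None

tracked = [TrackedObject(1, (10, 10, 50, 50)),
           TrackedObject(2, (200, 80, 40, 40))]
print(region_for_3a(tracked, 2))  # → (200, 80, 40, 40)
```

The key point the sketch captures is the asymmetry described above: tracking may cover multiple objects, but 3A adjustments are driven by exactly one of them.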
Selection of a target object from the multiple tracked objects may be performed by the user. The scene including the tracked objects may be displayed to the user via a display and, in some cases, an indicator (e.g., a box around a tracked object) may be displayed that indicates the object is being tracked and may be selected by the user. The user may select from among the tracked objects using an input device such as a touch screen that also displays the scene, including the objects and any indicators. As discussed, the selected target object may then be tracked for image or video capture.
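The touch-based selection described above amounts to hit-testing the tap location against the indicator boxes of the tracked objects. The sketch below is a hedged illustration of that mapping; the dictionary layout, the `(x, y, w, h)` box convention, and the function name `select_by_touch` are assumptions for illustration only.

```python
# Hypothetical sketch of mapping a touch-screen tap to a tracked object:
# choose the object whose indicator box contains the tap point.

def select_by_touch(tracked_boxes, tap):
    """tracked_boxes maps object_id -> (x, y, w, h) indicator box;
    tap is the (tx, ty) touch coordinate on the displayed scene.
    Return the id of the first object whose box contains the tap,
    or None if the tap hits no indicator."""
    tx, ty = tap
    for object_id, (x, y, w, h) in tracked_boxes.items():
        if x <= tx <= x + w and y <= ty <= y + h:
            return object_id
    return None

boxes = {1: (10, 10, 50, 50), 2: (200, 80, 40, 40)}
print(select_by_touch(boxes, (210, 90)))  # → 2
```

When indicator boxes overlap, a practical implementation would need a tie-breaking rule (e.g., the smallest box containing the tap); the first-match rule here is only the simplest choice.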
However, selecting a target object from the multiple tracked objects may be difficult for users, particularly when the objects are fast moving. Furthermore, user input via a touch interface may make the camera unsteady, which may negatively affect object tracking and/or image capture by the camera.
As such, existing techniques do not provide for easy and robust selection of a target object from multiple tracked objects. Such problems may become critical as the desire to easily and quickly obtain aesthetically pleasing images in a variety of device implementations becomes more widespread.