With modern computing platforms and technologies shifting towards mobile and wearable devices that include camera sensors as native acquisition input streams, the desire to record and preserve moments digitally in forms other than traditional two-dimensional (2D) flat images and videos has become more apparent. Traditional digital media formats typically limit their viewers to a passive experience. For instance, a 2D flat image can be viewed from only one angle and is limited to zooming in and out. Accordingly, traditional digital media formats, such as 2D flat images, do not easily lend themselves to reproducing memories and events with high fidelity.
Producing a stereoscopic image (or “stereogram”) allows a user to experience depth when viewing an object and/or scenery. Most previously existing methods of creating a stereoscopic image involve capturing two offset images (a “stereo pair”) and presenting them separately to the left eye and the right eye of the user. Such two-dimensional images may then be combined in the user's brain to give the perception of three-dimensional depth. Such approaches may require specialized cameras or devices to capture two offset images of a scene or object, with a separate image stored for each view. This may also increase storage requirements and limit the efficiency of these methods in terms of processing speed as well as transfer rates when sending such images over a network. Accordingly, improved mechanisms for generating stereoscopic image pairs are desirable and provided herein.
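The stereo-pair concept described above can be illustrated with a minimal sketch, not taken from the source: a single grayscale image (a plain list-of-lists) is shifted horizontally in opposite directions to approximate the two offset views a specialized stereo camera would capture. The function names `shift_view` and `make_stereo_pair` and the `disparity` parameter are hypothetical and for illustration only.

```python
def shift_view(image, disparity):
    """Return a copy of `image` shifted `disparity` pixels horizontally.

    Columns exposed by the shift are padded with 0 (black). A positive
    disparity shifts content to the right; a negative one shifts it left.
    """
    shifted = []
    for row in image:
        if disparity >= 0:
            shifted.append([0] * disparity + row[:len(row) - disparity])
        else:
            shifted.append(row[-disparity:] + [0] * (-disparity))
    return shifted


def make_stereo_pair(image, disparity=2):
    """Produce (left, right) views offset in opposite directions.

    Storing both views, rather than the single source image, reflects the
    doubled storage requirement of conventional stereo-pair approaches.
    """
    left = shift_view(image, disparity)
    right = shift_view(image, -disparity)
    return left, right
```

Presenting the `left` view to the left eye and the `right` view to the right eye (for example, via an anaglyph or a head-mounted display) yields the depth perception described above; note that both full-resolution views must be stored and transmitted, which is the overhead the improved mechanisms aim to reduce.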