360-degree cameras (e.g., Ricoh Theta, Insta360, GoPro Fusion, Vuze VR Camera) capture panoramic pictures and videos spanning, for example, 180 or 360 degrees. The panoramic pictures are created by stitching a number of separate pictures or video frames into a single view of a scene that can be presented to a viewer on a display. The stitched images/frames form a full sphere (or a hemisphere in the case of 180-degree content) that the viewer experiences as a seemingly immersive environment. By rotating his or her head, the viewer can look around the full 180 or 360 degrees of the content. This, however, places the viewer strictly at the sphere's center point: the viewer need only look up, down, left, or right, or tilt the head, to take in the content.
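The viewing model described above can be sketched in code. The following is a minimal illustration, assuming a common equirectangular panorama layout (a hypothetical convention with longitude zero at the image center and +y pointing up; function and parameter names are illustrative only): a unit view direction from the viewer's head orientation is mapped to a pixel location on the stitched sphere.

```python
import math

def direction_to_equirect(dx, dy, dz, width, height):
    """Map a unit view direction (dx, dy, dz) to pixel coordinates in an
    equirectangular panorama.

    Illustrative sketch only: assumes longitude 0 at the image center,
    +y up, and a full 360x180-degree panorama of size width x height.
    """
    # Longitude (-pi..pi) around the vertical axis.
    lon = math.atan2(dx, dz)
    # Latitude (-pi/2..pi/2), clamped to guard against rounding error.
    lat = math.asin(max(-1.0, min(1.0, dy)))
    # Normalize to pixel coordinates: u spans longitude, v spans latitude.
    u = (lon / (2 * math.pi) + 0.5) * (width - 1)
    v = (0.5 - lat / math.pi) * (height - 1)
    return u, v
```

Looking straight ahead (direction (0, 0, 1)) lands in the center of the panorama; rotating the head changes only which pixels are sampled, never the viewpoint itself.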
The problem, however, is that content browsers assume that the camera center and the viewer are in the same location. This is a natural implementation that embodies a "what was captured is what is viewed" approach. It introduces, however, visual discontinuities with real-world behavior that can lead to discomfort for some viewers and nausea for others. The issue is that by locking the camera to the viewer, the only degrees of freedom available are those of orientation (roll, pitch, and yaw). Within the immersive view, the viewer can rotate freely about the center but cannot move translationally. If the viewer assumes a new translational position (for example, steps to the side or bends down to look more closely at an object), the entire viewable world moves with him or her, retaining the same perspective regardless of the translation, which causes objects to move in a non-intuitive way. For many people this effect is so disconcerting that their bodies become rigid while viewing immersive content, and they move only their heads. Over time this lessens the enjoyment of the experience and may lead some viewers to lose interest, become sick, or in some instances injure themselves.
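The orientation-only limitation described above can be made concrete with a short sketch. This is an illustrative three-degrees-of-freedom (3-DoF) sampling function, not any particular browser's implementation: the viewer's head orientation (here reduced to yaw and pitch for brevity) determines the view direction, while the `position` parameter, representing any translational movement, is silently discarded. That discard is precisely why the world appears to move with the viewer when he or she steps sideways or bends down.

```python
import math

def view_direction(yaw, pitch, position=None):
    """Compute the view direction for a 3-DoF panorama viewer.

    Illustrative sketch only. Orientation (yaw about the vertical axis,
    pitch up/down) fully determines what is seen; `position` is
    accepted but ignored, mimicking a viewer locked to the camera
    center.
    """
    # Forward unit vector derived purely from head orientation.
    dx = math.cos(pitch) * math.sin(yaw)
    dy = math.sin(pitch)
    dz = math.cos(pitch) * math.cos(yaw)
    return (dx, dy, dz)

# Two very different head positions yield the exact same view:
at_center = view_direction(0.3, 0.1, position=(0.0, 0.0, 0.0))
stepped_aside = view_direction(0.3, 0.1, position=(1.5, 0.0, 0.0))
assert at_center == stepped_aside  # translation has no effect
```

A viewer that honored translation (six degrees of freedom) would instead use `position` to re-project the scene, so nearby objects would shift relative to distant ones as the viewer moves, matching real-world parallax.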