In a typical Virtual Reality (VR) application, a 360-degree panoramic image or video is captured. A user wearing special goggles such as a Head-Mounted Display (HMD) can actively select and vary their viewpoint for an immersive experience in a 360-degree panoramic space. A wide variety of interesting and useful applications become possible as VR camera technology improves and shrinks.
Higher-resolution cameras are being used to capture panoramic images, and more pixels must be stored and transmitted for these higher-resolution panoramic images. Various coding methods are known to compress these panoramic images, such as cubemap projection, adjusting tile projection, and pseudo-cylindrical projection.
FIGS. 1A-1C show panoramic cameras. FIG. 1A shows a prior-art VR ring camera. Ring camera 14 has multiple cameras 17 arranged in a ring. This arrangement of cameras 17 allows for a 360-degree panorama to be captured. When cameras 17 are video cameras, a panoramic video is captured. The Google Jump is an example of a VR ring camera.
In FIG. 1B, the ring camera of FIG. 1A has a ring of High-Resolution (HR) cameras 17 that generate HR images 18, each of a small arc of the full panoramic circle. HR images 18 overlap each other and details from two of HR images 18 are combined in some manner in stitch regions 19. However, since the ring camera has cameras 17 arranged in a two-dimensional ring, some image loss or distortion may occur at the tops and bottoms of HR images 18.
FIG. 1C shows a spherical camera. Spherical camera 16 has a ring of multiple cameras 17 along the equatorial ring, similar to that of FIG. 1A. Additional non-equatorial cameras 15 are placed above and below the equatorial ring of cameras 17. These additional non-equatorial cameras 15 can better capture image details near the polar regions.
FIG. 2 shows an equi-rectangular projection of a panoramic image. The true spherical image 102 has original features 106 that are shown as dots of equal size and shape. When true spherical image 102 is captured by a ring camera or a spherical camera and projected into a VR space, distortion can occur due to the projection. In equi-rectangular projection space 104, original features 106 that are near the equator are relatively undistorted, such as equatorial features 110. However, original features 106 that are farther from the equator are relatively distorted, such as polar features 108. The amount of distortion increases with the distance from the equator, so very noticeable distortion occurs near the poles. Original pixels from true spherical image 102 are stretched out or replicated to cover more area in equi-rectangular projection space 104, making polar objects appear larger than they really are in true spherical image 102. Redundant information exists in the polar regions in equi-rectangular projection space 104, and the poles have more pixels than the equator. This uneven distribution of pixels and distortion is undesirable.
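The replication near the poles can be quantified: in an equi-rectangular projection every image row has the same pixel width, while the true circumference of a latitude ring on the sphere shrinks as the cosine of the latitude, so pixels are stretched or replicated by the reciprocal of that cosine. A minimal Python sketch of this stretch factor (illustrative only; the function name is an assumption, not from the figures):

```python
import math

def equirect_stretch(latitude_deg):
    """Horizontal stretch factor of an equi-rectangular projection.

    Every row of the projected image has the same pixel width, but
    the true circumference of a latitude ring shrinks as
    cos(latitude), so source detail is spread over 1 / cos(latitude)
    times as many pixels.
    """
    return 1.0 / math.cos(math.radians(latitude_deg))

# Near the equator there is almost no stretch; toward the poles the
# same source detail covers many more pixels.
for lat in (0, 30, 60, 85):
    print(f"latitude {lat:2d} deg: stretch = {equirect_stretch(lat):.2f}x")
```

At 60 degrees latitude the stretch is already 2x, and it grows without bound approaching the poles, which matches the redundant polar pixels described above.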
FIGS. 3A-3B highlight distortion of polar objects in an equi-rectangular projection. In FIG. 3A, polar object 112 in equi-rectangular projection space 104 is stretched and appears much larger than the actual polar object 114 in true spherical image 102. Aside from the visual distortion, too many pixels are present in polar object 112 in equi-rectangular projection space 104, which is wasteful of storage and bandwidth. Thus equi-rectangular projection is undesirable.
FIG. 4 shows a pseudo-cylindrical projection. True spherical image 102 (FIG. 2) is mapped using a sinusoidal projection onto pseudo-cylindrical projection 130. Horizontal parallels 132 are straight and have the same lengths as in true spherical image 102. Central vertical meridian 134 is straight while other vertical lines are sinusoidal. The surface area of pseudo-cylindrical projection 130 is the same as the surface area of true spherical image 102. Polar objects 136 have the same area as actual polar object 114 in true spherical image 102.
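The sinusoidal mapping described for FIG. 4 takes each (longitude, latitude) point on the sphere to x = longitude * cos(latitude), y = latitude, which keeps each parallel at its true length and leaves the central meridian straight. A brief Python sketch of this standard mapping (the function name is illustrative):

```python
import math

def sinusoidal_project(lon_deg, lat_deg):
    """Map a (longitude, latitude) point on the unit sphere onto the
    sinusoidal (pseudo-cylindrical) plane.

    Scaling x by cos(latitude) keeps each parallel at its true
    spherical length; the central meridian (lon = 0) maps to the
    straight vertical line x = 0.
    """
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    x = lon * math.cos(lat)   # row narrows toward the poles
    y = lat                   # latitude maps linearly to y
    return x, y
```

Because rows narrow with cos(latitude), polar objects occupy the same area in the plane as on the sphere, consistent with polar objects 136 matching actual polar object 114.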
While useful, pseudo-cylindrical projection 130 offers no reduction in the size of equi-rectangular projection space 104, since the projection still occupies a rectangular frame of the same dimensions. The same pixel storage and bandwidth are required when using either equi-rectangular projection space 104 or pseudo-cylindrical projection 130 as the panoramic image.
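The equal-area property claimed for the sinusoidal projection can be checked numerically: summing the true parallel lengths 2*pi*cos(latitude) over all latitudes recovers the surface area of the unit sphere, 4*pi. A small Python check (illustrative only; the function name is an assumption):

```python
import math

def sinusoidal_area(rows=10000):
    """Numerically integrate the row widths of a sinusoidal
    projection of the unit sphere.

    Each row at latitude `lat` has true width 2*pi*cos(lat); the
    midpoint-rule sum over latitude should equal the sphere's
    surface area, 4*pi.
    """
    d_lat = math.pi / rows
    area = 0.0
    for i in range(rows):
        lat = -math.pi / 2 + (i + 0.5) * d_lat
        area += 2 * math.pi * math.cos(lat) * d_lat
    return area

print(sinusoidal_area())   # close to 4*pi, about 12.566
```

The projected area equals the sphere's area, yet the bounding rectangle that must be stored is unchanged, which is why a further compression step is desired.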
What is desired is compression of a pseudo-cylindrical projection for use with Virtual Reality (VR) systems and other panoramic image systems. A compressed pseudo-cylindrical projection that can be used when storing and transmitting panoramic images is desired. Reduction in storage requirements of a panoramic image is desirable.