The present invention relates to the field of panoramic media capture and, more particularly, to improving stereographic image detail uniformity of a panoramic video capture device having an annular optical gradient.
Panoramic cameras often utilize traditional image sensors to capture a 360 degree view of a real world environment. These image sensors produce a stereographic projection of the real world view as an image. For example, images captured with a 360 degree camera appear as a “little world” doughnut image (e.g., a circular image). The characteristics of the projection include resolution, image size, and the like. One property of the projection is often a non-uniform distribution of image detail per sensor capture region, which results from environmental light being distorted before being directed to an image capture sensor. In other words, the intensity of the light (or the amount of image detail contained within a square unit of light) striking the image capture sensor is non-uniform.
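To make the notion of a stereographic projection concrete, the sketch below uses the standard stereographic fisheye mapping r = 2f·tan(θ/2) to relate a scene elevation angle to an image radius. The focal-length scale f and the mapping itself are illustrative assumptions, not the optics of any particular camera described herein:

```python
import math

def stereographic_radius(theta, f=1.0):
    """Image radius for a scene ray at angle theta (radians) from the
    optical axis, under the standard stereographic fisheye model
    r = 2 * f * tan(theta / 2).  Illustrative only; f is an assumed
    focal-length scale, not a parameter of any camera described herein."""
    return 2.0 * f * math.tan(theta / 2.0)

# Rays near the optical axis land near the image center; rays nearer the
# horizon land toward the edge, and equal angular steps map to unequal
# radial steps -- one source of non-uniform detail distribution.
```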
To explain by example, FIG. 1A (Prior Art) shows a panoramic camera 110 having an image sensor 111. Image sensor 111, which converts an optical image contained in light into an electronic signal, is conventionally designed for uniform sensor sensitivity 112 per area. That is, the amount of data able to be captured from light striking the image sensor 111 is uniform regardless of which region of the overall image sensor 111 the light strikes. This convention is proper when the light being directed to the image sensor 111 is of uniform intensity (having uniform image detail) relative to the environmental images being captured.
When the intensity of light is altered prior to being directed to the image sensor 111, however, problems may occur. For example, the panoramic camera 110 of FIG. 1A utilizes a parabolic reflector 109 to reflect light before that light is directed to the image sensor 111. The parabolic reflector 109 thus alters, in a non-uniform manner, the light's intensity (the amount of image data per unit of light) striking regions of the image sensor 111. The end result is that an annular optical gradient is applied to the intensity of the light. Conventional software that digitally processes the electronic signals representing the image produces an image with differing density (more image detail shown in some portions of the image than in others, relative to the image detail of the real world object) as a result of the annular optical gradient applied to the intensity of the light striking the image sensor 111.
The donut image 115 produced from the image data captured by the image sensor 111 has a non-uniform resolution (e.g., image detail density), where resolution refers to an amount of detail that the donut image 115 holds. The donut image 115 as conventionally produced can have a high image-detail density 113 at the edges and low image-detail density 114 towards the center of the image, as shown.
It should be appreciated that there can be two basic images with associated densities that vary depending on a reference plane. One reference plane is the reference plane at the image sensor, referred to as the sensor reference plane. Another reference plane is the reference plane of an unwrapped image, referred to as the stereographic reference plane.
At the sensor reference plane, the captured image (115) has uniform pixel density per area, since pixel density is defined by the sensor density, which for sensor 111 is uniform (112). That is, the amount of data recorded per area digitally represented in the electronic signal of the image produced by the image sensor 111 is uniform, assuming a uniform geometry of the image sensor, meaning the sensor has uniform sensitivity. This does not mean that the same level of image detail per area is captured, since the annular optical gradient distorted the image of the light striking the image sensor 111. Information loss can occur at this sensor reference plane, as the intensity of the light (image detail per square unit of light) striking the sensor 111 is not uniform. Of course, some loss always occurs when converting an optical image (which is analog) to an electronic signal (which digitally represents data of the analog image). When light striking the image sensor 111 is of uniform intensity (the same level of image detail per square unit of light), the resulting loss in the sensor reference plane is also uniform. In panoramic camera 110, however, because of the annular optical gradient, the loss at the sensor reference plane is non-uniform, as areas of the image sensor 111 converting light having greater image detail per square unit experience a greater data loss per area during the analog to digital (A/D) conversion than other areas of the image sensor 111 that convert light of a lesser intensity. Pixel degradation curve 116 is taken from the perspective of the sensor reference plane and represents an effective degradation over area, i.e., the loss of information in producing the donut image 115. The loss at the edges of the donut image 115, where light intensity was greater due to the annular optical gradient, is higher than the loss at the center. Different optics of the panoramic camera will produce different pixel degradation curves.
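The non-uniform A/D loss just described can be illustrated with a toy numerical model. In the sketch below, the sensor records at most one unit of detail per area, and the annular gradient is assumed to increase detail density linearly toward the image edge; both the capacity and the gradient are hypothetical values chosen for illustration, not measurements of camera 110:

```python
def ad_loss(detail_density, sensor_capacity=1.0):
    """Detail lost per area when a sensor of fixed recording capacity
    digitizes light carrying more detail than it can record.
    A deliberately simple model: loss is the excess over capacity."""
    return max(0.0, detail_density - sensor_capacity)

# Hypothetical annular gradient: detail density grows toward the edge.
radii = [i / 10 for i in range(11)]            # normalized radius 0..1
densities = [0.5 + 1.5 * r for r in radii]     # assumed linear gradient
degradation_curve = [ad_loss(d) for d in densities]
# Loss is zero near the center and greatest at the rim, mirroring the
# general shape of a pixel degradation curve such as curve 116.
```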
The second reference plane is the stereographic reference plane. In this plane, the electronic data (of uniform pixel density) is digitally processed so that the optical distortions caused by the parabolic reflector 109 are mathematically reversed. In other words, the “raw” donut image 115 is unwrapped to create a flattened image. The stereographic reference plane reverses the effects of the annular optical gradient (the bending of light by the parabolic reflector 109) to provide a pixelated processed image.
To give an imperfect analogy, the stereographic reference plane is effectively the equivalent of taking a rectangular drawing made in pen on an uninflated balloon. When the balloon is inflated, the image is distorted non-uniformly. If the drawing was created using uniform pixels, the portions of the balloon that are most inflated will have pixels spaced further apart than the portions of the balloon that are less distorted by the inflating. The “edges” of the drawing, therefore, have more space per pixel. Since the flat image is still a pixelated image having a uniform distribution of pixels, a smoothing function adding additional pixels to fill in the spaces at the outer edge needs to be applied. Thus, the flat image resulting from unwrapping the donut image 115 requires more pixels to be added to those regions that experienced the greatest loss during the A/D conversion due to the annular optical gradient. The portions of the donut image 115 attempting to compress the greatest amount of data into the same relative space (high density region 113), which represent the greatest amount of A/D loss, will need more pixels added per area when unwrapped into a flat image compared to the region of the donut image 115 labeled as low density, which represents the lowest amount of A/D loss.
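The unwrapping from the sensor reference plane to the stereographic reference plane can be sketched as a polar-to-rectangular resampling. The function below is a hypothetical minimal implementation: it samples the annular donut image along radial lines with nearest-neighbour lookup, standing in for the smoothing/interpolation that a production pipeline would apply:

```python
import math

def unwrap_donut(donut, r_inner, r_outer, out_w, out_h):
    """Resample an annular ('donut') image, given as a square 2-D list of
    gray values, into a flat out_h x out_w panorama.  Hypothetical sketch:
    nearest-neighbour sampling stands in for a real smoothing function."""
    size = len(donut)
    cx = cy = size / 2.0                       # assume the annulus is centered
    flat = [[0] * out_w for _ in range(out_h)]
    for y in range(out_h):
        # Top rows are taken from the outer rim, where the annular
        # optical gradient packed the most image detail per area.
        r = r_outer - (r_outer - r_inner) * y / max(out_h - 1, 1)
        for x in range(out_w):
            ang = 2.0 * math.pi * x / out_w
            sx = int(round(cx + r * math.cos(ang)))
            sy = int(round(cy + r * math.sin(ang)))
            if 0 <= sx < size and 0 <= sy < size:
                flat[y][x] = donut[sy][sx]
    return flat
```

A production pipeline would replace the nearest-neighbour lookup with bilinear or bicubic interpolation, which plays the role of the smoothing function that adds pixels to fill the widely spaced outer-rim samples.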
This problem resulting from annular optical distortions, as detailed herein, is not known to be widely recognized outside this disclosure, nor are solutions such as those presented herein known to exist in the prior art.