A panoramic camera, which captures a 360° view of scenic places such as tourist resorts, is an example of a panoramic imaging system. A panoramic imaging system captures the view one could obtain by making one complete turn from a given spot. An omnidirectional imaging system, on the other hand, captures the view of every possible direction from a given spot; it provides the view a person could observe from a given position by turning around as well as looking up and down. In mathematical terminology, the solid angle of the region that can be captured by such an imaging system is 4π steradians.
There have been many studies and developments of panoramic imaging systems, not only in traditional areas such as photographing buildings, nature scenes, and heavenly bodies, but also in security and surveillance systems using CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) cameras, in virtual tours of real estate, hotels, and tourist resorts, and in navigational aids for mobile robots and unmanned aerial vehicles (UAVs).
As a viable method of obtaining panoramic images, catadioptric panoramic imaging systems, which are imaging systems employing both mirrors and refractive lenses, are being actively researched. Shown in FIG. 1 is a schematic diagram of a general catadioptric panoramic imaging system. As schematically shown in FIG. 1, a catadioptric panoramic imaging system (100) of the prior art includes as constituent elements a rotationally symmetric panoramic mirror (111), of which the cross-sectional profile is close to a hyperbola, a lens (112) that is located on the rotational-symmetry axis (101) of the mirror (111) and oriented toward the said mirror (111), and an image acquisition means that includes a camera body (114) having an image sensor (113) inside. An incident ray (105) having an altitude angle δ, which originates from any of the 360° directions around the mirror and propagates toward the rotational-symmetry axis (101), is reflected at a point M on the mirror surface (111) and captured by the image sensor (113) as a reflected ray (106) having a zenith angle θ with respect to the rotational-symmetry axis (101). Here, the altitude angle refers to an angle measured from the ground plane (i.e., the X-Y plane) toward the zenith. FIG. 2 is a conceptual drawing of an exemplary rural landscape obtainable using the catadioptric panoramic imaging system (100) of the prior art schematically shown in FIG. 1. As illustrated in FIG. 2, a photographic film or an image sensor (213) has a square or a rectangular shape, while a panoramic image (233) obtained using a panoramic imaging system (100) has an annular shape. The non-hatched region in FIG. 2 constitutes the panoramic image, and the hatched circle in the center corresponds to the area at the backside of the camera, which is not captured because the camera body occludes its view. Within this circle lies the image of the camera itself reflected by the mirror (111).
On the other hand, the hatched regions at the four corners originate from the fact that the diagonal field of view of the camera lens (112) is larger than the field of view of the panoramic mirror (111). In these regions lies the image of the area in front of the camera that would be observable in the absence of the panoramic mirror. FIG. 3 is an exemplary unwrapped panoramic image (334) obtained from the ring-shaped panoramic image (233) by cutting along the cutting-line (233c) and converting it into a perspectively normal view using image processing software.
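The unwrapping operation described above amounts to a polar-to-rectangular resampling of the annular image. The following Python sketch illustrates the idea; the function name, the nearest-neighbor sampling, and the convention that the top row of the output corresponds to the outer rim are illustrative assumptions, not part of the referenced prior-art systems.

```python
import numpy as np

def unwrap_annular(raw, cx, cy, r1, r2, width):
    """Unwrap an annular panoramic image into a rectangular panorama.

    raw:      2-D (grayscale) source image containing the ring-shaped panorama.
    (cx, cy): pixel coordinates of the ring center (intersection with the optical axis).
    r1, r2:   inner and outer radii of the ring in pixels.
    width:    lateral size of the unwrapped image (one column per azimuth sample).
    """
    height = int(round(r2 - r1))
    out = np.zeros((height, width), dtype=raw.dtype)
    for u in range(width):
        psi = 2.0 * np.pi * u / width           # azimuth angle for this column
        for v in range(height):
            r = r2 - v                          # top row = outer rim
            x = int(round(cx + r * np.cos(psi)))
            y = int(round(cy + r * np.sin(psi)))
            if 0 <= y < raw.shape[0] and 0 <= x < raw.shape[1]:
                out[v, u] = raw[y, x]           # nearest-neighbor lookup
    return out
```

In practice a production implementation would use vectorized remapping and interpolation rather than per-pixel loops, but the geometric mapping is the same.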
For the unwrapped panoramic image (334) in FIG. 3 to appear natural to the naked eye, the raw panoramic image (233) prior to the unwrapping operation must be captured by a panoramic lens following a certain projection scheme. Here, a panoramic lens refers to a complex lens comprised of a panoramic mirror (111) and a refractive lens (112). FIG. 4 is a conceptual drawing of an object plane (431) employed in a panoramic imaging system following a rectilinear projection scheme, and FIG. 5 is a conceptual drawing of a raw panoramic image (533) obtained by capturing the scene on the object plane of FIG. 4 using the said panoramic imaging system. In such a rectilinear panoramic imaging system, a cylindrical object plane (131, 431) is assumed, of which the rotational symmetry axis coincides with the optical axis of the panoramic lens. In FIG. 4, it is preferable that the rotational symmetry axis of the cylindrical object plane (431) is perpendicular to the ground plane (417).
Referring to FIG. 1 and FIG. 4, the radius of the said object plane (131) is S, and the panoramic lens comprised of a panoramic mirror (111) and a refractive lens (112) forms the image of an object point (104) lying on the said object plane (131), in other words, the image point P, on the focal plane (132). To obtain a sharp image, the sensor plane (113) of the image sensor must coincide with the said focal plane (132). A ray (106) that arrives at the said image point P is first reflected at a point M on the panoramic mirror (111) and then passes through the nodal point N of the refractive lens (112). Here, the nodal point is the position of the pinhole when the camera is approximated as an ideal pinhole camera. The distance from the nodal point to the focal plane (132) is approximately equal to the effective focal length f of the refractive lens (112). For simplicity of argument, we will refer to the ray (105) before the reflection at the mirror as the incident ray, and the ray after the reflection as the reflected ray (106). If the reflected ray has a zenith angle θ with respect to the optical axis (101) of the camera, then the distance r from the center of the sensor plane (113), in other words, the intersection point O between the sensor plane (113) and the optical axis (101), to the point P on the image sensor plane, whereon the reflected ray (106) is captured, is given by Eq. 1.

r = f tan θ  [Math Figure 1]
For a panoramic lens following a rectilinear projection scheme, the height in the object plane (131), in other words, the distance Z measured parallel to the optical axis, is proportional to the distance r on the sensor plane. The axial radius of the point M on the panoramic mirror surface (111), whereon the reflection has occurred, is ρ, and its height is z, while the axial radius of the corresponding point (104) on the object plane (131) is S, and its height is Z. Since the altitude angle of the said incident ray (105) is δ, the height Z of the said object point is given by Eq. 2.

Z = z + (S − ρ)tan δ  [Math Figure 2]
If the distance from the camera to the object plane is large compared to the size of the camera (i.e., S >> ρ, Z >> z), then Eq. 2 can be approximated as Eq. 3.

Z ≅ S tan δ  [Math Figure 3]
Therefore, if the radius S of the object plane is fixed, then the height of the object (i.e., the object size) is proportional to tan δ, and the axial radius of the corresponding image point (i.e., the image size) on the focal plane is proportional to tan θ. If tan δ is proportional to tan θ in this manner, then the image of the object on the object plane is captured on the image sensor with its vertical proportions preserved. Incidentally, referring to FIG. 1, it can be noticed that both the altitude angle of the incident ray and the zenith angle of the reflected ray have upper bounds and lower bounds. If the range of the altitude angle of the incident ray is from δ1 to δ2(δ1≦δ≦δ2), and the range of the zenith angle of the reflected ray is from θ1 to θ2(θ1≦θ≦θ2), then the range of the corresponding object height on the object plane is from Z1=S tan δ1 to Z2=S tan δ2(Z1≦Z≦Z2), and the range of the axial radius of the image point on the focal plane is from r1=f tan θ1 to r2=f tan θ2(r1≦r≦r2). The projection scheme for these r and Z to be in proportion to each other is given by Eq. 4.
r(δ) = r1 + {(r2 − r1)/(tan δ2 − tan δ1)}(tan δ − tan δ1)  [Math Figure 4]
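Eq. 4 can be evaluated directly. The following Python sketch computes the sensor radius r(δ) for the rectilinear projection scheme; the function name and argument order are illustrative assumptions.

```python
import math

def image_radius(delta, delta1, delta2, r1, r2):
    """Sensor radius r(delta) for an incident ray with altitude angle delta,
    per the rectilinear projection scheme of Eq. 4 (all angles in radians).

    delta1, delta2: lower and upper bounds of the altitude angle.
    r1, r2:         corresponding inner and outer radii on the sensor plane.
    """
    t1, t2 = math.tan(delta1), math.tan(delta2)
    # Linear interpolation in tan(delta): tan(delta) proportional to r.
    return r1 + (r2 - r1) / (t2 - t1) * (math.tan(delta) - t1)
```

At δ = δ1 the function returns r1, and at δ = δ2 it returns r2, as the projection scheme requires.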
Therefore, the most natural panoramic image can be obtained when the panoramic lens implements the rectilinear projection scheme given by Eq. 4. One disadvantage of such a panoramic imaging system is that a considerable number of pixels in the image sensor are left unused. FIG. 6 is a schematic diagram illustrating the degree of pixel utilization on the sensor plane (613) of an image sensor having the standard 4:3 aspect ratio. Image sensors with the ratio of the lateral side B to the longitudinal side V equal to 1:1 or 16:9 are few in kind and expensive, and most image sensors are manufactured with a ratio of 4:3. Assuming an image sensor having the 4:3 aspect ratio, the area A1 of the image sensor plane (613) is given by Eq. 5.

A1 = BV = (4/3)V2  [Math Figure 5]
On such an image sensor plane, the panoramic image (633) is formed between the outer rim (633b) and the inner rim (633a) of an annular region, where the two rims constitute concentric circles. Here, the said sensor plane (613) coincides with a part of the focal plane (632) of the lens, and the said panoramic image (633) occupies a part of the sensor plane (613). In FIG. 6, the outer radius of the panoramic image (633) is r2, and the inner radius is r1. Therefore, the area A2 of the panoramic image is given by Eq. 6.

A2 = π(r22 − r12)  [Math Figure 6]
Referring to FIG. 2 and FIG. 5, the height of the unwrapped panoramic image is given by the difference between the outer radius and the inner radius, in other words, by r2−r1. On the other hand, the lateral dimension of the unwrapped panoramic image is given by 2πr1 or 2πr2, depending on which radius is taken as a base. Therefore, the outer radius r2 and the inner radius r1 must have an appropriate ratio, and 2:1 can be considered as a proper ratio. Furthermore, to make the maximum use of pixels, it is desirable that the outer rim (633b) of the panoramic image contacts the lateral sides of the image sensor plane (613). Therefore, it is preferable that r2=(1/2)V, and r1=(1/2)r2=(1/4)V. Under these conditions, the area of the panoramic image (633) is given by Eq. 7.
A2 = π{((1/2)V)2 − ((1/4)V)2} = (3π/16)V2  [Math Figure 7]
Therefore, the ratio between the area A2 of the panoramic image (633) and the area A1 of the image sensor plane (613) is given by Eq. 8.
A2/A1 = ((3π/16)V2)/((4/3)V2) = 9π/64 ≅ 0.442  [Math Figure 8]
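The pixel-utilization figure of Eq. 8 can be checked numerically. In the following Python sketch, the value V = 480 is an arbitrary illustrative sensor height; the final ratio is independent of V.

```python
import math

V = 480.0                        # longitudinal side of a hypothetical 4:3 sensor
B = 4.0 / 3.0 * V                # lateral side
A1 = B * V                       # sensor area (Eq. 5)

r2 = V / 2.0                     # outer radius touching the lateral sides
r1 = r2 / 2.0                    # inner radius at the chosen 2:1 ratio
A2 = math.pi * (r2**2 - r1**2)   # annular panoramic image area (Eq. 6)

ratio = A2 / A1                  # Eq. 8: equals 9*pi/64, about 0.442
```

The computed ratio confirms that fewer than half of the sensor pixels carry panoramic image content under these assumptions.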
Thus, the percentage of pixel utilization is less than 50%, and the panoramic imaging systems of the prior art have a disadvantage in that pixels are not used efficiently.
Another method of obtaining a panoramic image is to employ a fisheye lens with a wide field of view (FOV). For example, the entire sky and the horizon can be captured in a single image by pointing a camera equipped with a fisheye lens with 180° FOV toward the zenith (i.e., with the optical axis of the camera aligned perpendicular to the ground plane). For this reason, fisheye lenses have often been referred to as "all-sky lenses". In particular, a high-end fisheye lens by Nikon, namely the 6 mm f/5.6 Fisheye-Nikkor, has a FOV of 220°. Therefore, a camera mounted with this lens can capture even a portion of the scene behind the camera as well as the scene in front. A panoramic image can then be obtained from the fisheye image thus acquired by the same methods as illustrated in FIG. 2 and FIG. 3.
In many cases, imaging systems are installed on vertical walls. Imaging systems installed on the outside walls of a building for the purpose of monitoring the surroundings, or a rear-view camera for monitoring the backside of a passenger car, are such examples. In such cases, it is inefficient if the horizontal field of view is significantly larger than 180°. This is because a wall, which does not need to be monitored, takes up a large space in the monitor screen, pixels are wasted, and the screen appears dull. Therefore, a horizontal FOV around 180° is more appropriate for such cases. Nevertheless, a fisheye lens with 180° FOV is not desirable for such applications, because the barrel distortion that accompanies a fisheye lens causes psychological discomfort and is disliked by consumers.
An example of an imaging system that can be installed on an interior wall for the purpose of monitoring the entire room is given by a pan•tilt•zoom camera. Such a camera is comprised of a video camera, equipped with an optical zoom lens, mounted on a pan•tilt stage. Pan is an operation of rotating in the horizontal direction by a given angle, and tilt is an operation of rotating in the vertical direction by a given angle. In other words, if we assume that the camera is at the center of a celestial sphere, then pan is an operation of changing the longitude, and tilt is an operation of changing the latitude. Therefore, the theoretical range of the pan operation is 360°, and the theoretical range of the tilt operation is 180°. The shortcomings of a pan•tilt•zoom camera include high price, large size, and heavy weight. An optical zoom lens is large, heavy, and expensive due to the difficulty of its design and its complicated structure. Also, a pan•tilt stage is an expensive device, no cheaper than a camera. Therefore, it costs a considerable sum of money to install a pan•tilt•zoom camera. Furthermore, since a pan•tilt•zoom camera is large and heavy, this fact can become a serious impediment to certain applications. Examples of such cases include airplanes, where the weight of the payload is of critical importance, or installations where a strict size limitation exists because the camera must fit in a confined space. Furthermore, a pan•tilt•zoom operation takes time because it is a mechanical operation. Therefore, depending on the particular application at hand, such a mechanical response may not be fast enough.
References 1 and 2 provide fundamental technologies for extracting an image having a particular viewpoint or projection scheme from an image having a different viewpoint or projection scheme. Specifically, reference 2 provides an example of a cubic panorama. In short, a cubic panorama is a special technique of illustration wherein the observer is assumed to be located at the very center of an imaginary cubic room made of glass, and the outside view from the center of the glass room is directly transcribed onto the region of the glass wall whereon the ray vector from the object to the observer meets the glass wall. Furthermore, an example of a more advanced technology is provided in the above reference, wherewith reflections from an arbitrarily shaped mirrored surface can be calculated. Specifically, the author of reference 2 created an imaginary lizard having a highly reflective mirror-like skin, as if made of a metal surface, then set up an observer's viewpoint separated from the lizard, and calculated the view of the imaginary environment reflected on the lizard's skin from the viewpoint of the imaginary observer. However, the environment was not a real environment captured by an optical lens, but a computer-created imaginary environment captured with an imaginary distortion-free pinhole camera.
On the other hand, an imaging system is described in reference 3 that is able to perform pan•tilt•zoom operations without any physically moving parts. The said invention uses a camera equipped with a fisheye lens with more than 180° FOV in order to take a picture of the environment. Then the user designates a principal direction of vision using a device such as a joystick, upon which the computer extracts from the fisheye image the rectilinear image that would be obtained by pointing a distortion-free camera in that particular direction. The main difference between this invention and the prior art is that this invention creates a rectilinear image corresponding to the particular direction the user has designated using a device such as a joystick or a computer mouse. Such a technology is essential in the field of virtual reality, or when it is desirable to replace a mechanical pan•tilt•zoom camera, and the keyword is "interactive picture". In this technology, there are no physically moving parts in the camera. As a consequence, the system response is fast, and there is less chance of mechanical failure.
Ordinarily, when an imaging system such as a security camera is installed, a cautionary measure is taken so that vertical lines perpendicular to the horizontal plane also appear vertical in the acquired image. In such a case, vertical lines still appear vertical even as a mechanical pan•tilt•zoom operation is performed. In the said invention, on the other hand, vertical lines generally do not appear as vertical lines after a software pan•tilt•zoom operation has been performed. To remedy such an unnatural result, a rotate operation is additionally performed, which has no counterpart in a mechanical pan•tilt•zoom camera. Furthermore, the said invention does not provide the exact rotation angle that is needed in order to display vertical lines as vertical lines. Therefore, the exact rotation angle must be found by trial and error.
Furthermore, the said invention assumes that the projection scheme of the fisheye lens is an ideal equidistance projection scheme. However, the real projection scheme of a fisheye lens generally shows a considerable deviation from an ideal equidistance projection scheme. Since the said invention does not take into account the distortion characteristics of a real lens, images obtained after image processing still show distortion.
The invention described in reference 4 remedies a shortcoming of the invention described in reference 3, namely its inability to take into account the real projection scheme of the fisheye lens used during image processing. Nevertheless, the defect of not showing vertical lines as vertical lines on the monitor screen has not been resolved.
From another point of view, all animals and plants, including humans, are bound to the surface of the earth by the gravitational pull, and most events that need attention or cautionary measures take place near the horizon. Therefore, even though it is necessary to monitor every 360° direction along the horizon, it is not as important to monitor high along the vertical direction, for example, as high as the zenith or as deep as the nadir. Distortion is unavoidable if we want to describe the scene in every 360° direction on a two-dimensional plane. A similar difficulty exists in cartography, where the geography of the earth, which is a structure on the surface of a sphere, needs to be mapped on a planar two-dimensional atlas. Among all the distortions, the distortion that appears most unnatural to people is the one where vertical lines appear as curved lines. Therefore, even if other kinds of distortion are present, it is important to make sure that this particular distortion is absent.
Reference 5 describes the well-known map projection schemes, such as the equi-rectangular projection, the Mercator projection, and the cylindrical projection schemes, and reference 6 provides a brief history of diverse map projection schemes. Among these, the equi-rectangular projection scheme is the projection scheme most familiar to us when we describe the geography of the earth, or when we draw the celestial sphere in order to make a map of the constellations.
Referring to FIG. 7, if we assume the surface of the earth is a spherical surface with a radius S, then an arbitrary point Q on the earth's surface has a longitude Ψ and a latitude δ. On the other hand, FIG. 8 is a schematic diagram of a planar map drawn according to the equi-rectangular projection scheme. A point Q on the earth's surface having a longitude Ψ and a latitude δ has a corresponding point P on the planar map (834) drawn according to the equi-rectangular projection scheme. The rectangular coordinate of this corresponding point is given as (x, y). Furthermore, the reference point on the equator having a longitude 0° and a latitude 0° has a corresponding point O on the planar map, and this corresponding point O is the origin of the rectangular coordinate system. Here, according to the equi-rectangular projection scheme, the same interval in longitude (i.e., the same angular distance along the equator) corresponds to the same lateral interval on the planar map. In other words, the lateral coordinate x on the planar map (834) is proportional to the longitude, as given in Eq. 9.

x = cΨ  [Math Figure 9]
Here, c is a proportionality constant. Also, the longitudinal coordinate y is proportional to the latitude, with the same proportionality constant as the lateral coordinate.

y = cδ  [Math Figure 10]
The span of the longitude is 360°, ranging from −180° to +180°, and the span of the latitude is 180°, ranging from −90° to +90°. Therefore, a map drawn according to the equi-rectangular projection scheme must have a width W to height H ratio of 360:180 = 2:1. Furthermore, if the proportionality constant c is given as the radius S of the earth, then the width of the said planar map is given as the perimeter of the earth measured along the equator, as given in Eq. 11.

W = 2πS  [Math Figure 11]
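Eqs. 9 through 11 translate directly into code. The following Python sketch maps a longitude/latitude pair to equi-rectangular map coordinates; the function name and the default radius (roughly the earth's mean radius in kilometers) are illustrative assumptions.

```python
import math

def equirectangular(psi_deg, delta_deg, S=6371.0):
    """Map longitude psi and latitude delta (in degrees) to planar (x, y)
    per Eqs. 9-10, taking the proportionality constant c equal to the
    sphere radius S. With c = S, the full map width is 2*pi*S (Eq. 11)."""
    c = S
    x = c * math.radians(psi_deg)   # x = c * psi  (Eq. 9)
    y = c * math.radians(delta_deg) # y = c * delta (Eq. 10)
    return x, y
```

For example, the point at longitude +180° maps to x = πS, so the lateral span from −180° to +180° is 2πS, in agreement with Eq. 11.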
Such an equi-rectangular projection scheme appears as a natural projection scheme considering the fact that the earth's surface is close to the surface of a sphere. Nevertheless, it is disadvantageous in that the size of a geographical area is greatly distorted. For example, two very close points near the North Pole can appear as if they are on the opposite sides of the earth in a map drawn according to the equi-rectangular projection scheme.
On the other hand, in a map drawn according to the Mercator projection scheme, the longitudinal coordinate is given as a complex function given in Eq. 12.
y = c ln{tan(π/4 + δ/2)}  [Math Figure 12]
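For comparison, the Mercator longitudinal coordinate of Eq. 12 can be sketched as follows; the function name and the degree-based interface are illustrative assumptions.

```python
import math

def mercator_y(delta_deg, c=1.0):
    """Longitudinal Mercator coordinate of Eq. 12 for latitude delta
    (in degrees); valid for |delta| < 90 degrees."""
    d = math.radians(delta_deg)
    return c * math.log(math.tan(math.pi / 4.0 + d / 2.0))
```

The coordinate is zero at the equator and diverges toward the poles, which is why Mercator maps cannot show latitudes of ±90°.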
On the other hand, FIG. 9 is a conceptual drawing of a cylindrical projection scheme or a panoramic perspective. In a cylindrical projection scheme, an imaginary observer is located at the center N of a celestial sphere (931) with a radius S, and it is desired to make a map of the celestial sphere centered on the observer, the map covering most of the region excluding the zenith and the nadir. In other words, the span of the longitude must be 360° ranging from −180° to +180°, but the range of the latitude can be narrower including the equator within its span. Specifically, the span of the latitude can be assumed as ranging from −Δ to +Δ, where Δ must be smaller than 90°.
In this projection scheme, a hypothetical cylindrical plane (934) is assumed which contacts the celestial sphere at the equator (903). Then, for a point Q(Ψ, δ) on the celestial sphere (931) having a given longitude Ψ and a latitude δ, a line segment connecting the center of the celestial sphere and the point Q is extended until it meets the said cylindrical plane. This intersection point is designated as P(Ψ, δ). In this manner, the corresponding point P on the cylindrical plane (934) can be obtained for every point Q on the celestial sphere (931) within the said latitude range. Then, a map having a cylindrical projection scheme is obtained by cutting the cylindrical plane and laying it flat on a planar surface. The lateral coordinate x of the point P on the flattened-out cylindrical plane is given by Eq. 13, and the longitudinal coordinate y is given by Eq. 14.

x = SΨ  [Math Figure 13]

y = S tan δ  [Math Figure 14]
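Eqs. 13 and 14 can be sketched directly. The following Python function maps a celestial point Q(Ψ, δ) to the flattened cylindrical plane; the function name and the degree-based interface are illustrative assumptions.

```python
import math

def cylindrical(psi_deg, delta_deg, S=1.0):
    """Point P on the flattened cylinder for celestial point Q(psi, delta),
    per Eqs. 13-14; valid for |delta| < 90 degrees (the projecting line
    never meets the cylinder at the zenith or the nadir)."""
    x = S * math.radians(psi_deg)           # x = S * psi   (Eq. 13)
    y = S * math.tan(math.radians(delta_deg))  # y = S * tan(delta) (Eq. 14)
    return x, y
```

Note that y grows without bound as δ approaches ±90°, which is why the latitude span Δ of such a map must be strictly smaller than 90°.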
Such a cylindrical projection scheme is the natural projection scheme for a panoramic camera that produces a panoramic image by rotating in the horizontal plane. In particular, if the lens mounted on the rotating panoramic camera is a distortion-free rectilinear lens, then the resulting panoramic image exactly follows a cylindrical projection scheme. In principle, such a cylindrical projection scheme is the most accurate panoramic projection scheme. However, the panoramic image appears unnatural when the latitudinal range is large, and thus this scheme is not widely used in practice.
The unwrapped panoramic image thus produced, having a cylindrical projection scheme, has a lateral width W given by Eq. 11. On the other hand, if the range of the latitude is from δ1 to δ2, then the longitudinal height of the unwrapped panoramic image is given by Eq. 15.

H = S(tan δ2 − tan δ1)  [Math Figure 15]
Therefore, the following equation can be derived from Eq. 11 and Eq. 15.
W/H = 2π/(tan δ2 − tan δ1)  [Math Figure 16]
Therefore, an unwrapped panoramic image following a cylindrical projection scheme must satisfy Eq. 16.
FIG. 10 is an example of an unwrapped panoramic image given in reference 7, and FIG. 11 is an example of an unwrapped panoramic image given in reference 8. FIGS. 10 and 11 have been acquired using panoramic lenses following rectilinear projection schemes, or in the terminology of cartography, using panoramic lenses following cylindrical projection schemes. Therefore, in the panoramic images of FIG. 10 and FIG. 11, the longitudinal coordinate y is proportional to tan δ. On the other hand, by the structure of panoramic lenses, the lateral coordinate x is proportional to the longitude Ψ. Therefore, except for the proportionality constant, Eqs. 13 and 14 are satisfied.
In the example of FIG. 10, the lateral size is 2192 pixels, and the longitudinal size is 440 pixels. Therefore, 4.98 is obtained by calculating the LHS (left hand side) of Eq. 16. In FIG. 10, the range of the vertical incidence angle is from δ1=−70° to δ2=50°. Therefore, 1.60 is obtained by calculating the RHS (right hand side) of Eq. 16. Thus, the exemplary panoramic image in FIG. 10 does not satisfy the proportionality relation given by Eq. 16. On the other hand, in the example of FIG. 11, the lateral size is 2880 pixels, and the longitudinal size is 433 pixels. Therefore, 6.65 is obtained by calculating the LHS of Eq. 16. In FIG. 11, the range of the vertical incidence angle is from δ1=−23° to δ2=23°. Therefore, 7.40 is obtained by calculating the RHS of Eq. 16. Thus, although the error may be less than that of FIG. 10, still the exemplary panoramic image in FIG. 11 does not satisfy the proportionality relation given by Eq. 16.
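The two numerical checks above can be reproduced as follows; the helper function is illustrative, and the figure dimensions and angle ranges are those quoted in the text.

```python
import math

def aspect_check(width_px, height_px, delta1_deg, delta2_deg):
    """Compare the measured W/H of an unwrapped panorama (the LHS of Eq. 16)
    with 2*pi/(tan(delta2) - tan(delta1)) (the RHS of Eq. 16)."""
    lhs = width_px / height_px
    rhs = 2.0 * math.pi / (math.tan(math.radians(delta2_deg))
                           - math.tan(math.radians(delta1_deg)))
    return lhs, rhs

# Dimensions and angle ranges quoted in the text:
fig10 = aspect_check(2192, 440, -70.0, 50.0)   # approximately (4.98, 1.60)
fig11 = aspect_check(2880, 433, -23.0, 23.0)   # approximately (6.65, 7.40)
```

Neither pair matches, confirming that both exemplary panoramic images deviate from the proportionality relation of Eq. 16, FIG. 11 far less so than FIG. 10.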
It can be noticed that the unwrapped panoramic images given in FIG. 10 and FIG. 11 appear as natural panoramic images despite the fact that they do not satisfy this proportionality relation. The reason is that, in a panoramic image, the phenomenon of a line perpendicular to the ground plane (i.e., a vertical line) appearing as a curved or slanted line is easily noticeable and causes viewer discomfort, whereas the phenomenon of the lateral and the vertical scales not matching each other is not unpleasant to the eye to the same degree, because a reference for comparing the horizontal and the vertical scales does not usually exist in the environment around the camera.
All the animals, plants and inanimate objects such as buildings on the earth are under the influence of gravity, and the direction of the gravitational force defines the vertical (i.e., up-down) direction. The ground plane is, in general, perpendicular to the gravitational force, although, needless to say, this is not so on a slanted ground. Therefore, the term "ground plane" actually refers to the horizontal plane, and the vertical direction is the direction perpendicular to the horizontal plane. Even if we refer to them as the ground plane, the lateral direction, and the longitudinal direction for the sake of simplicity in argument, whenever the exact meaning of a term needs to be clarified, the ground plane must be understood as the horizontal plane, the vertical direction as the direction perpendicular to the horizontal plane, and the horizontal direction as a direction parallel to the horizontal plane.
Panoramic lenses described in references 7 and 8 take panoramic images in one shot, with the optical axes of the panoramic lenses aligned vertical to the ground plane. Incidentally, a cheaper alternative to the panoramic image acquisition method using the previously described camera with a horizontally-rotating lens consists of taking an image with an ordinary camera whose optical axis is horizontally aligned, then rotating the optical axis horizontally by a certain amount and taking another picture, and so on. Four to eight pictures are taken in this way, and a panoramic image with a cylindrical projection scheme can be obtained by seamlessly joining the consecutive pictures. Such a technique is called stitching. QuickTime VR from Apple Computer Inc. is commercial software supporting this stitching technology. This method requires a complex, time-consuming, and elaborate operation of precisely joining several pictures and correcting the lens distortion.
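The warp that makes such pure-rotation stitching possible can be sketched as follows; this is a minimal illustration of the standard cylindrical mapping, not the algorithm of any particular product, and the function name and the pixel-unit focal length f are assumptions.

```python
import math

def to_cylindrical(x, y, f):
    """Map rectilinear image coordinates (x, y), measured in pixels from the
    principal point, onto a cylinder of radius f (the focal length in pixels).
    Each shot is warped this way before the shots are joined side by side."""
    x_cyl = f * math.atan2(x, f)        # arc length = f * longitude
    y_cyl = f * y / math.hypot(x, f)    # height projected onto the cylinder
    return x_cyl, y_cyl

# A camera panned by d_psi radians between shots shifts the warped image by
# f * d_psi pixels, so the warped shots align under pure horizontal translation.
```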
According to reference 9, another method of obtaining a panoramic or an omnidirectional image is to take one hemispherical image by horizontally pointing a camera equipped with a fisheye lens with more than 180° FOV, and then to point the camera in the exact opposite direction and take another hemispherical image. By stitching the two images with appropriate software, one omnidirectional image covering every direction (i.e., 4π steradians) can be obtained. By sending the image thus obtained to a geographically separated remote user over communication means such as the Internet, the user can select his own viewpoint within the received omnidirectional image according to his personal interest, and image processing software on the user's computing device can extract a partial image corresponding to the selected viewpoint and display a perspectively correct planar image. Therefore, using the image processing software, the user can choose to turn around (pan), look up or down (tilt), or take a close (zoom in) or a remote (zoom out) view, as if the user were actually present at the specific place in the image. This method has the distinctive advantage that multiple users accessing the same Internet site can each look along the direction of his or her own choice. This advantage cannot be enjoyed in a panoramic imaging system employing a motion camera such as a pan-tilt camera.
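The viewpoint extraction described above can be sketched as a per-pixel lookup; this is a minimal sketch assuming the omnidirectional image is stored in the common equirectangular format (longitude along x, latitude along y), with the function name and coordinate conventions being illustrative assumptions, not the method of reference 9.

```python
import math

def equirect_to_view_dir(px, py, out_w, out_h, hfov, pan, tilt):
    """For pixel (px, py) of a virtual rectilinear view (pan and tilt in
    radians, horizontal field of view hfov), return the (longitude, latitude)
    that the pixel sees, i.e. the lookup direction into an equirectangular
    omnidirectional image."""
    f = (out_w / 2.0) / math.tan(hfov / 2.0)   # pinhole focal length, pixels
    x = px - (out_w - 1) / 2.0                 # right of the view axis
    y = (out_h - 1) / 2.0 - py                 # above the view axis
    # Unit ray in the camera frame (z forward), then tilt about x, pan about y.
    n = math.sqrt(x * x + y * y + f * f)
    dx, dy, dz = x / n, y / n, f / n
    ct, st = math.cos(tilt), math.sin(tilt)
    dy, dz = ct * dy + st * dz, -st * dy + ct * dz
    cp, sp = math.cos(pan), math.sin(pan)
    dx, dz = cp * dx + sp * dz, -sp * dx + cp * dz
    lon = math.atan2(dx, dz)                   # longitude psi
    lat = math.asin(max(-1.0, min(1.0, dy)))   # latitude delta
    return lon, lat
```

Repeating this for every output pixel and sampling the source image at the resulting (longitude, latitude) yields the perspectively correct planar view; panning, tilting, and zooming amount to changing pan, tilt, and hfov.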
References 10 and 11 describe a method of obtaining an omnidirectional image providing the views of every direction centered on the observer. Despite the lengthy description of the invention, however, the projection scheme provided by the said references is in essence a kind of equidistance projection scheme. In other words, the techniques described in these documents make it possible to obtain an omnidirectional image from a real environment or from a cubic panorama, but the obtained omnidirectional image follows an equidistance projection scheme only, and its usefulness is thus limited.
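For reference, an equidistance projection maps an incident ray with zenith angle θ to an image height proportional to θ itself (r = fθ), whereas a rectilinear lens gives r = f tan θ. The brief comparison below illustrates the difference; the focal length f = 1 is an illustrative assumption.

```python
import math

f = 1.0  # assumed (normalized) focal length
for theta_deg in (10, 30, 60, 80):
    theta = math.radians(theta_deg)
    r_equidistance = f * theta           # image height grows linearly in theta
    r_rectilinear = f * math.tan(theta)  # diverges as theta approaches 90 deg
    print(theta_deg, round(r_equidistance, 3), round(r_rectilinear, 3))
```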
On the other hand, reference 12 provides an algorithm for projecting an Omnimax movie on a semi-cylindrical screen using a fisheye lens. In particular, taking into account the fact that the projection scheme of a fisheye lens mounted on a movie projector deviates from an ideal equidistance projection scheme, a method is described for locating the position of the object point on the film corresponding to a given point on the screen whereon an image point is formed. Therefore, it is possible to calculate what image has to be on the film in order to project a particular image on the screen, and such an image on the film is produced using a computer. Since the lens distortion is already reflected in the image-processing algorithm, a spectator near the movie projector can enjoy a satisfactory panoramic image. Nevertheless, the real projection scheme of the fisheye lens in the said reference is inconvenient to use, because it has been modeled with the real image height on the film plane as the independent variable and the zenith angle of the incident ray as the dependent variable. Furthermore, the real projection scheme of the fisheye lens has been modeled, unnecessarily, with odd polynomials only.
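The alternative modeling choice can be sketched as follows: fit the measured image height r as a general polynomial in the zenith angle θ (the natural independent variable), and invert it numerically when θ must be recovered from r. The calibration samples, focal length, and distortion coefficients below are synthetic illustrative assumptions, not data from reference 12.

```python
import math
import numpy as np

# Hypothetical calibration samples: zenith angle theta (rad) vs. measured
# image height r (mm) for a fisheye that deviates from r = f * theta.
theta = np.linspace(0.0, math.radians(90), 19)
f = 1.75                                                  # assumed focal length, mm
r = f * theta * (1 - 0.08 * theta**2 + 0.02 * theta**3)   # synthetic measurements

# Model r as a function of theta with a general (not odd-only) polynomial.
coeffs = np.polyfit(theta, r, deg=4)
print("max fit error (mm):", np.abs(np.polyval(coeffs, theta) - r).max())

# Numerical inversion recovers theta for a given image height r0, so r never
# has to serve as the independent variable of the model itself.
r0 = 1.2
theta0 = float(np.interp(r0, r, theta))   # valid because r is monotonic here
print("theta for r = 1.2 mm:", math.degrees(theta0), "deg")
```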
Reference 13 provides examples of stereo panoramic images produced by Professor Paul Bourke. Each of the panoramic images follows a cylindrical projection scheme, and a panoramic image of an imaginary scene produced by a computer as well as a panoramic image produced by a rotating slit camera are presented. For panoramic images produced by a computer or by a traditional rotating slit camera, the lens distortion is not an important issue. However, a rotating slit camera cannot be used to take a real-time panoramic image (i.e., a movie) of the real world.
    [reference 1] J. F. Blinn and M. E. Newell, "Texture and reflection in computer generated images", Communications of the ACM, 19, 542-547 (1976).
    [reference 2] N. Greene, "Environment mapping and other applications of world projections", IEEE Computer Graphics and Applications, 6, 21-29 (1986).
    [reference 3] S. D. Zimmermann, "Omniview motionless camera orientation system", U.S. Pat. No. 5,185,667, date of patent Feb. 9, 1993.
    [reference 4] E. Gullichsen and S. Wyshynski, "Wide-angle image dewarping method and apparatus", U.S. Pat. No. 6,005,611, date of patent Dec. 21, 1999.
    [reference 5] E. W. Weisstein, "Cylindrical Projection", http://mathworld.wolfram.com/CylindricalProjection.html.
    [reference 6] W. D. G. Cox, "An introduction to the theory of perspective—part 1", The British Journal of Photography, 4, 628-634 (1969).
    [reference 7] G. Kweon, K. Kim, Y. Choi, G. Kim, and S. Yang, "Catadioptric panoramic lens with a rectilinear projection scheme", Journal of the Korean Physical Society, 48, 554-563 (2006).
    [reference 8] G. Kweon, Y. Choi, G. Kim, and S. Yang, "Extraction of perspectively normal images from video sequences obtained using a catadioptric panoramic lens with the rectilinear projection scheme", Technical Proceedings of the 10th World Multi-Conference on Systemics, Cybernetics, and Informatics, 67-75 (Orlando, Fla., USA, June, 2006).
    [reference 9] H. L. Martin and D. P. Kuban, "System for omnidirectional image viewing at a remote location without the transmission of control signals to select viewing parameters", U.S. Pat. No. 5,384,588, date of patent Jan. 24, 1995.
    [reference 10] F. Oxaal, "Method and apparatus for performing perspective transformation on visible stimuli", U.S. Pat. No. 5,684,937, date of patent Nov. 4, 1997.
    [reference 11] F. Oxaal, "Method for generating and interactively viewing spherical image data", U.S. Pat. No. 6,271,853, date of patent Aug. 7, 2001.
    [reference 12] N. L. Max, "Computer graphics distortion for IMAX and OMNIMAX projection", Proc. NICOGRAPH, 137-159 (1983).
    [reference 13] P. D. Bourke, "Synthetic stereoscopic panoramic images", Lecture Notes in Computer Science (LNCS), Springer, 4270, 147-155 (2006).
    [reference 14] G. Kweon and M. Laikin, "Fisheye lens", Korean patent application 10-2008-0030184, date of filing Apr. 1, 2008.
    [reference 15] G. Kweon and M. Laikin, "Wide-angle lenses", Korean patent 10-0826571, date of patent Apr. 24, 2008.