1. Field of the Invention
The present invention relates to an image display method and an image display apparatus. In particular, the invention relates to an image display method for displaying a combined image that shows a vehicle observed from a virtual point of view taken above the vehicle. A plurality of cameras capture peripheral images at areas around the vehicle. The peripheral images are combined to form the combined image for display.
2. Description of the Related Art
Known driving support systems may assist a driver in parking a car in a garage and/or in a parking space. Such a system may display a top-view image, as described in Japanese Patent No. 3,300,334 (“JP '334”). In the top-view image display of JP '334, images are captured at areas surrounding a vehicle using a plurality of cameras. These images are hereafter referred to as “peripheral images”. The peripheral images are combined with each other to form a synthesized image that shows the vehicle as observed from a virtual point of view taken above the vehicle. The combined top-view image is stored in a frame memory and then read out of the frame memory so as to be displayed on a monitor screen.
As shown in FIGS. 18A and 18B, a top-view image display system of the known system includes a plurality of fish-eye lens cameras 1a, 1b, 1c, and 1d, each of which captures images at corresponding peripheral areas around a vehicle 2. Fish-eye lens camera 1a is mounted at the front of the vehicle 2, fish-eye lens camera 1b at the left side of the vehicle, fish-eye lens camera 1c at the right side of the vehicle, and fish-eye lens camera 1d at the rear of the vehicle.
An image-combination processing unit 3 generates a combined image from the peripheral images captured by the fish-eye lens cameras 1a-1d. The combined image shows the vehicle 2 observed from a certain point above the vehicle, which is a virtual point of view 4. FIG. 18C shows, together with the vehicle 2, an example of the virtual point of view 4 from which the peripheral images are combined into a top-view image. The combined image is displayed on a monitor 5 to assist a driver when parking the vehicle 2.
When combining these peripheral images into one top-view image, the image-combination processing unit 3 uses a mapping table to map an image portion that is captured by each fish-eye lens camera 1a-1d onto the frame memory for display. FIG. 19 is a diagram that schematically illustrates an example of the mapping of fish-eye images IMa, IMb, IMc, and IMd onto a frame memory 6, which corresponds to a display screen. The fish-eye images IMa, IMb, IMc, and IMd are mapped into the corresponding peripheral areas 6a, 6b, 6c, and 6d of the frame memory 6, respectively. An image of the vehicle 2, referred to as a vehicle image 7, is mapped onto the center area of the frame memory 6. The vehicle image 7 is an image that has been taken in advance and is stored as a pre-captured image. Through the mapping of these fish-eye images IMa, IMb, IMc, and IMd, as well as the vehicle image 7, into the corresponding areas of the frame memory 6, a full single-screen top-view image is generated on the frame memory 6.
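The mapping-table lookup described above can be sketched in code. This is a minimal illustration under stated assumptions, not JP '334's actual implementation: the resolutions, the table layout (per output pixel, a source camera index with -1 marking the vehicle-image area, plus source coordinates), and all names are hypothetical.

```python
import numpy as np

# Assumed display and camera resolutions (illustrative only).
H, W = 48, 64            # frame-memory (display) size
CAM_H, CAM_W = 24, 32    # fish-eye camera image size

# Hypothetical mapping table, precomputed offline: for every output pixel,
# the source camera index (-1 = vehicle-image area) and the source pixel.
cam_idx = np.full((H, W), -1, dtype=np.int8)
src_row = np.zeros((H, W), dtype=np.int32)
src_col = np.zeros((H, W), dtype=np.int32)

def combine(fisheye_images, vehicle_image):
    """Fill a frame memory from the four fish-eye images and the
    pre-captured vehicle image, via the precomputed mapping table."""
    frame_memory = np.empty((H, W), dtype=np.uint8)
    for y in range(H):
        for x in range(W):
            c = cam_idx[y, x]
            if c < 0:
                # Center area: copy the pre-captured vehicle image.
                frame_memory[y, x] = vehicle_image[y, x]
            else:
                # Peripheral area: look up the pixel in the source camera.
                frame_memory[y, x] = fisheye_images[c][src_row[y, x],
                                                       src_col[y, x]]
    return frame_memory
```

Because the table is precomputed, the per-frame work reduces to pure lookups, which is what makes this scheme suitable for real-time display hardware.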
FIGS. 20A, 20B, and 20C are a set of diagrams that schematically illustrate a mapping process. The fish-eye lens cameras 1a, 1b, 1c, and 1d capture peripheral-area pictures in front of the vehicle 2, to the left of the vehicle, to the right of the vehicle, and behind the vehicle, respectively. Each of the fish-eye cameras 1a, 1b, 1c, and 1d is capable of taking a picture through its fish-eye lens with a picture area range of 180°.
Specifically, camera 1a can capture an image at a 180-degree area in front of the vehicle 2 beyond the line F-F shown in FIG. 20A, camera 1d can capture an image at a 180-degree area in back of the vehicle beyond the line B-B shown therein, camera 1b can capture an image at a 180-degree area to the left of the vehicle beyond the line L-L, and camera 1c can capture an image at a 180-degree area to the right of the vehicle beyond the line R-R.
However, a distorted pattern is acquired as a result of capturing, by means of the fish-eye lens camera 1b, a rectangular grid that is drawn on the ground to the left of the vehicle 2. An example of the original rectangular grid pattern is shown in FIG. 20B. An example of the distorted pattern is shown in FIG. 20C. The distorted pattern is hereafter referred to as a fish-eye figure.
A fish-eye figure that is captured by each camera is subjected to distortion correction. After the distortion correction, each image is projected onto the corresponding area on the ground. By this means, it is possible to obtain a top-view image. Reference numerals shown in the rectangular grid of FIG. 20B and the fish-eye figure of FIG. 20C indicate the correspondence of portions therebetween. That is, each area portion shown in the rectangular grid of FIG. 20B corresponds to the area portion shown in the fish-eye figure of FIG. 20C with the same reference numeral. In other words, the area portions 1, 2, 3, 4, 5, and 6 of the rectangular grid of FIG. 20B correspond to the area portions 1, 2, 3, 4, 5, and 6 of the fish-eye figure of FIG. 20C, respectively. The distortion-corrected images of the corresponding area portions 1-6 of the fish-eye figure are stored at the positions in the frame memory 6 at which the area portions 1-6 of the rectangular grid should be stored for the left camera 1b. Images captured by the other cameras are stored in the frame memory 6 in the same manner. By this means, it is possible to read the images that were captured by the fish-eye cameras 1a-1d and perform viewing-point conversion so as to display a figure on a monitor as if it were projected on the ground.
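The distortion correction described above can be sketched under the common equidistant fish-eye lens model, r = f·θ. This is an assumption for illustration; JP '334 does not specify a lens model, and the function name and parameters are hypothetical. The sketch maps a point of the fish-eye figure to its rectilinear (perspective) position, from which a planar projection onto the ground can then be applied.

```python
import math

def undistort_point(xd, yd, cx, cy, f):
    """Convert a distorted fish-eye pixel (xd, yd) to rectilinear
    coordinates, given the image center (cx, cy) and focal length f,
    assuming the equidistant projection model r_d = f * theta."""
    dx, dy = xd - cx, yd - cy
    r_d = math.hypot(dx, dy)          # radius of the distorted point
    if r_d == 0.0:
        return (cx, cy)               # the optical axis maps to itself
    theta = r_d / f                   # incoming ray angle under r = f*theta
    r_u = f * math.tan(theta)         # rectilinear radius for the same ray
    scale = r_u / r_d
    return (cx + dx * scale, cy + dy * scale)
```

In practice such a correction is not evaluated per frame; it is baked into the mapping table, so each frame-memory position directly references the distorted source pixel.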
As explained above, in the top-view image display technique of JP '334, a plurality of fish-eye lens cameras acquire peripheral images, and the fish-eye images, after correction of their distortion, are combined into a single top-view image that shows a vehicle observed from a virtual point of view taken above the vehicle. A distortion correction value for the generation of a top-view image is typically set based on a certain static state of the vehicle. However, the body of a vehicle may tilt depending on various factors, such as the number of occupants, including a driver and passengers, the live load, and various driving conditions, such as cornering, acceleration, braking, and the like.
As the vehicle tilts, the “attitude” of a camera changes. The term “attitude” used in this specification means “position” and “orientation”, without any limitation thereto. FIGS. 21A and 21B schematically illustrate an example of a change in the attitude of a camera. FIG. 21A shows a pre-change static state in which the vehicle 2 is not tilted. In contrast, in FIG. 21B, the vehicle 2 is tilted by an angle of inclination θ because of an increased number of occupants. As shown in FIG. 21B, the position of the rear camera 1d has lowered due to the tilting of the vehicle 2. A top-view image display system of JP '334 has a disadvantage in that, due to such a change in the attitude of a camera, some part of an image is shifted to the outside of a display-area boundary that was set in the process of distortion correction, and thus disappears.
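The effect of the tilt angle θ on a rigidly mounted camera can be illustrated with a simple two-dimensional rotation. The coordinate frame, the pivot location, and the function name are assumptions made for this sketch, not the patent's model: x is the distance along the vehicle axis from the pitch axis (negative toward the rear) and z is the height above ground.

```python
import math

def tilted_camera_position(x, z, theta):
    """Rotate the camera mount point (x, z) about the pitch axis at the
    origin by theta radians (positive theta = rear end sinking)."""
    c, s = math.cos(theta), math.sin(theta)
    x_t = x * c - z * s
    z_t = x * s + z * c
    return (x_t, z_t)
```

For a small positive θ, a camera mounted at the rear (negative x) ends up lower than its calibrated height, while a front camera rises, which is the situation depicted in FIG. 21B.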
In particular, a part of an image disappears at a border between two peripheral images arrayed adjacent to each other for the formation of a top-view image. A top-view image display system of the related art has a disadvantage in that if any object is located in this disappearance area, it is not possible, or is very difficult, to display the object without any missing portions.
FIGS. 22, 23A, 23B, 24A, 24B, 25, and 26 illustrate how an image partially disappears (or appears) because of a change in the attitude of a camera, which occurs in the display system of the related art. FIG. 22 illustrates a plurality of displacement vectors for a plurality of points taken on the distortion-corrected fish-eye images IMa, IMb, IMc, and IMd, which are captured by the fish-eye lens cameras 1a, 1b, 1c, and 1d, respectively, according to the related art. The displacement vectors are generated as a result of the application of a certain load that changes the attitude of a camera. In this drawing, the position of each point is denoted by a black circle. Each short line segment that extends from the black circle indicates both the direction of a displacement vector and the magnitude thereof. In FIG. 22, the reference numeral BLR denotes an image borderline between the left image IMb and the rear image IMd. The reference numeral BLF denotes an image borderline between the left image IMb and the front image IMa. The reference numeral BRR denotes an image borderline between the right image IMc and the rear image IMd. Finally, reference numeral BRF denotes an image borderline between the right image IMc and the front image IMa.
As shown in FIG. 22, the directions of displacement vectors significantly differ in the neighborhood of each rear image borderline between two adjacent images in a top-view image of the related art. In other words, the directions of displacement vectors in a border area of one image significantly differ from the directions of displacement vectors in the corresponding border area of another image that is adjacent thereto.
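A simplified one-dimensional sketch can show why such displacement vectors arise; this is an illustration, not the patent's formulation. A ground point at horizontal distance x from a camera is reconstructed using the calibrated camera height; if loading changes the actual height but the calibration is left unchanged, the same point is reconstructed at a shifted position.

```python
def displacement(x, h_calibrated, h_actual):
    """Shift of a reconstructed ground point at horizontal distance x
    when the camera height changed from h_calibrated to h_actual but
    the ground-projection still assumes h_calibrated.

    The viewing ray hits the point at angle atan(x / h_actual); projecting
    that ray with the stale height places the point at x * h_calibrated /
    h_actual, so the shift is the difference from the true x."""
    x_apparent = x * h_calibrated / h_actual
    return x_apparent - x
```

Because each camera sits at a different location, each camera's reconstruction error shifts points along different directions; near a borderline the two adjacent images therefore move apart or overlap, which is the effect shown in FIG. 22.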
FIG. 23A is an enlarged view that schematically illustrates a border area of the left image IMb and the corresponding border area of the rear image IMd, with the image borderline BLR being formed therebetween according to the related art. Each reference numeral dLT denotes a displacement vector that is generated from a point in the left image IMb in the neighborhood of the image borderline BLR when a certain load that changes the attitude of the rear camera 1d is applied to the vehicle 2. Each reference numeral dRE denotes a displacement vector that is generated from a point in the rear image IMd in the neighborhood of the image borderline BLR when such a load is applied. As will be understood from the drawing, an area portion of the left image IMb, which is hereafter denoted as DAAR, disappears as the vehicle 2 is loaded. In addition, as a result of the application of such a load to the vehicle 2, a new rear image area portion APAR appears. An example of this new rear image area portion APAR is shown in FIG. 23B according to the related art.
FIGS. 24A and 24B schematically illustrate displacement in a combined image according to the related art. More specifically, FIG. 24A shows a left-rear part of a combined image that is composed of the left image IMb and the rear image IMd before the occurrence of displacement. FIG. 24B shows the left-rear part of the combined image after the occurrence of displacement.
Before the occurrence of displacement, a ball (BALL) that lies in the neighborhood of the image borderline BLR is displayed faithfully as a circle. However, after the occurrence of displacement, the ball is displayed as a partially hidden circle. The reason is that the directions of displacement vectors significantly differ in the neighborhood of the image borderline BLR. That is, as shown by the arrows in the drawing, the directions of displacement vectors in a border area of the left image IMb significantly differ from the directions of displacement vectors in the corresponding border area of the rear image IMd.
FIG. 25 is a diagram that schematically illustrates a top-view image formed before the occurrence of displacement according to the related art. FIG. 26 schematically illustrates an example of a top-view image formed after the occurrence of displacement according to the related art. As will be understood from these drawings, the positions of segments (i.e., line segments) A and B have moved slightly downward after the occurrence of displacement. For this reason, the disappearance area portion DAAR of the left image IMb, which was shown before the occurrence of displacement in the top-view image of FIG. 25, has disappeared after the occurrence of displacement in the top-view image of FIG. 26. On the other hand, the appearance area portion APAR of the rear image IMd, which was not shown before the occurrence of displacement in the top-view image of FIG. 25, has appeared after the occurrence of displacement in the top-view image of FIG. 26.
Another known system, which displays an object, such as an obstacle, in a faithful and easily viewable manner, is disclosed in Japanese Unexamined Patent Application Publication No. 2007-104373. The image display apparatus of JP 2007-104373 generates a combined image after changing the position of a borderline if any obstacle lies on the original borderline. By this means, JP 2007-104373 discloses generating a combined image without any obstacle being shown on the borderline. However, this related art is not directed to the prevention of partial disappearance of an image, or of partial appearance of an image, due to a change in the attitude of a camera.
Known top-view image display systems are thus disadvantageous in that some part of an image disappears near an image borderline due to a change in the attitude of a camera, which occurs when a vehicle is loaded, while another part of the image, which was not shown before the change, newly appears. The reason why such partial disappearance and partial appearance of an image occur is that the directions of displacement vectors significantly differ in the neighborhood of an image borderline. The displacement vectors are generated at the time when a certain load that changes the attitude of a camera is applied to the vehicle.