Existing surround-view or 360° view camera systems gather images captured by cameras positioned at various locations around the vehicle and generate a live view of the vehicle's surroundings that is displayed on a vehicle display for the vehicle operator to see. These systems may apply image processing techniques to the images captured by each camera at a given point in time to generate the live view. For example, the image processing techniques may include image registration or other techniques for identifying common features between the camera images, aligning the images according to the common features, and combining or stitching the images together to create a panoramic view of the vehicle's surroundings. Existing image processing techniques also include reducing image distortion, overlaying information onto the images, and/or equalizing image characteristics, such as brightness or contrast, to create a more homogeneous panoramic image.
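By way of illustration, the brightness-equalization and stitching steps described above may be sketched as follows. This is a minimal example operating on hypothetical single-channel (grayscale) arrays with a known, pre-aligned overlap; real systems operate on full-color, lens-corrected frames and determine the overlap via image registration:

```python
import numpy as np

def equalize_brightness(img_a, img_b, overlap_cols):
    """Scale img_b so its mean brightness in the shared overlap region
    matches img_a's, producing a more homogeneous composite image."""
    mean_a = img_a[:, -overlap_cols:].mean()   # right edge of img_a
    mean_b = img_b[:, :overlap_cols].mean()    # left edge of img_b
    gain = mean_a / mean_b
    return np.clip(img_b * gain, 0, 255).astype(np.uint8)

def stitch(img_a, img_b, overlap_cols):
    """Combine two pre-aligned images by feather-blending the overlap:
    blend weights ramp linearly from img_a to img_b across the seam."""
    alpha = np.linspace(1.0, 0.0, overlap_cols)           # per-column weight
    blended = (img_a[:, -overlap_cols:] * alpha
               + img_b[:, :overlap_cols] * (1 - alpha))
    return np.hstack([img_a[:, :-overlap_cols],
                      blended.astype(np.uint8),
                      img_b[:, overlap_cols:]])
```

In practice, the overlap geometry would first be recovered by feature matching between the camera images; the functions above assume that alignment has already been performed.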
Existing vehicle camera systems also use image processing to change a perspective of the captured images before and/or after combining the images to create a panoramic or surround-view image, and before displaying the resulting image on the vehicle display as part of a live video feed of the vehicle's surroundings. For example, each of the cameras may be positioned on the vehicle to capture images corresponding to a specific field of view relative to the vehicle, such as a front view, a left side view, a right side view, and a rear view. Image processing techniques may be used to transform the perspective of the images captured by each camera into, for example, a top view or top-down view.
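The perspective transformation described above is conventionally performed with a planar homography that maps ground-plane points in a camera image to corresponding points in the top-down frame. The following is a minimal sketch; the four point correspondences are hypothetical calibration values chosen for illustration, not values from any actual system:

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H mapping src -> dst from four
    point correspondences (standard direct-linear-transform setup,
    with H[2][2] fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply homography H to a 2D point (homogeneous coordinates)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical calibration: corners of a ground rectangle as seen in the
# front camera image, and where they should land in the top-down image.
src = [(320, 400), (960, 400), (1180, 700), (100, 700)]   # camera pixels
dst = [(200, 0), (440, 0), (440, 300), (200, 300)]        # top-down pixels

H = homography_from_points(src, dst)
```

A full implementation would apply H to every pixel (e.g., via an inverse-mapped warp of the whole frame) rather than to individual points, but the underlying mapping is the same.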
For example, FIG. 1A depicts areas surrounding a conventional vehicle 10 that are within a current field of view of an existing 360° view camera system. The camera system may include at least a front camera, a left camera, a right camera, and a rear camera, and each camera may be positioned on the vehicle 10 to capture image data of the vehicle surroundings from a corresponding field of view (e.g., front, left, right, and rear views, respectively). The vehicle camera system may be configured to change a perspective of, and/or apply other image processing techniques to, the images captured by each camera to obtain a corresponding top-down view of the area within the field of view of that camera, such as the front camera top-down view, left camera top-down view, right camera top-down view, and rear camera top-down view shown in FIG. 1A.
To illustrate, FIG. 1B shows an exemplary top-down image 12 of the vehicle 10 parked in a parking space 13 within a parking lot 14, the image 12 being displayed on a vehicle display 16. The top-down image 12 may be generated by the existing 360° view camera system using image data captured from the front, left, right, and rear cameras on the vehicle 10 and applying the above-described image processing techniques. In particular, the image data captured by each of the front, left, right, and rear cameras can be transformed into images representing a top-down view of the parking lot area that is within the field of view of that camera.
One drawback of existing 360° view camera systems is that the resulting surround-view images, or the live video feed composed of such images, are limited to the image data captured within the current field of view of the vehicle cameras. For example, in FIG. 1A, the gray-shaded area represents a region 18 underneath and closely surrounding the vehicle 10 that is excluded from, or outside of, the current field of view. That is, the current field of view for each of the vehicle cameras does not include any portion of the region 18 shown in FIG. 1A. As a result, existing camera systems cannot present image data representing the region 18, or other regions outside the system's current field of view, and may depict these regions as unidentifiable gray areas within the surround-view image 12, as shown in FIG. 1B.
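The composition step that produces this gray region can be sketched as follows. The canvas and footprint dimensions are hypothetical; the point of the sketch is that the central region of the canvas, which lies outside every camera's field of view, is simply filled with a flat placeholder value because no image data exists for it:

```python
import numpy as np

# Hypothetical dimensions of the top-down canvas (pixels).
CANVAS = 400
GRAY = 128  # placeholder value for areas with no image data (region 18)

def compose_surround(front, rear, left, right):
    """Place per-camera top-down views on a common canvas. The central
    region under and closely around the vehicle is covered by none of
    the views and remains a flat gray block, as in existing systems."""
    canvas = np.full((CANVAS, CANVAS), GRAY, dtype=np.uint8)
    fh, rh = front.shape[0], rear.shape[0]   # front/rear view heights
    lw, rw = left.shape[1], right.shape[1]   # left/right view widths
    canvas[:fh, :] = front                   # strip across the top
    canvas[-rh:, :] = rear                   # strip across the bottom
    canvas[fh:-rh, :lw] = left               # strip along the left side
    canvas[fh:-rh, -rw:] = right             # strip along the right side
    return canvas                            # center stays GRAY
```

For example, with 100-pixel-tall front/rear strips and 140-pixel-wide side strips on a 400-pixel canvas, a 120 × 200 pixel central block receives no image data and is rendered gray, corresponding to the unidentifiable region shown in FIG. 1B.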
Accordingly, there is still a need in the art for vehicle camera systems and methods that can provide image data or other information about one or more regions that are outside a current field of view of the vehicle cameras.