The present application finds particular application in panoramic vehicle imaging systems. However, it will be appreciated that the described techniques may also find application in other vehicle monitoring systems, other imaging systems, or other vehicle safety systems.
Conventional approaches to determining the articulation angle between a tractor and trailer use a plurality of sensors on the tractor. One such approach requires the hitch point to be visible to a camera mounted on the tractor, and the trailer tongue length to be measured or known. Additionally, this approach does not use any view of the tractor itself to determine the articulation angle. Rather, a camera mounted on the driver's outside mirror must be able to see a trailer wheel. Moreover, in cars, the camera-to-wheel distance is relatively small. Such an approach is sensitive to weather (rain interferes with the view of the wheel) and lighting (low sun, wet road, night, etc.), and is not suited to the larger distances found in tractor-trailer arrangements, in which a wheel may be represented by only a few pixels. At such low resolution, angular measurement quality is detrimentally affected.
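The resolution limitation can be illustrated with a rough back-of-the-envelope sketch (the pixel counts and the one-pixel noise floor below are illustrative assumptions, not values from the application): a rotation of a wheel-like feature is only detectable once it moves a point on the rim by at least the pixel noise, so the smallest resolvable angle grows as the feature shrinks.

```python
import math

def min_resolvable_angle_deg(wheel_extent_px, pixel_noise_px=1.0):
    """Coarse detectability bound: the smallest rotation that moves a
    rim point (radius ~ extent / 2) by at least the pixel noise."""
    # For a small rotation theta, a rim point moves ~ radius * theta
    # pixels, so theta must exceed noise / radius to be observable.
    radius_px = wheel_extent_px / 2.0
    return math.degrees(pixel_noise_px / radius_px)

# Car-style setup: a wheel spanning ~100 px resolves roughly 1 degree.
print(round(min_resolvable_angle_deg(100), 2))  # ~1.15
# Tractor-trailer distances: a ~5 px wheel resolves only ~23 degrees.
print(round(min_resolvable_angle_deg(5), 2))    # ~22.92
```

Under these assumed numbers, the same camera that supports degree-level measurement on a car yields only tens-of-degrees granularity on a distant trailer wheel.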
Another conventional approach employs a camera on the trailer for viewing behind the trailer, wherein the rearward-looking images are used in determining "optical flow" in relation to the movement of the trailer. Optical flow describes the apparent motion of objects and/or points between successive images, and is difficult to calculate both reliably and cheaply. Moreover, this approach does not consider a camera facing the tractor and can only infer an articulation angle indirectly and at great computational expense.
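The computational expense is easy to see in even the simplest flow estimator. The following is a minimal exhaustive block-matching sketch (an illustration of optical flow in general, not the specific method of the approach above): every candidate displacement of every patch must be scored against the next frame, so cost scales with patch count, patch area, and the square of the search radius.

```python
def patch_ssd(frame_a, frame_b, x, y, dx, dy, size):
    """Sum of squared differences between a size-by-size patch of
    frame_a at (x, y) and the patch of frame_b displaced by (dx, dy)."""
    total = 0
    for j in range(size):
        for i in range(size):
            d = frame_a[y + j][x + i] - frame_b[y + dy + j][x + dx + i]
            total += d * d
    return total

def flow_for_patch(frame_a, frame_b, x, y, size=4, search=3):
    """Flow vector for one patch via exhaustive search: try every
    displacement in [-search, search]^2 and keep the best match."""
    best, best_cost = (0, 0), float("inf")
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cost = patch_ssd(frame_a, frame_b, x, y, dx, dy, size)
            if cost < best_cost:
                best_cost, best = cost, (dx, dy)
    return best
```

Even this toy version performs `size * size * (2 * search + 1)**2` pixel comparisons per patch; a dense field over a full frame multiplies that by the number of patches, which is why deriving an articulation angle from flow is indirect and costly.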
The present innovation provides new and improved systems and methods that facilitate directly computing an articulation angle between first and second articulated portions of an articulating vehicle from camera images of the first or second portion of the vehicle, thereby overcoming the above-referenced problems and others.