Machines such as off-highway haul trucks, motor graders, snow plows, and other types of heavy equipment are used to perform a variety of tasks. Some of these tasks involve carrying or pushing large, awkward, loose, and/or heavy loads up steep inclines or along rough or poorly marked haul roads. Because of the size and momentum of the machines and/or because of poor visibility, these tasks can be difficult for a human operator alone to complete effectively.
To help guide the machines safely and efficiently along the haul roads, some machines are equipped with sensors, for example cameras, located on a front end of each machine. These sensors are often connected to a visual display and/or a guidance system of the machine such that control over machine maneuvering may be enhanced or even automated using the two-dimensional images provided by the sensors.
When multiple two-dimensional sensors scan the same region from different positions onboard the machine, differences in the resulting images can be used to determine three-dimensional aspects of the region. That is, objects at different distances from the sensors appear in the images at positions and/or sizes that differ between the sensors, a depth cue known as disparity. By matching particular features (e.g., pixels, boundary lines, etc.) from the images produced by each sensor, and then comparing the disparity between the matched features, the size, location, and orientation of the matched features in the scanned region can be determined, processed, and used to simulate a three-dimensional environment.
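The depth recovery described above can be sketched in a few lines. The focal length, baseline, and pixel coordinates below are hypothetical values chosen for illustration, not parameters from any particular machine:

```python
# Minimal sketch of depth-from-disparity for a feature matched in two images.
# FOCAL_LENGTH_PX and BASELINE_M are assumed, illustrative values.

FOCAL_LENGTH_PX = 700.0   # sensor focal length, in pixels (assumed)
BASELINE_M = 0.5          # separation between the two sensors, in meters (assumed)

def depth_from_disparity(x_left, x_right):
    """Triangulate depth for a feature matched in both images.

    x_left / x_right are the feature's horizontal pixel coordinates in
    the left and right images; their difference is the disparity.
    """
    disparity = x_left - x_right
    if disparity <= 0:
        return None  # no usable match: feature at infinity or mismatched
    return FOCAL_LENGTH_PX * BASELINE_M / disparity

# A nearer object produces a larger disparity and thus a smaller depth.
near = depth_from_disparity(420, 400)  # disparity of 20 px -> 17.5 m
far = depth_from_disparity(410, 405)   # disparity of 5 px  -> 70.0 m
```

Repeating this triangulation over many matched features yields the three-dimensional point set from which the simulated environment is built.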
A quality of the simulated three-dimensional environment can be represented by the number of features that are matched between the two images and subsequently used for disparity calculations. This quality parameter is known as a disparity density. When the disparity density is high (i.e., when many of the features from each sensor's image are matched), it can be concluded that both sensors are producing accurate images of the same object or region. When the disparity density is low, it can be concluded that one or both of the sensors are experiencing some kind of impairment. The impairments can include, among other things, rain, snow, dust, fog, debris, etc. When one or both of the sensors are impaired, reliance on the simulated environment for machine control may not be appropriate.
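A disparity-density check of this kind might look like the following sketch. The function names, the representation of unmatched features as `None`, and the 0.6 threshold are all assumptions for illustration:

```python
# Sketch of a disparity-density impairment check (threshold is assumed).

IMPAIRMENT_THRESHOLD = 0.6  # fraction of matched features below which the
                            # sensors are treated as impaired (assumed value)

def disparity_density(disparities):
    """Fraction of scanned features with a valid (non-None) disparity match."""
    matched = sum(1 for d in disparities if d is not None)
    return matched / len(disparities)

def sensors_impaired(disparities):
    """High density -> both sensors imaging the same region accurately;
    low density -> rain, snow, dust, fog, or debris is likely present."""
    return disparity_density(disparities) < IMPAIRMENT_THRESHOLD

clear_scene = [12, 11, 13, 12, 10, 11, 12, 13]              # all features matched
foggy_scene = [12, None, None, None, 11, None, None, None]  # few features matched
```

Under this sketch, `clear_scene` passes the check while `foggy_scene` fails it, so machine control would not rely on the simulated environment in the foggy case.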
U.S. Patent Publication No. 2009/0180682 (the '682 publication) of Camus published on Jul. 16, 2009 discloses a system and method for ensuring that only good stereo images are processed and used for machine control based on disparity calculations. Specifically, the '682 publication describes capturing images from a left camera and a right camera, and producing a single stereo disparity image from the two captured images. The stereo disparity image is then divided into three parts, including a left third, a center third, and a right third. Each third of the stereo disparity image is then scrutinized to determine a disparity measure representing a quality of the stereo disparity image. To compute the disparity measure, a number of edge discontinuities between adjacent regions in each image third are summed and subtracted from a number of valid image pixels, then divided by a total number of image pixels in the image third. Based on the disparity measure, a disparity algorithm defines the image as valid or invalid. A small number of large cohesive disparity regions will increase the disparity measure, while a larger number of small, fragmented regions will decrease the disparity measure. If the stereo disparity image is determined to be valid (i.e., if the disparity measure satisfies a specific threshold), the disparity image is further processed for object and collision detection. However, if the disparity image is determined to be invalid (i.e., if the disparity measure falls outside the threshold value), the disparity image is ignored and new left and right images are obtained from each of the cameras to repeat the process.
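One reading of the disparity-measure computation described in the '682 publication can be sketched as follows. The threshold value and the pixel counts are hypothetical placeholders, not figures taken from the publication:

```python
# Sketch of a disparity measure per image third, read as:
# (valid pixels - edge discontinuities) / total pixels.
# VALIDITY_THRESHOLD is an assumed placeholder, not from the publication.

VALIDITY_THRESHOLD = 0.5  # assumed

def disparity_measure(valid_pixels, edge_discontinuities, total_pixels):
    """Few large cohesive regions -> few discontinuities -> higher measure;
    many small fragmented regions -> many discontinuities -> lower measure."""
    return (valid_pixels - edge_discontinuities) / total_pixels

def image_third_is_valid(valid_pixels, edge_discontinuities, total_pixels):
    measure = disparity_measure(valid_pixels, edge_discontinuities, total_pixels)
    return measure >= VALIDITY_THRESHOLD

# Same number of valid pixels, very different fragmentation:
cohesive = disparity_measure(9000, 200, 10000)     # large cohesive regions
fragmented = disparity_measure(9000, 6500, 10000)  # small fragmented regions
```

Under this sketch, the cohesive case yields a high measure and would be processed further, while the fragmented case yields a low measure and would be discarded so that new left and right images could be captured.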
Although the method of the '682 publication may help ensure that machine control is not implemented based on images from an impaired camera, the method may do little to improve the image produced by the impaired camera or to affect machine control differently when the camera is impaired. Instead, the system of the '682 publication may simply slow down or stop working altogether when one or both cameras become impaired.
The disclosed control system is directed to overcoming one or more of the problems set forth above and/or other problems of the prior art.