Image-processing algorithms that identify or classify an object present in an image from a camera mounted on an automated vehicle, based on the shape and apparent size of the object, are known. However, it is difficult for such a camera to measure the distance to the object because of the critical alignment necessary to make a distance measurement. Conversely, an automated-vehicle radar sensor can readily determine the distance to an object, but it is difficult to identify or classify an object based solely on an analysis of the reflected radar signal. It is known to ‘fuse’ or combine object-detection data from different types of object detectors (e.g. camera, radar, lidar) so that the strengths of one type of sensor compensate for the weaknesses of another. However, the amount of data that the fusion process generates can undesirably increase the cost and complexity of the computing hardware that performs the fusion process.
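The kind of fusion described above can be illustrated with a minimal sketch, in which a classified camera detection is paired with the radar return closest to it in bearing, so that the fused result carries both the camera's classification and the radar's range. All names and the angular-gating threshold here are hypothetical illustrations, not the method claimed in this document.

```python
from dataclasses import dataclass

@dataclass
class CameraDetection:
    azimuth_deg: float   # bearing of the object as seen by the camera
    label: str           # classification, e.g. "pedestrian" or "vehicle"

@dataclass
class RadarDetection:
    azimuth_deg: float   # bearing of the radar return
    range_m: float       # measured distance to the reflector

def fuse(camera_dets, radar_dets, max_angle_diff_deg=2.0):
    """Pair each classified camera object with the radar return closest
    in bearing, yielding (label, range) tuples; camera objects with no
    radar return within the angular gate are left unfused."""
    fused = []
    for cam in camera_dets:
        best = min(radar_dets,
                   key=lambda r: abs(r.azimuth_deg - cam.azimuth_deg),
                   default=None)
        if best and abs(best.azimuth_deg - cam.azimuth_deg) <= max_angle_diff_deg:
            fused.append((cam.label, best.range_m))
    return fused
```

For example, a pedestrian classified by the camera at a bearing of 10° would be associated with a radar return at 10.5° and 25 m, producing the fused detection `("pedestrian", 25.0)`, while a radar return far off in bearing would be ignored.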