Locating objects in an environment using electronic means such as light detection and ranging (LIDAR) sensors, cameras, radar, or other sensors can be a complex task. For example, the sensors may not perceive aspects of the surroundings beyond a particular distance (e.g., the sensing range of the sensors). Moreover, the field-of-view of the noted sensors can be obstructed by objects within the surrounding environment, such as buildings, trees, and other vehicles, causing the sensors to fail to detect occluded objects within the obstructed areas.
Moreover, when a vehicle equipped with such sensors is operating in an autonomous mode, the vehicle uses the sensors to build a map of the surrounding environment, including various features (e.g., objects, terrain, etc.), that facilitates avoiding objects within the surrounding environment, navigating, and so on. However, because some objects may go undetected due to the obstructed areas or other factors, the map may not provide a complete representation of the surrounding environment. As a result, the vehicle may encounter unforeseen obstacles, causing erratic maneuvers or other undesirable effects.
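The incomplete map described above can be illustrated with a minimal sketch. The following example is purely hypothetical (the grid size, cell states, and `build_grid` function are not from the source): a simple 2D occupancy grid where rays are cast outward from the sensor origin, cells along each ray are marked free until an obstacle is hit, and cells behind the obstacle remain unknown, modeling the occluded areas that leave the map incomplete.

```python
import math

# Cell states for the sketch: UNKNOWN models unmapped/occluded space.
UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def build_grid(size, obstacles, origin=(0, 0), num_rays=360, max_range=None):
    """Cast rays from `origin`; mark cells FREE until an obstacle blocks
    the ray. Cells beyond the obstacle stay UNKNOWN (occluded)."""
    grid = [[UNKNOWN] * size for _ in range(size)]
    max_range = max_range or size
    ox, oy = origin
    for i in range(num_rays):
        angle = 2 * math.pi * i / num_rays
        for r in range(1, max_range):
            x = int(round(ox + r * math.cos(angle)))
            y = int(round(oy + r * math.sin(angle)))
            if not (0 <= x < size and 0 <= y < size):
                break  # ray left the mapped area
            if (x, y) in obstacles:
                grid[y][x] = OCCUPIED  # sensor return: obstacle detected
                break                  # cells behind it remain UNKNOWN
            grid[y][x] = FREE
    return grid

# One obstacle at (5, 0) blocks the rays along the +x direction,
# so cells behind it along that direction are never observed.
grid = build_grid(size=20, obstacles={(5, 0)}, origin=(0, 0))
```

In this sketch, `grid[0][3]` is observed as free, `grid[0][5]` holds the detected obstacle, and `grid[0][7]`, which lies behind the obstacle, remains unknown; a planner consuming such a map could encounter an unforeseen object there, as the passage above describes.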