Locating and/or identifying objects in an environment using electronic means such as light detection and ranging (LIDAR) sensors, cameras, radar, or other sensors can be a complex task. For example, the sensors may not perceive aspects of the surroundings that lie beyond a particular distance (e.g., beyond the sensing range of the sensors). Moreover, the field of view of the noted sensors can be obstructed by objects within the surrounding environment, such as buildings, trees, and other vehicles, causing the sensors to fail to detect partially occluded objects in the obstructed areas. Additionally, particular objects may be inherently more difficult to detect because of their shapes and/or particular poses. For example, systems may have difficulty detecting bicycles because of the generally open design of bicycle frames and their minimal front and rear profiles.
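The two detection limits described above, finite sensing range and line-of-sight occlusion, can be illustrated with a minimal two-dimensional sketch. The function and helper names below are hypothetical illustrations, not part of the source; occluders are modeled as simple line segments.

```python
import math

def segments_intersect(p1, p2, p3, p4):
    """Standard orientation-based test: does segment p1-p2 cross p3-p4?"""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def is_detectable(target, sensor_range, occluders, sensor=(0.0, 0.0)):
    """Return False if the target is beyond the sensing range or if any
    occluding segment blocks the straight line of sight to it."""
    if math.dist(sensor, target) > sensor_range:   # beyond sensing range
        return False
    for seg in occluders:                          # obstruction check
        if segments_intersect(sensor, target, *seg):
            return False
    return True
```

For example, a target 5 m ahead within a 10 m range is detectable with a clear line of sight, but a wall segment crossing that line at 3 m hides it, mirroring the occlusion scenario described above.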
Moreover, when a vehicle equipped with such sensors is operating in an autonomous mode, the vehicle uses the sensors to build an obstacle map of objects in the surrounding environment that facilitates avoiding those objects. However, because some objects may go undetected due to being partially obstructed or otherwise difficult to detect, the obstacle map may not provide a complete perception of the surrounding objects. As a result, the vehicle may encounter unforeseen obstacles, causing erratic maneuvers or other undesirable effects.
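One common way such an obstacle map is represented is an occupancy grid, where cells a sensor ray passes through are marked free, the cell where the ray terminates is marked occupied, and cells behind a detected obstacle are never observed and therefore remain unknown. The sketch below illustrates this under those assumptions; the function names and grid encoding are illustrative, not taken from the source.

```python
import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def bresenham(r0, c0, r1, c1):
    """Integer grid cells on the line from (r0, c0) to (r1, c1), inclusive."""
    cells, dr, dc = [], abs(r1 - r0), abs(c1 - c0)
    sr, sc = (1 if r1 > r0 else -1), (1 if c1 > c0 else -1)
    err, r, c = dr - dc, r0, c0
    while True:
        cells.append((r, c))
        if (r, c) == (r1, c1):
            break
        e2 = 2 * err
        if e2 > -dc:
            err -= dc
            r += sr
        if e2 < dr:
            err += dr
            c += sc
    return cells

def update_obstacle_map(grid, sensor_rc, hits):
    """Trace a ray from the sensor cell toward each detected hit, marking
    traversed cells FREE and the hit cell OCCUPIED. Cells behind a hit are
    never visited, so they stay UNKNOWN -- the occluded region."""
    r0, c0 = sensor_rc
    for (r1, c1) in hits:
        for r, c in bresenham(r0, c0, r1, c1)[:-1]:
            grid[r, c] = FREE
        grid[r1, c1] = OCCUPIED
    return grid
```

With the sensor at one edge of the grid and a single detected hit, every cell on the far side of that hit remains unknown, which is why a planner relying on this map may still encounter unforeseen obstacles in those regions.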