In machine vision systems (also termed herein “vision systems”), one or more cameras are used to perform vision system processes on an object or surface within an imaged scene. These processes can include inspection, decoding of symbology, alignment and a variety of other automated tasks. More particularly, a vision system can be used to inspect a flat work piece passing through an imaged scene. The scene is typically imaged by one or more vision system cameras that can include internal or external vision system processors that operate associated vision system processes to generate results.
In various manufacturing processes, it is desired to align one flat work piece with respect to another work piece. More particularly, in assembly applications, one independent work piece is aligned to another independent work piece in a process that entails moving an overlying first work piece into a position in which it hovers over a second work piece and then is lowered into place. One exemplary process entails inserting the cover glass of a cellular telephone or a tablet computer into its housing. Another exemplary process can entail lowering a window glass into a window frame or placing a circuit chip onto a circuit board. In such manufacturing processes, the work pieces must be aligned along the x and y translation directions of a reference plane, along with rotation (Θ) within the x-y plane, and they are lowered along the orthogonal z-axis into final engagement. This is accomplished using a robot manipulator and/or motion stages that grasp(s) the first work piece and use(s) feedback from the vision system to align it with the second work piece. While a three-dimensional (3D) vision system can be employed in such processes, it is contemplated that two-dimensional (2D) vision systems can perform adequately where the x-y planes of the two work pieces remain parallel at all elevations/heights (along the z-axis).
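The in-plane pose correction described above (x and y translation plus rotation Θ) can be illustrated with a short sketch. Assuming, purely for illustration, that the vision system yields matched 2D feature points on the held work piece and their desired locations relative to the second work piece, the least-squares rigid transform between the two point sets can be recovered with the standard Kabsch/Procrustes method; the function name and point values below are hypothetical, not part of any particular system:

```python
import numpy as np

def estimate_rigid_2d(src, dst):
    """Least-squares 2D rigid transform (rotation + translation) mapping
    the src points onto the dst points (Kabsch/Procrustes method)."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    src_c = src - src.mean(axis=0)          # center both point sets
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c                     # 2x2 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against a reflection
    R = Vt.T @ np.diag([1.0, d]) @ U.T      # best-fit rotation matrix
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    theta = np.arctan2(R[1, 0], R[0, 0])    # rotation angle Θ in radians
    return t, theta
```

The returned translation and angle would then be issued as a correction to the manipulator and/or motion stage(s) before the z-axis lowering step.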
In many applications, after seating the first work piece within the second work piece, the alignment accuracy between them is measured by the gap between the outer edge of the first work piece and the inner edge of the second work piece. Ideally, these edges should be the primary alignment features for aligning two such objects as they are assembled together by a robot manipulator and/or motion stage(s). However, as noted above, before the first work piece is fully seated into the second work piece, the manipulator/motion stage causes the first work piece to hover above the second, and the first work piece may occlude the second work piece's inner edges from the camera's view or from the illumination source. Thus, it is often challenging to simultaneously view the critical alignment features in each work piece, which must be accurately aligned relative to each other, as the manipulator and/or motion stage is moved to complete the assembly process.
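The gap-based accuracy measure described above can be sketched as follows. Assuming, hypothetically, that edge-detection has already produced sampled point sets along the first work piece's outer edge and the second work piece's inner edge, the local gap at each sample is simply the distance to the nearest point on the opposing edge; the function name and sample points are illustrative assumptions:

```python
import numpy as np

def edge_gaps(outer_edge_pts, inner_edge_pts):
    """For each sampled point on the first piece's outer edge, return the
    distance to the nearest sampled point on the second piece's inner
    edge (the local gap between the assembled work pieces)."""
    outer = np.asarray(outer_edge_pts, float)[:, None, :]  # (N, 1, 2)
    inner = np.asarray(inner_edge_pts, float)[None, :, :]  # (1, M, 2)
    d = np.linalg.norm(outer - inner, axis=2)              # pairwise distances
    return d.min(axis=1)                                   # nearest gap per point
```

A uniform gap profile indicates good alignment; variation around the perimeter indicates residual translation or rotation error.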
It is therefore desirable to provide a technique for reducing or eliminating uncertainties in the alignment process where a first work piece partially occludes critical alignment features of a second, underlying work piece.