Various images, including two-dimensional (2D) and three-dimensional (3D) images as may be captured by motion or still image cameras, tomographic imaging devices, magnetic imaging devices, seismic imaging devices, etc., are commonly used in a number of fields, including manufacturing, medical, security, energy, and construction. For example, such images may be used for quality control analysis, medical diagnosis, facial recognition, geological exploration, component stress and load analysis, and/or other image based applications.
Processor-based (e.g., computer-based) processing of such images may be utilized, such as for providing machine vision, object recognition, edge detection, depth mapping, etc., in providing various image based applications. The images used in such processor-based processing may comprise relatively large data sets. For example, a point cloud representing a 3D image of an object or objects may be appreciably large, such as on the order of megabytes or gigabytes. Accordingly, techniques for extracting relevant features from images (e.g., edges of objects represented within an image) have often been used both to reduce the size of the image data and to provide an image representation having the features therein presented in a manner facilitating image processing for one or more image based applications.
A number of techniques have been used to identify edges represented within a point cloud for an image. For example, the techniques described in Bendels, Gerhard H., Ruwen Schnabel, and Reinhard Klein, “Detecting Holes in Point Set Surfaces,” WSCG 2006 International Programme Committee (2006); Hackel, Timo, Jan D. Wegner, and Konrad Schindler, “Contour detection in unstructured 3d point clouds,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (2016); Boulaassal, H., T. Landes, and P. Grussenmeyer, “Automatic extraction of planar clusters and their contours on building façades recorded by terrestrial laser scanner,” International Journal of Architectural Computing 7.1, pp. 1-20 (2009); and Ni, Huan, et al., “Edge detection and feature line tracing in 3d-point clouds by analyzing geometric properties of neighborhoods,” Remote Sensing 8.9, p. 710 (2016), the disclosures of which are incorporated herein by reference, detect edges within the images using edgel detection algorithms, an edgel being a pixel in a 2D image, or a voxel in a 3D point cloud, that is recognized as lying on the edge of an object represented in the image. These techniques operate to extract edgel points from 3D point clouds, thus providing an image rendition (referred to herein as an object edge image representation) having an amount of data that is significantly reduced as compared to the original point cloud but which nevertheless preserves the main features of the objects represented in the images.
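None of the cited algorithms is reproduced here; as an illustration of the general idea of edgel extraction from a point cloud by analyzing geometric properties of neighborhoods, the following is a minimal sketch. The function name `detect_edgels`, the neighborhood size `k`, and the `offset_ratio` threshold are illustrative assumptions, not terms from the references: a point whose k-nearest-neighbor centroid is offset far from the point itself has a one-sided neighborhood and is flagged as a likely edge point.

```python
import numpy as np

def detect_edgels(points, k=10, offset_ratio=0.3):
    """Flag likely edge points (edgels) in an (n, 3) point cloud.

    Illustrative heuristic only: interior points of a surface have
    neighbors distributed roughly symmetrically around them (small
    centroid offset), while points on an edge or boundary have
    one-sided neighborhoods (large centroid offset).
    """
    n = len(points)
    flags = np.zeros(n, dtype=bool)
    for i in range(n):
        # Brute-force k-nearest-neighbor search (a k-d tree would be
        # used in practice for large clouds).
        dists = np.linalg.norm(points - points[i], axis=1)
        nbrs = points[np.argsort(dists)[1:k + 1]]  # exclude the point itself
        centroid_offset = np.linalg.norm(nbrs.mean(axis=0) - points[i])
        mean_dist = np.linalg.norm(nbrs - points[i], axis=1).mean()
        # Flag as an edgel when the neighborhood centroid is displaced
        # by more than a fraction of the mean neighbor distance.
        flags[i] = centroid_offset > offset_ratio * mean_dist
    return flags
```

Applied to a planar patch of points, such a criterion flags the boundary of the patch while leaving interior points unflagged; keeping only the flagged points yields the greatly reduced object edge image representation described above.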
Such edgel image processing techniques, however, generally require appreciable processing time and resources to produce object edge image representations. Accordingly, extraction of the main features of objects within large or complex images using typical existing edgel image processing techniques may require an unacceptable amount of time and/or processing resources, and may not even be possible or practical in certain situations or using particular processor-based systems. For example, although an object edge image representation may facilitate particular image based applications, such as object recognition and machine vision, the time required to produce object edge image representations using traditional edgel image processing techniques may make it impractical to implement in real time (e.g., for object identification in real-time moving images). Moreover, the processing resources required to produce object edge image representations using traditional edgel image processing techniques may make it impractical to implement on the systems available in any particular situation (e.g., robotic pick and place systems deployed in high speed sorting or assembly lines).