A number of systems and programs are offered on the market for the design, the engineering and the manufacturing of objects. CAD is an acronym for Computer-Aided Design, i.e. it relates to software solutions for designing an object. CAE is an acronym for Computer-Aided Engineering, i.e. it relates to software solutions for simulating the physical behavior of a future product. CAM is an acronym for Computer-Aided Manufacturing, i.e. it relates to software solutions for defining manufacturing processes and operations. In such computer-aided design systems, the graphical user interface plays an important role as regards the efficiency of the technique. These techniques may be embedded within Product Lifecycle Management (PLM) systems. PLM refers to a business strategy that helps companies to share product data, apply common processes, and leverage corporate knowledge for the development of products from conception to the end of their life, across the concept of the extended enterprise. The PLM solutions provided by Dassault Systèmes (under the trademarks CATIA, ENOVIA and DELMIA) provide an Engineering Hub, which organizes product engineering knowledge, a Manufacturing Hub, which manages manufacturing engineering knowledge, and an Enterprise Hub, which enables enterprise integrations and connections into both the Engineering and Manufacturing Hubs. Altogether, the system delivers an open object model linking products, processes and resources to enable dynamic, knowledge-based product creation and decision support that drives optimized product definition, manufacturing preparation, production and service.
In this context, particle-based applications (e.g. point cloud rendering, or fluid simulation with SPH, i.e. smoothed-particle hydrodynamics) or 3D modeling can benefit from data compression. Methods have thus been developed to compress data representing a large number of particles spread in a 3D (or 2D) space. Such prior art is mainly covered by two fields of research. The first field is geometry-driven compression of 3D (or 2D) meshes. The second field is compression of point clouds.
The first field (mesh compression) is now discussed.
A very large number of methods have been proposed to compress 3D meshes. A 3D mesh usually consists of geometry data (e.g. the position of each vertex of the mesh in a space), connectivity data (e.g. the incidence relations between vertices, i.e. how vertices are linked to form polygonal faces), and optionally per-vertex and/or per-face attributes (i.e. values of at least one physical attribute associated with the vertex/face position, e.g. vertex/face properties useful for the application, such as colors, normal vectors, and/or texture coordinates). A relatively comprehensive taxonomy of these methods can be found in the paper “3D mesh compression: survey, comparisons and emerging trends” by Maglo et al., 2013. One can basically identify two main branches.
The first branch relates to connectivity compression methods. These methods mainly use connectivity data to encode both the connectivity data and the vertex information. The algorithms Edgebreaker (by Rossignac), Layered decomposition (by Bajaj) and Spanning trees (by Taubin and Rossignac) are good examples. These methods achieve good results because connected vertices have close positions and attribute values. The compression scheme can exploit this favorably: one can predict a vertex position/attribute value once those connected to it have been decoded (e.g. the so-called “parallelogram prediction” scheme in the Edgebreaker algorithm).
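The parallelogram prediction mentioned above can be illustrated by the following minimal sketch (the function name and the toy coordinates are illustrative assumptions, not taken from any cited implementation): the new vertex D opposite a decoded triangle (A, B, C) across the edge (B, C) is predicted as B + C - A, and only the small residual needs to be encoded.

```python
import numpy as np

def parallelogram_predict(a, b, c):
    """Predict the vertex opposite `a` across the edge (b, c): b + c - a."""
    return b + c - a

# Toy example: the coder transmits only the small residual D - prediction.
A = np.array([0.0, 0.0, 0.0])
B = np.array([1.0, 0.0, 0.0])
C = np.array([0.0, 1.0, 0.0])
D = np.array([1.05, 0.98, 0.02])       # the actual new vertex

pred = parallelogram_predict(A, B, C)  # = [1.0, 1.0, 0.0]
residual = D - pred                    # small values are cheap to entropy-code
```

Because connected vertices tend to lie close together, the residual is small, which is precisely what makes connectivity-driven prediction effective.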
The second branch relates to geometry-driven compression methods. In this branch, geometry data (and other vertex attributes) are compressed first without regard to the connectivity. Connectivity is compressed afterwards by using the geometry data. The idea of such methods is that connectivity data represent only a small part of the data compared to vertex position/other attributes in very large 3D meshes. Such methods thus deem it more relevant to handle geometry data more carefully. Most state-of-the-art geometry-driven compression schemes are based on space partitioning: vertices are partitioned in hierarchical structures such as a BSP tree, a K-d tree or an octree (a quadtree in 2D). A few examples of methods that use a K-d tree to encode vertex positions include: the paper “GEncode: Geometry-driven compression for General Meshes”, by Lewiner & al. (2006), and the paper “Progressive lossless compression of arbitrary simplicial complexes”, by Gandoin and Devillers (2002). Examples of methods that use an octree to encode vertex positions include: the paper “Geometry-guided progressive lossless 3D mesh coding with octree decomposition”, by Peng & al. (2005), the paper “Adaptive coding of generic 3D triangular meshes based on octree decomposition”, by Tian & al. (2012), the paper “CHuMI viewer: Compressive huge mesh interactive viewer”, by Jamin & al. (2009), and the paper “Out-of-Core Progressive Lossless Compression and Selective Decompression of Large Triangle Meshes”, by Du & al. (2009).
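The K-d-tree idea underlying such geometry-driven coders can be illustrated by the following minimal sketch (the function and its signature are illustrative assumptions, not the published algorithms): the bounding box is recursively halved along alternating axes, and for each split the coder records how many points fall in one half. These counts are the symbols an actual scheme would entropy-code; the decoder recovers positions once cells shrink to the desired precision.

```python
def kd_counts(points, lo, hi, axis, depth, out):
    """Split the box [lo, hi] in half along `axis` and record left-half counts."""
    if depth == 0 or len(points) <= 1:
        return
    mid = (lo[axis] + hi[axis]) / 2.0
    left = [p for p in points if p[axis] < mid]
    right = [p for p in points if p[axis] >= mid]
    out.append(len(left))                    # symbol an entropy coder would encode
    nxt = (axis + 1) % len(lo)               # cycle through the dimensions
    hi_left = list(hi); hi_left[axis] = mid  # shrink the box for each child
    lo_right = list(lo); lo_right[axis] = mid
    kd_counts(left, lo, hi_left, nxt, depth - 1, out)
    kd_counts(right, lo_right, hi, nxt, depth - 1, out)

# Toy 2D example: three points in the unit square, two levels of splits.
points = [(0.1, 0.2), (0.8, 0.7), (0.6, 0.9)]
counts = []
kd_counts(points, [0.0, 0.0], [1.0, 1.0], axis=0, depth=2, out=counts)
```

Since the count in one half determines the count in the other, one symbol per split suffices, and the symbol distribution is highly skewed, which entropy coding exploits.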
The second field (point cloud compression) is now discussed.
Point clouds only include geometry data (vertex positions) and per-vertex attributes (e.g. a color and/or a normal vector per vertex). Point clouds often consist of very large data, and compression is therefore a critical matter. State-of-the-art compression methods are also based on space partitioning: this provides good results and sometimes allows vertex random access (another matter of interest for point clouds). The following methods are all based on octree decomposition: the paper “Efficient high quality rendering of point sampled geometry”, by Botsch & al., the paper “Octree-based Point-Cloud Compression”, by Schnabel & Klein, the paper “A Generic Scheme for Progressive Point Cloud Coding”, by Huang & al., the paper “Octree-Based Progressive Geometry Coding of Point Clouds”, by Huang & al., the paper “Tangent-plane-continuity maximization based 3d point compression”, by Julang & al., the paper “Real-time compression of point cloud streams”, by Kammerl & al., and the paper “Point cloud attribute compression with graph transform”, by Zhang & al.
Most of these methods consist of two steps. First, space is partitioned in an octree so that each leaf contains zero or one (or a few) vertices and each leaf containing a vertex has small dimensions; knowing the position of a cell in the tree thus gives enough precision to locate its vertex in the 3D space. Second, the tree structure, together with which leaves are empty and which are not, is encoded efficiently. This information is enough to recover the vertex positions.
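The two steps above can be sketched as follows (a minimal illustration assuming a unit cube and points given as float triples; the function is an illustrative assumption, not any cited coder): each subdivided cell emits one occupancy byte whose bits mark the non-empty children, and the stream of occupancy bytes alone lets a decoder recover vertex positions to the precision of the leaf cells.

```python
def encode_octree(points, lo, size, depth, stream):
    """Append one occupancy byte per subdivided cell to `stream`."""
    if depth == 0 or len(points) <= 1:
        return
    half = size / 2.0
    children = [[] for _ in range(8)]
    for x, y, z in points:
        i = (int(x >= lo[0] + half)
             | (int(y >= lo[1] + half) << 1)
             | (int(z >= lo[2] + half) << 2))
        children[i].append((x, y, z))
    occupancy = sum(1 << i for i, c in enumerate(children) if c)
    stream.append(occupancy)          # 8 bits: which children are non-empty
    for i, child in enumerate(children):
        if child:
            child_lo = [lo[k] + half * ((i >> k) & 1) for k in range(3)]
            encode_octree(child, child_lo, half, depth - 1, stream)

# Toy example: two points in opposite corners of the unit cube, one level deep.
stream = []
encode_octree([(0.1, 0.1, 0.1), (0.9, 0.9, 0.9)], [0.0, 0.0, 0.0], 1.0, 1, stream)
```

Real coders additionally entropy-code the occupancy bytes, whose distribution is far from uniform for typical surfaces.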
Whether they lie in the first field or in the second field, the compression methods listed above spare no effort or ingenuity to increase efficiency. Efficiency can be assessed with any one or any tradeoff of the following criteria: compression ratio, compression and decompression time, progressiveness, data random access, and temporal correlation. Compression ratio is the ratio between the size of the data after compression and before compression. It is a concrete, quantified measure and usually the first way to assess efficiency. Compression and decompression time are also solid and quantified measures for assessing efficiency. Decompression time is often more relevant, as real 3D applications need to load and to decompress data in real-time, whereas such data might have been compressed offline (once and for all). Memory consumption might also be taken into account, as it is strongly related to whether decompression can run in real-time. Progressiveness is the ability to load the data partially and to get something already useable. The partially loaded data may be displayed as a coarse version of the final result, and further loading only adds smaller (and smaller) details. Data random access is the ability to load a small, well-located part of the data without having to read/load other parts (or in a minimum of time). Temporal correlation, if the data is animated over time, is the ability to exploit correlation between frames (i.e. states of the data at different times) to further compress the data.
Despite their many efforts, most of the methods listed above heavily focus on the efficiency of coding the 3D point positions instead of their attributes. These attributes are critical in rendering the point cloud/3D model with high quality. The size of these vertex attributes is also significant compared to the size of vertex positions.
Some methods still provide valid solutions, yet not efficient enough:
- The paper “A Generic Scheme for Progressive Point Cloud Coding” by Huang & al. teaches to encode colors (a per-vertex attribute) with a linear de-correlation transform followed by adaptive quantization along each transformed axis. While this scheme is fast, it can only achieve coding efficiency similar to an octree-based method.
- The paper “Octree-based Point-Cloud Compression” by Huang & al. teaches to encode positions and colors in separate octree structures.
- The paper “Point cloud attribute compression with graph transform” by Zhang & al. focuses on compression of vertex attributes and introduces a method based on “graph transform”.
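The de-correlation-plus-quantization idea of the first listed scheme can be illustrated by the following sketch (the use of PCA via SVD as the linear transform, the bit budgets per axis, and the function name are all illustrative assumptions; the cited paper's exact transform may differ): per-vertex RGB values are de-correlated with a linear transform, then each transformed axis is quantized independently with a step adapted to its spread.

```python
import numpy as np

def decorrelate_and_quantize(colors, bits=(6, 4, 2)):
    """De-correlate RGB colors (PCA via SVD) and quantize each axis separately."""
    c = np.asarray(colors, dtype=float)
    mean = c.mean(axis=0)
    centered = c - mean
    _, _, vt = np.linalg.svd(centered, full_matrices=False)  # principal axes
    t = centered @ vt.T                    # de-correlated coordinates
    q = []
    for axis, b in enumerate(bits):        # fewer bits for low-variance axes
        lo, hi = t[:, axis].min(), t[:, axis].max()
        step = (hi - lo) / (2 ** b - 1) or 1.0   # guard against a flat axis
        q.append(np.round((t[:, axis] - lo) / step).astype(int))
    return np.stack(q, axis=1), mean, vt

# Toy example: four colors; the decoder would need `mean`, `vt` and the ranges.
q, mean, vt = decorrelate_and_quantize(
    [[255, 0, 0], [0, 255, 0], [0, 0, 255], [128, 128, 128]])
```

Allocating more bits to the first (highest-variance) transformed axis is what makes such a scheme cheaper than quantizing raw RGB uniformly, though, as noted above, its coding efficiency remains comparable to octree-based approaches.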
The main difficulty is that, unlike traditional images and videos where the attributes (e.g. pixel colors) lie on a completely regular (e.g. grid-like) structure, here the attributes lie on an unstructured and/or sparse point cloud and are thus difficult to compress.
Within this context, there is still a need for an improved way to compress a modeled object that represents a real object.