1. Field
Embodiments of the present invention relate generally to creation of three-dimensional (“3D”) models. More particularly, embodiments of the present invention relate to methods and techniques of interactively extruding two-dimensional (“2D”) pixel-based images into polygon-based 3D models.
2. Description of the Related Art
While three-dimensional (3D) content has certainly become accessible in most households in the form of game consoles and personal computers, it largely remains an artifact that is consumed but not produced by end-users. End-user development is advancing quickly, and end-users employ all kinds of authoring tools, such as word processors and slide presentation tools, to author their own content. 3D content is lagging behind not because of a hardware challenge—on the contrary, even the cheapest Personal Computers (PCs) now feature impressive 3D rendering capabilities—but because 3D authoring tools are mostly geared toward professional developers with proper training, time, and motivation. Most 3D authoring tools use compositional approaches that have grown out of Computer-Aided Design (CAD) software packages by conceptually extruding two-dimensional (2D) authoring approaches to 3D. Instead of composing 2D models from rectangles and circles, users construct 3D models from cubes and spheres. While certainly feasible for professional users, these approaches can be inefficient and awkward in the hands of computer end-users trying to assemble irregularly shaped models.
Sketching approaches are highly promising for 3D end-user development. Sketching, often using pen-based interfaces and gestures, allows users to directly draw or annotate shapes. One line of work uses domain semantics to disambiguate sketches. One example, the Electronic Cocktail Napkin system, interprets pen-drawn sketches to create diagrams with semantics by matching sketches against a large and expandable set of graphical primitives with user-defined semantics. Digital Clay is another example, which not only recognizes sketches but can also construct appropriate 3D models. Sketch VR recognizes 2D geometric shapes that it can project into 3D architectural spaces. Gesture-based interfaces have also been used to create 3D models of mechanical designs. In CAD/CAM applications, models are built by gradually attaching facets to 3D structures.
Freestyle sketching approaches do not rely on domain knowledge, but use sketching to create arbitrary 3D objects that are not interpreted semantically by the computer. Teddy is a sketching interface for creating all kinds of sophisticated 3D models that "have a hand-crafted feel (such as sculptures and stuffed animals) which is difficult to accomplish with most conventional modelers." Teddy's inflation mechanism is based on closed regions. It does not use image information to generate models, nor does it include texturing tools. Models created with Teddy do not include skins and need to be textured with third-party painting applications.
A number of image-based extrusion approaches add three-dimensional looking effects to two-dimensional images without creating three-dimensional models. Williams' automatic airbrush creates compelling three-dimensional illustrations out of images by applying complex shading functions to selected regions. Simpler versions of this idea are found in popular paint programs. For instance, the bevel function in Photoshop is often used to create three-dimensional looking versions of two-dimensional shapes such as 3D buttons. However, the results of these algorithms remain two-dimensional shapes with no user-accessible 3D model information.
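As an illustrative sketch (not taken from any of the cited systems, and not the claimed invention), a minimal bevel-style effect of the kind described above can be implemented by mapping each pixel's distance from the edge of a binary region to brightness. The function name, bevel width, and grayscale ramp below are all assumptions chosen for illustration; the point is that the output is still only a flat 2D image, with no 3D geometry behind it.

```python
# Hypothetical minimal bevel shading over a binary 2D mask: pixels near the
# region edge ramp up in brightness, giving a 3D-button look while the data
# remains a flat 2D image with no user-accessible 3D model information.
from collections import deque

def bevel_shade(mask, bevel_width=2):
    """Return a grayscale image (values 0-255) shading a binary mask."""
    h, w = len(mask), len(mask[0])
    # Multi-source BFS: distance (in pixels) from each inside pixel to the
    # nearest outside pixel, seeded with every outside pixel at distance 0.
    dist = [[None] * w for _ in range(h)]
    queue = deque()
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                dist[y][x] = 0
                queue.append((y, x))
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and dist[ny][nx] is None:
                dist[ny][nx] = dist[y][x] + 1
                queue.append((ny, nx))
    # Map edge distance to brightness: a ramp across the bevel rim,
    # then a flat "top" for pixels deeper inside the shape.
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if mask[y][x]:
                d = min(dist[y][x], bevel_width)
                out[y][x] = 128 + (127 * d) // bevel_width  # 128..255 ramp
    return out

# A 5x5 square region inside a 7x7 image, with a one-pixel empty border.
mask = [[1 if 1 <= y <= 5 and 1 <= x <= 5 else 0 for x in range(7)]
        for y in range(7)]
img = bevel_shade(mask)
```

Note that, as the text observes for such algorithms generally, nothing here produces polygons or depth values a user could edit as a 3D model; the effect lives entirely in pixel intensities.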
Hence, there exists a need in the art for systems, methods, and techniques for interactively extruding 2D pixel-based images into polygon-based 3D models.