1. Field
Exemplary embodiments relate to an image processing method, and more particularly, to an image processing method which determines and depth-unfolds a depth folding region in an input depth image.
2. Description of the Related Art
Currently, three-dimensional (3D) image information is widely used in a variety of applications. In general, 3D information includes geometry information and color information.
Geometry information may be obtained using a depth image. A depth image may be indirectly obtained using software, for example, computer vision technology, or directly obtained using a hardware device such as a depth camera.
According to an operating principle of the depth camera, light such as infrared (IR) light is irradiated onto an object, and a Time of Flight (TOF) is measured by sensing the reflected light, so that a distance (depth) from the depth camera to each part of the object may be measured.
In a method of calculating a TOF and a depth, a phase difference between an emitted wave and the reflected wave is measured to calculate a TOF of specific light such as IR light. However, when the phase difference exceeds 360 degrees, the round-trip travel time may be erroneously calculated.
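The phase-based depth calculation described above, including the wraparound error, can be sketched as follows. This is a minimal illustration, not the disclosed method; the 20 MHz modulation frequency and the function name `folded_depth` are assumptions chosen for the example.

```python
import math

C = 3.0e8  # speed of light in m/s

def folded_depth(true_depth_m, mod_freq_hz):
    """Depth a phase-based TOF camera would report, including folding.

    The round-trip phase is wrapped to [0, 2*pi), so any depth beyond
    the unambiguous range c / (2 * f) is reported too small.
    """
    phase = (4 * math.pi * mod_freq_hz * true_depth_m / C) % (2 * math.pi)
    return C * phase / (4 * math.pi * mod_freq_hz)

f = 20e6  # assumed 20 MHz modulation -> unambiguous range c/(2f) = 7.5 m
print(folded_depth(3.0, f))  # within range: reported correctly as 3.0 m
print(folded_depth(9.0, f))  # beyond range: folds to 9.0 - 7.5 = 1.5 m
```

At 20 MHz the phase completes a full 360-degree cycle every 7.5 m of depth, so a point 9 m away is indistinguishable from one 1.5 m away using a single frequency.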
The above-described phenomenon is referred to as range folding. In particular, such a phenomenon occurring in a depth image obtained by a depth camera is referred to as depth folding. The depth folding may be overcome by introducing, into a depth camera, a plurality of light sources having different frequencies. However, such a method may increase hardware complexity.
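The multi-frequency approach mentioned above can be sketched as follows: two modulation frequencies have different unambiguous ranges, so the true depth is the candidate consistent with both folded measurements. The frequencies, search limit, and function names here are illustrative assumptions, not the disclosure of any particular camera.

```python
import math

C = 3.0e8  # speed of light in m/s

def folded_depth(true_depth_m, mod_freq_hz):
    """Folded depth reported by a single-frequency phase measurement."""
    phase = (4 * math.pi * mod_freq_hz * true_depth_m / C) % (2 * math.pi)
    return C * phase / (4 * math.pi * mod_freq_hz)

def unfold_two_freq(d1, f1, d2, f2, max_depth_m=60.0):
    """Pick the depth consistent with folded measurements at two frequencies.

    Each measurement d_i constrains the depth to d_i + k * r_i for integer k,
    where r_i = c / (2 * f_i) is that frequency's unambiguous range. The pair
    of candidates that (nearly) coincide gives the unfolded depth.
    """
    r1, r2 = C / (2 * f1), C / (2 * f2)
    best, best_err = None, float("inf")
    for k in range(int(max_depth_m // r1) + 1):
        for m in range(int(max_depth_m // r2) + 1):
            c1, c2 = d1 + k * r1, d2 + m * r2
            err = abs(c1 - c2)
            if err < best_err:
                best, best_err = (c1 + c2) / 2, err
    return best

# Assumed example: true depth 9 m, measured at 20 MHz and 16 MHz.
f1, f2, true_depth = 20e6, 16e6, 9.0
d1 = folded_depth(true_depth, f1)  # folds to 1.5 m (range 7.5 m)
d2 = folded_depth(true_depth, f2)  # 9.0 m fits within the 9.375 m range
print(unfold_two_freq(d1, f1, d2, f2))  # recovers 9.0 m
```

Each extra light source at a different frequency adds such a constraint in hardware, which is why the approach widens the unambiguous range at the cost of camera complexity.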