Recent advancements in video surveillance systems, machine vision systems in robotics and the automotive industry, and consumer electronic (CE) devices are largely due to rapid technological development in image processing techniques. Although various object segmentation methods are known for separating foreground objects from the background of an image, their complexity, accuracy, and computational resource requirements vary based on the objective to be achieved. In depth-based object segmentation methods, the use of a depth map for object segmentation may avoid many of the uncertainties of the object delineation process, as compared with methods that use a color image alone. However, existing depth sensors that provide the depth map still lack accuracy and lag behind the increasing resolution of RGB cameras. For example, the depth map may contain shadowy areas, where the light from the infrared (IR) emitters of the depth sensors does not propagate, resulting in areas of unknown depth. In addition, the depth map may be most uncertain at the boundary of an object, where the depth drops sharply and fluctuates strongly between image frames. These imperfections in the depth maps of modern depth sensors result in significant fluctuations on the boundary of a segmented object, which are especially visible between frames of a sequence of image frames, for example, a movie or other video. The resulting artifacts are visually unpleasant to a viewer. Therefore, it may be desirable to reduce the amount of boundary fluctuation and stabilize the object boundary for precise object segmentation and enhanced background substitution.
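To make the problem concrete, the following is a minimal sketch, not part of the disclosed invention, of depth-range foreground segmentation combined with a simple temporal smoothing of the segmentation mask. The depth thresholds (`near`, `far`), the smoothing factor `alpha`, and the function names are illustrative assumptions; the convention that a depth value of zero marks an unknown-depth (IR shadow) pixel follows common depth-sensor output formats.

```python
import numpy as np

def segment_foreground(depth, near=500, far=1500):
    """Binary foreground mask of pixels whose depth (e.g. in mm) lies in
    [near, far]. Pixels with depth 0 are treated as unknown depth
    (IR shadow areas) and assigned to the background. Thresholds are
    illustrative, not from the source document."""
    valid = depth > 0
    return valid & (depth >= near) & (depth <= far)

def stabilize_mask(prev_prob, mask, alpha=0.7):
    """Exponential moving average of the per-pixel foreground probability
    over successive frames; thresholding the averaged probability damps
    frame-to-frame fluctuation of the object boundary. alpha is an
    assumed smoothing factor."""
    prob = alpha * prev_prob + (1.0 - alpha) * mask.astype(np.float64)
    return prob, prob > 0.5
```

In this sketch a pixel must be classified as foreground over several consecutive frames before the stabilized mask accepts it, so a single-frame flicker of the raw depth mask near the object boundary does not propagate to the output.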
Further limitations and disadvantages of conventional and traditional approaches will become apparent to one of skill in the art, through comparison of the described systems with some aspects of the present disclosure, as set forth in the remainder of the present application and with reference to the drawings.