As three-dimensional (“3D”) display devices become more ubiquitous in consumer electronics (e.g., liquid crystal display screens, plasma screens, cellular phones, etc.), generating 3D content for display on these devices has become a growing area of research and development. Accordingly, various real-time two-dimensional (“2D”) to 3D image conversion technologies have been developed to derive 3D content from existing 2D video sources, such as DVD, Blu-ray, and over-the-air broadcasting. However, current technologies remain unsuitable for long-term use due to their high computational complexity and/or unsatisfactory image quality.
Current techniques for 2D-to-3D video conversion reconstruct 3D objects from frame-to-frame motion obtained through video content analysis. The resulting motion vectors can be further combined with other known techniques, such as linear perspective and color-based segmentation, to obtain a qualitative depth map. However, calculating motion vectors and performing semantic video content analysis significantly increase computational complexity. For the foregoing reasons, there is a need for new methods and apparatuses for 2D-to-3D image conversion with lower computational complexity and implementation costs.
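As an illustration of why motion-based depth estimation is computationally expensive, the following minimal sketch derives per-block motion magnitudes by exhaustive block matching between two grayscale frames and treats larger motion as nearer depth (motion parallax). The block size, search range, and sum-of-absolute-differences (SAD) criterion are assumptions chosen for the example, not details taken from the text above.

```python
import numpy as np

def block_motion_magnitudes(prev, curr, block=8, search=4):
    """Per-block motion magnitude between two grayscale frames.

    Exhaustive block matching (sum of absolute differences) within
    +/- `search` pixels; ties prefer the smaller displacement. Under
    the motion-parallax assumption, larger magnitudes suggest objects
    nearer the camera, so mag / mag.max() gives a qualitative depth map.
    Note the nested loops: cost grows with frame area times the squared
    search range, which is the complexity burden discussed above.
    """
    prev = prev.astype(np.int32)
    curr = curr.astype(np.int32)
    h, w = prev.shape
    mag = np.zeros((h // block, w // block))
    for r in range(h // block):
        for c in range(w // block):
            y, x = r * block, c * block
            ref = prev[y:y + block, x:x + block]
            best_sad, best_d = None, 0.0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if yy < 0 or xx < 0 or yy + block > h or xx + block > w:
                        continue  # candidate window falls outside the frame
                    sad = np.abs(ref - curr[yy:yy + block, xx:xx + block]).sum()
                    d = np.hypot(dy, dx)
                    if best_sad is None or sad < best_sad or (sad == best_sad and d < best_d):
                        best_sad, best_d = sad, d
            mag[r, c] = best_d

    return mag

# Hypothetical usage: a textured frame shifted 2 px to the right, so
# interior blocks should report a displacement magnitude of 2.
rng = np.random.default_rng(0)
prev = rng.integers(0, 256, size=(32, 32))
curr = np.roll(prev, 2, axis=1)
mag = block_motion_magnitudes(prev, curr)
depth = mag / mag.max()  # qualitative depth map in [0, 1]
```

In practice this exhaustive search would be replaced or augmented with the linear-perspective and color-segmentation cues mentioned above, but even this toy version makes the quadratic cost of the motion search apparent.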