Typical stereo rendering involves computing a dense optical flow field between pairs of cameras and then interpolating a viewpoint over the entire 3D image. This is difficult, and in some cases, such as semi-transparent objects, arguably impossible. Even for ordinary opaque objects it is challenging, because most optical flow algorithms are too slow to run in real time. In other words, interpolating 3D images from captured 2D images is processor intensive. As a result, generating 3D images and/or 3D video in real time to achieve a desired playback experience can be difficult. It is therefore desirable to render 3D images and/or 3D video without optical flow interpolation, in real time and/or as the image or video is streamed.
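To make the conventional approach concrete, the following is a minimal sketch of flow-based view interpolation, assuming a precomputed horizontal flow (disparity) field between a left and right camera; the function name and per-pixel nearest-neighbor splatting are illustrative choices, not a description of any specific system, and the sketch deliberately ignores occlusions and hole filling that a production renderer must handle:

```python
import numpy as np

def interpolate_view(left, flow, t):
    """Forward-warp the left image toward an intermediate viewpoint.

    left: H x W grayscale image (float array) from the left camera
    flow: H x W horizontal flow from left to right camera, in pixels
    t:    interpolation parameter in [0, 1]; 0 = left view, 1 = right view

    Each pixel is shifted along its scan line by t * flow using
    nearest-pixel splatting. Pixels never written to remain holes,
    illustrating one reason this approach is hard in practice.
    """
    h, w = left.shape
    out = np.zeros_like(left)
    filled = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            # Destination column at the interpolated viewpoint.
            xt = int(round(x + t * flow[y, x]))
            if 0 <= xt < w:
                out[y, xt] = left[y, x]
                filled[y, xt] = True
    return out, filled

# Tiny example: a 1 x 4 row with a uniform flow of 2 pixels, warped
# halfway (t = 0.5), so every pixel shifts right by 1.
left = np.array([[1.0, 2.0, 3.0, 4.0]])
flow = np.full((1, 4), 2.0)
out, filled = interpolate_view(left, flow, 0.5)
```

Even this naive per-pixel loop runs in O(H*W) per frame before the flow field itself is computed; dense flow estimation is far more expensive, which is why the real-time constraint described above is difficult to meet.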