The ability to capture, store, and reconstruct images of large three-dimensional (3D), real-world environments is becoming increasingly important in a variety of applications, such as interactive walkthroughs, telepresence, and educational tourism. In recent years, the field of image-based rendering (IBR) has attempted to enable these applications by capturing large collections of images (e.g., thousands of images). These images are re-sampled to create photorealistic novel views of the environment without necessarily reconstructing a detailed 3D model or simulating global illumination.
For interactive walkthroughs, which are computer graphics applications where an observer moves within a virtual environment, image access cannot be restricted to the viewpoints or paths along which the images were captured, but instead must be supported along arbitrary contiguous viewpoint paths through the environment. Furthermore, disk-to-memory bandwidth limitations require algorithms that reduce both the size of the images on disk and the amount of data that must be transferred to main memory as an observer navigates through a captured 3D environment.
Image compression techniques, such as JPEG (Joint Photographic Experts Group), two-dimensional (2D) wavelets, and JPEG2000, exploit intra-image redundancy to reduce image size, but they do not take advantage of inter-image redundancy. Video compression techniques, such as MPEG (Moving Picture Experts Group), use complex motion-estimation algorithms to exploit inter-image coherence, achieving a significant improvement in overall compression performance. However, motion estimation is designed for linear sequences of images, making it ill-suited for image access along arbitrary viewpoint paths.
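To illustrate the inter-image coherence that motion estimation exploits, the following sketch implements exhaustive block-matching motion estimation between two frames. It is an illustrative simplification, not the MPEG algorithm itself: for each block of the target frame, it searches a small window in the reference frame for the offset minimizing the sum of absolute differences (SAD), and returns the motion field plus the prediction residual. When two frames overlap heavily, the residual is mostly near zero and compresses far better than the raw frame; the function name and parameters are hypothetical.

```python
import numpy as np

def block_match(ref, tgt, block=8, search=4):
    """Exhaustive block-matching motion estimation (illustrative sketch).

    For each block-by-block tile of the target frame, find the offset
    (dy, dx) within +/- `search` pixels whose reference-frame block
    minimizes the sum of absolute differences (SAD). Returns the motion
    field and the residual (target minus motion-compensated prediction).
    """
    h, w = tgt.shape
    mv = np.zeros((h // block, w // block, 2), dtype=int)
    residual = np.zeros_like(tgt)
    for by in range(0, h - block + 1, block):
        for bx in range(0, w - block + 1, block):
            cur = tgt[by:by + block, bx:bx + block]
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = by + dy, bx + dx
                    if y < 0 or x < 0 or y + block > h or x + block > w:
                        continue  # candidate block falls outside the reference frame
                    cand = ref[y:y + block, x:x + block]
                    sad = np.abs(cur.astype(int) - cand.astype(int)).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            dy, dx = best
            mv[by // block, bx // block] = (dy, dx)
            residual[by:by + block, bx:bx + block] = (
                cur - ref[by + dy:by + dy + block, bx + dx:bx + dx + block]
            )
    return mv, residual
```

Because the search proceeds linearly from one reference frame, this scheme presumes a fixed prediction order, which is exactly the property that makes standard video coding awkward for arbitrary viewpoint paths: the reference needed to decode a frame depends on the path taken to reach it.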
Thus, there exists a need for improved techniques for compressing and decompressing images of real-world environments that overcome the above-mentioned drawbacks.