A. Technical Field
The present invention pertains generally to projector-camera systems, and relates more particularly to adaptive projector display systems.
B. Background of the Invention
The increasing prevalence of multimedia systems, such as computer systems, gaming systems, videoconference systems, projector systems, and home theater systems, has resulted in projector display systems operating within a wide variety of conditions. Adaptive projector display systems have been developed to address projection under various conditions. For example, research into adaptive projector display systems has attempted to find ways to correct for color distortions, display surface distortions, and other calibration issues. The research in this area has resulted in methods that improve the robustness of projector systems.
As these systems are increasingly being used by average consumers who are unfamiliar with projection technology and calibration techniques, it is beneficial to develop calibration and correction methods that require little or no user input. There is a sizable body of literature related to adaptive projector displays. Accordingly, it would be impractical to summarize all of the prior attempts. Rather, presented below are some approaches to calibration that involve little or no user interaction.
Raij and Pollefeys proposed an automatic method for defining the display area on a plane, removing the need for physical fiducials and measurement of the area defined by them. Planar auto-calibration can be used to determine the intrinsics of an array of projectors projecting on a single plane. The camera, projectors, and display plane are then reconstructed using a relative pose estimation technique for planar scenes. Raij and Pollefeys describe their technique in “Auto-Calibration of Multi-Projector Display Walls,” In Proc. Int'l. Conf on Pattern Recognition (ICPR), Volume I, pages 14-17, 2004, which is incorporated herein by reference in its entirety.
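Planar auto-calibration of this kind builds on plane-induced homographies relating each projector and camera to the display plane. As general background for how such a homography can be recovered from point correspondences, the following is a minimal sketch of the standard direct linear transform (DLT); it illustrates only the textbook homography step, not the Raij and Pollefeys calibration pipeline itself, and the function names are illustrative.

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    (each an Nx2 array, N >= 4) using the standard direct linear
    transform (DLT).  Illustrative sketch only."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on h.
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The solution is the right singular vector of A with the
    # smallest singular value (the null-space direction of A).
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]          # fix scale so H[2,2] = 1

def apply_h(H, pts):
    """Apply a homography to Nx2 points, with homogeneous division."""
    p = np.hstack([pts, np.ones((len(pts), 1))]) @ H.T
    return p[:, :2] / p[:, 2:3]
```

Given four or more projector-to-plane correspondences per device, homographies of this form are the raw material from which planar auto-calibration methods recover the device intrinsics.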
Raskar and others investigated how to use projectors in a flexible way. Their basic display unit is a projector with sensors, computation, and networking capability. It can create a seamless display that adapts to the surfaces or objects on which it is projecting. Display surfaces with complex geometries, such as a curved surface, can be handled. Their technique is described in R. Raskar, M. S. Brown, R. Yang, W. C. Chen, G. Welch, H. Towles, B. Seales, and H. Fuchs, “Multi-projector displays using camera-based registration,” In VIS '99: Proceedings of the conference on Visualization '99, pages 161-168, Los Alamitos, Calif., USA, 1999 (IEEE Computer Society Press), which is incorporated herein by reference in its entirety.
Yang and Welch disclose using features in the imagery being projected for matching between a pre-calibrated projector and camera to automatically determine the geometry of the display surface. One issue with this approach, however, is that the estimation algorithm works in an iterative manner and is not suitable for continuous correction in real time. Yang and Welch discuss their technique in R. Yang and G. Welch, “Automatic projector display surface estimation using every-day imagery,” Proc. Ninth International Conference in Central Europe on Computer Graphics, Visualization, and Computer Vision, 2001, which is incorporated herein by reference in its entirety.
Instead of matching features across images, there are active techniques where calibration aids are embedded into user imagery. For instance, D. Cotting and others discussed embedding imperceptible calibration patterns into the projected images. The approach takes advantage of the micro-mirror flip sequence in Digital Light Processing (DLP) projectors and slightly modifies the per-pixel intensity to let the synchronized camera capture the desired pattern. These approaches can be found in D. Cotting, M. Naef, M. Gross, and H. Fuchs, “Embedding Imperceptible Patterns Into Projected Images For Simultaneous Acquisition And Display,” ISMAR '04: Proceedings of the 3rd IEEE/ACM International Symposium on Mixed and Augmented Reality, pages 100-109, Washington, D.C., USA, 2004 (IEEE Computer Society); and D. Cotting, R. Ziegler, M. Gross, and H. Fuchs, “Adaptive Instant Displays: Continuously Calibrated Projections Using Per-Pixel Light Control,” Proceedings of Eurographics 2005, Eurographics Association, pages 705-714, 2005 (Dublin, Ireland, Aug. 29-Sep. 2, 2005), each of which is incorporated herein by reference in its entirety. However, one major drawback of such an approach is that it requires a portion of the projector's dynamic range to be sacrificed, which will, in turn, cause a degradation of the imagery being projected.
One approach demonstrated the ability to calibrate a projector on an arbitrary display surface without modifying the projected imagery. This approach was disclosed by T. Johnson and H. Fuchs in “Real-Time Projector Tracking on Complex Geometry using Ordinary Imagery,” In Proc. of IEEE International Workshop on Projector-Camera Systems (ProCams) (2007), which is incorporated herein by reference in its entirety. This approach employed a calibrated stereo camera pair to first reconstruct the surface by observing structured light patterns provided by the projector. The approach also assumed the surface to be piecewise planar and used random sample consensus (RANSAC) to fit a more precise geometric description of the display surface. By matching features between the user image stored in the frame buffer and the projected image captured by a stationary camera, the approach re-estimates the pose of the projector.
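The piecewise-planar fitting step can be illustrated with a generic RANSAC plane fit over reconstructed 3-D points. The sketch below is a minimal, hypothetical illustration of that general technique; the parameter names and thresholds are ours and are not taken from the Johnson and Fuchs paper.

```python
import numpy as np

def fit_plane_ransac(points, n_iters=200, inlier_thresh=0.01, seed=None):
    """Fit a plane (unit normal n, offset d, with n . p = d) to an Nx3
    array of 3-D points using RANSAC.  Illustrative sketch only."""
    rng = np.random.default_rng(seed)
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iters):
        # Sample three distinct points and form a candidate plane.
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:            # degenerate (collinear) sample
            continue
        normal /= norm
        d = normal @ p0
        # Score the candidate by how many points lie near the plane.
        inliers = np.abs(points @ normal - d) < inlier_thresh
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine with a least-squares fit over the inlier set (via SVD):
    # the plane normal is the direction of least variance.
    inlier_pts = points[best_inliers]
    centroid = inlier_pts.mean(axis=0)
    _, _, vt = np.linalg.svd(inlier_pts - centroid)
    normal = vt[-1]
    return normal, normal @ centroid, best_inliers
```

A piecewise-planar surface can then be described by running such a fit repeatedly, removing each plane's inliers before fitting the next.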
Most of these techniques assume a fixed viewing point, and they typically employ a stereo camera pair for reconstruction and tracking of the projector with a constant intrinsic projection matrix. While these methods offer some advantages over prior display options, system calibration is often a tedious undertaking. Moreover, re-calibration is required to render for new viewing positions.