With an ever-growing desire to view more information at higher quality, large-screen displays have become quite popular. Displays of increasing resolution and size continue to emerge in televisions, computer monitors, and other video devices. Until recently, large-screen displays were typically too costly, physically unwieldy, or simply unavailable. Video projectors provided one solution, enabling a wide range of communication and entertainment functions by offering a significantly greater display area at relatively low cost. These devices have found application in conference-room presentations, home theaters, classroom training, and advertising billboard displays.
Similar to other video device technologies, video projectors continue to advance in displayable pixel resolution and light output. Today's commodity projectors are brighter, offer better quality, and are often less expensive than those of prior years. Highly portable projectors (in both weight and size) are also becoming readily available. Commodity projectors are no longer constrained to dimly lit rooms with well-prepared display surfaces. A video projector's small physical size relative to its large projection output therefore remains appealing.
Even with these improvements, however, it is still difficult or impossible for a single commodity projector to achieve very high resolutions, project over vast areas, or create bright projections on very bright surfaces (for example, near daylit windows). Applications demanding such display qualities, however, are becoming more common. The benefits of increased resolution, brightness, and larger display surface area have proven useful for reaching larger audiences and providing full-scale, life-sized immersive environments. Unfortunately, construction of such large displays is complex and costly.
One common technique, grouping multiple projectors together and tiling their projection output to produce a large-screen display of any desired size, presents challenging problems with registration (that is, alignment of projector pixels). Color and luminosity variance across separate devices, and even within a given device, is difficult to correct. Minor shape or geometric inconsistencies of the display surface can also hinder adequate results. Projector lumens, or light output, may not be adequate for brighter locations. Synchronizing content delivery to the individual components forming the larger display is an additional hurdle to overcome. Some of these problems apply to single-projector displays as well.
Solutions to some of these system problems take many forms. Many demand precise pixel and color alignment through manual methods that involve physical adjustment of the projector placement. If the output pixels from one projector are not close enough to those from another projector, a visible gap may occur between the projections on the composite display. Likewise, overlapping pixels across projectors produce bright seams that are also objectionable. High-end projectors with specialized lens optics or edge blending/blurring filters may be available to reduce some of these problems, but they are far from optimal.
Specialized projectors and mounting hardware, measurement tools, and tedious calibration methods are additional requirements that add to the resource costs and complexities of physical projector alignment, which can become too demanding for the average user. The advanced skills and time required are more than most will invest. In many configurations, physical alignment may even be impossible with projectors having limited optic pathways, or with even slightly irregular display surfaces. When failed lamps must be replaced, the calibration methods often need repeating.
What is needed is a system that provides an easy calibration and playback mechanism offering the typical user an automated method to create a composite display from one or more commodity projectors, even in the presence of high ambient light levels. This method should offer a relatively quick, one-time calibration function performed after casual projector placement or subsequent changes. Commonly owned U.S. patent application Ser. No. 12/728,838, entitled “Multi-projector display system calibration,” describes such a one-time calibration system.
Digital still cameras are often used by multi-projector display systems (such as the one described in U.S. patent application Ser. No. 12/728,838) to gather input for a calibration process. These cameras can capture high-resolution images of the environment for detailed measurement and analysis. Exposure settings may be accurately configured, and the capture event (exposure) can be directly initiated or otherwise controlled by remote shutter signaling or the like. The moment of exposure is thus deterministic (occurring after shutter signaling), and the single received frame marks the end of a capture interval. Therefore, a process requiring many exposures of varied content can be easily and optimally automated as a loop: prepare content, capture it, receive the captured result, repeat.
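The deterministic loop described above can be sketched as follows. This is a minimal illustration, not the system of Ser. No. 12/728,838; the `Projector` and `StillCamera` classes are hypothetical stand-ins for real device interfaces.

```python
# Sketch of a deterministic still-camera calibration capture loop.
# Projector and StillCamera are hypothetical stubs, not a real device API.

class Projector:
    def show(self, pattern):
        # Display the calibration test pattern on the projection surface.
        self.current = pattern

class StillCamera:
    def __init__(self, projector):
        self.projector = projector

    def capture(self):
        # Shutter signaling initiates the exposure; the single returned
        # frame marks the end of the capture interval.
        return f"frame({self.projector.current})"

def capture_patterns(patterns):
    """Prepare content, capture it, receive the result, repeat."""
    projector = Projector()
    camera = StillCamera(projector)
    frames = []
    for pattern in patterns:
        projector.show(pattern)          # prepare content
        frames.append(camera.capture())  # capture and receive result
    return frames
```

Because each capture is initiated explicitly and returns exactly one frame, there is no ambiguity about which frame corresponds to which test pattern.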
Digital still cameras, however, may exhibit slow recycle times between exposures. Recycle time may be slowed by internal processing of high-resolution image content or by transfer and storage of large frame data. This can diminish the performance of the calibration capture step. It also makes it harder to quickly position the camera toward the calibration display, since gathering multiple frames to determine an acceptable orientation of the camera's view takes longer.
In a multi-projector system (such as the one described in U.S. patent application Ser. No. 12/728,838), an inexpensive webcam is used instead of a more costly digital still camera. Higher frame rates enable user interfaces with video feedback to aid positioning the camera and aligning its view with a display surface. Lower device resolutions may reduce storage requirements. Because the video frame rate can be fast, it is possible to shorten the time for a calibration test pattern capture process. However, using a video device to capture calibration content presents several challenges. A webcam generally offers less control of exposure settings and no exposure notification compared with a still camera. The limited methods available to adjust exposure sensitivity may indirectly change the video frame rate or a noise characteristic. Capture methods tend to require more complex interfaces, such as asynchronous delivery of a continuous sequence, or stream, of captured frames.
Some video cameras and webcams are configurable to capture still images by using internal resolution-enhancement or frame-averaging methods to deliver higher-resolution frames. If the quality and performance are acceptable for a calibration process, then these still-image interfaces may be used.
However, when video devices are used and a continuous stream of captured frames arrives at a calibration process, one key challenge is determining when a delivered frame will contain a known element captured by the device. Additionally, it is difficult to determine when a known element can be changed for the next capture without affecting a previously captured element in a frame that has yet to be delivered.
For example, a calibration process requires that views of several calibration test patterns be captured and stored for later processing. The process begins by outputting a calibration test pattern onto a display surface. With a still camera, a signaling event sent after the calibration test pattern is displayed begins the capture operation. After the exposure event (when the captured content is determined to be complete through notification, detection of delivery to a storage medium, transmission from the camera to the calibration process, or a known exposure interval), the process continues with the next calibration test pattern. With a webcam, there is no exposure signaling. The video capture process is started (or already active), and it is initially unknown which frame in the delivery sequence contains the expected capture of the calibration test pattern.
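One way to handle the absence of exposure signaling is to scan the delivered frame sequence until the expected test pattern is detected. The sketch below assumes a hypothetical `contains_pattern` detector supplied by the caller; it is an illustration of the problem, not a method claimed by the source.

```python
# Sketch: with a webcam there is no exposure signaling, so the process
# must consume delivered frames until the expected calibration test
# pattern is observed. contains_pattern is a hypothetical detector.

def wait_for_pattern(frame_stream, contains_pattern, max_frames=120):
    """Consume frames until one shows the expected pattern.

    Returns (index, frame) for the first matching frame in the
    delivery sequence.
    """
    for i, frame in enumerate(frame_stream):
        if contains_pattern(frame):
            return i, frame
        if i >= max_frames:
            raise TimeoutError("pattern never observed in stream")
    raise RuntimeError("stream ended before pattern was observed")
```

The index of the first matching frame gives a rough measure of the delivery delay for that pattern, but only in units of delivered frames.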
Variables contributing to the delivery delay and uncertainty include the time to enable a video capture process, frame buffers in the device or in a driver, data compression or decompression, transmission time, slow gain or frame-averaging methods, and other delays between the actual frame capture event and the point of frame access. It is equally unclear when to display the next calibration test pattern, as changes to display content mid-frame (i.e., during the interval of video frame capture) can yield unexpected results within the frame.
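Given these buffering delays and the risk of mid-frame content changes, one conservative rule is to treat a capture as settled only after several consecutive delivered frames show the current pattern, and only then change the displayed content. This is a sketch of that idea under stated assumptions; the `matches` predicate and the threshold `k` are hypothetical.

```python
# Sketch: a frame in flight may still show the previous pattern, and a
# frame spanning a content change may be corrupt. Requiring k consecutive
# matching frames before advancing guards against both. The matches
# predicate and k are illustrative assumptions.

def frame_is_settled_index(frames, matches, k=3):
    """Return the index at which k consecutive matching frames have
    been seen, or None if the sequence never settles."""
    run = 0
    for i, frame in enumerate(frames):
        run = run + 1 if matches(frame) else 0
        if run >= k:
            return i
    return None
```

The cost of this rule is latency: each test pattern is held on screen for at least k delivered frames before the next one is shown.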
Accurate frame rate and timestamp information may not be available from the webcam device. Even if the frame rate is well controlled and known, it alone is insufficient to determine the frame delay between a capture time and a delivery time, or the time between signaling a capture stream to start and its actual start. Frame timestamps, if available and associated with a capture process clock, can help to determine the delivery delay, but cannot be used alone to determine how quickly capture content can safely be changed.
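When timestamps on the capture-process clock are available, the delivery delay can be estimated per frame as the difference between receive time and capture timestamp. The sketch below is an assumption-laden illustration of that arithmetic; as the text notes, such an estimate bounds delivery delay but does not by itself say how soon displayed content may change.

```python
# Sketch: estimating delivery delay from frame timestamps, assuming the
# timestamps share a clock with the receiving process. Each sample is a
# (frame_timestamp, receive_time) pair in seconds. This characterizes
# delivery lag only; it does not decide when content may safely change.

def estimate_delivery_delay(samples):
    """Return (average_delay, worst_case_delay) over the samples."""
    delays = [recv - ts for ts, recv in samples]
    return sum(delays) / len(delays), max(delays)
```

The worst-case delay is usually the more useful figure, since a calibration process must not change displayed content until the slowest in-flight frame has been delivered.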