Cinematographers and directors of photography are greatly concerned with maintaining, across certain scene sequences, consistency and uniformity in the reproduction of the main subject. This is particularly important when reproducing flesh tones. On many occasions, cinematographers also want to apply a special “look” to a given scene, and they must be able to adjust the lighting set-up of a scene such that it is recorded on film with that particular “look”. Independently of what “look” is intended for a scene, it is just as important to predict how the film will “see” that scene. To achieve such results, cinematographers use devices called exposure (light, spot) meters, which come in two kinds: incident light meters and reflected light meters. An incident light meter is placed at the position to be occupied by the principal subject and measures the light arriving from all available sources on the set; as the name suggests, it measures how much light is incident on the subject. A reflected light meter, on the other hand, is placed at the position to be occupied by the camera and measures the fraction of the light falling on the subject that reaches the camera. Both types of meters come in a variety of designs that account for different lighting conditions by taking advantage of different light-collecting geometries.
Moreover, reflected light meters can be of two basic kinds: one measures the average brightness of a scene, while the other, the spot meter, measures the brightness of a specific area of the scene. The averaging meter is advisable only for certain indoor scenes, particularly those where the discrepancy between background and foreground illumination is not too large. Spot meters are a more sophisticated type of meter. Their use consists of finding a small area of the scene that serves as a good representative of the exposure of the whole scene, and then metering that area, often against a test gray card of known reflectance. The choice of the test area depends heavily on the cinematographer's judgment, and it becomes a non-trivial task for scenes that place the principal subject, for example, against a very bright background. In fact, independently of which type of light meter is chosen, the correctness of the exposure for a scene as a whole can never be determined solely from the numbers obtained from the meters; the process also relies heavily on the cinematographer's experience. The cinematographer's knowledge of the film stock used to capture the scenes (and the corresponding “film look” for that stock), in conjunction with the light meter measurements, will guide the decision on whether to keep the set as it is, change the backlighting, include some side-lighting, or make any other necessary modification. In general, such decisions are made with a certain “feel” in mind for each scene.
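The reflected-light metering described above can be sketched numerically. The following is a minimal illustration, not part of any patented method: it converts the luminance measured off a gray card into an exposure value (EV), assuming the common reflected-light calibration constant K = 12.5 and an ISO 100 film speed; both constants are stated assumptions.

```python
import math

# Illustrative sketch of a reflected (spot) meter reading: the measured
# luminance of a gray card is converted to an exposure value (EV).
# K = 12.5 is a commonly used reflected-light calibration constant and
# ISO 100 is an assumed film speed -- both are assumptions for this sketch.
K = 12.5     # reflected-light meter calibration constant
ISO = 100    # assumed film speed

def exposure_value(luminance_cd_m2: float) -> float:
    """EV at ISO 100 for a measured scene luminance in cd/m^2."""
    return math.log2(luminance_cd_m2 * ISO / K)

# A gray card reading 128 cd/m^2 under the set lighting:
ev = exposure_value(128.0)
print(round(ev, 2))  # 10.0
```

The cinematographer would compare such readings across takes to check the exposure consistency the passage describes.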
With the aid of the meters, the cinematographer attempts to match areas of the scene with the corresponding regions of the film's dynamic range, once a point of normal exposure has been established. The dynamic range of a film determines the maximum luminance ratio (lightest area to darkest area) in a scene that the film can reproduce well. In the motion picture industry, a widely used method for verifying the exposure constancy and uniformity of a scene, after all adjustments are made by the cinematographer, is the telecine transfer. This method consists of capturing the scene onto film, processing the film, and scanning the processed film in a telecine. The telecine is a machine that transfers the information contained in the film to a video format, while also allowing an operator (colorist) to perform, as needed, artistic changes to the scene content. Moreover, although color film systems can reproduce significantly more color hues, saturations, and gray tonalities than video systems, images created from a telecine scanner (of scenes originally captured on film) can give observers the sensation that the entire range of the film's color reproduction has been preserved. The result of this telecine transfer process is known to those skilled in the art as video “dailies”. An acceptable scene reproduction is obtained from dailies on a trial-and-error basis, which is generally time-consuming. If the cinematographer is not pleased with the look of the scene on the daily, the whole process is repeated, starting with new adjustments and ending with another telecine transfer, until the desired look is obtained. There are, however, certain drawbacks to this process.
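The luminance-ratio notion above is usually expressed in photographic stops, i.e., the base-2 logarithm of the ratio. The following sketch checks a scene's contrast against an assumed film dynamic range; the 1000:1 figure for the film is an illustrative assumption, not a measured stock characteristic.

```python
import math

# Sketch: expressing a scene's lightest-to-darkest luminance ratio in stops
# and comparing it to a film's usable dynamic range.  The 1000:1 film range
# is an invented illustration, not a property of any particular stock.
def ratio_to_stops(lightest: float, darkest: float) -> float:
    """Luminance ratio expressed in photographic stops (log base 2)."""
    return math.log2(lightest / darkest)

FILM_RANGE_STOPS = math.log2(1000.0)   # assumed ~10-stop film range

scene_stops = ratio_to_stops(lightest=800.0, darkest=2.0)   # a 400:1 scene
print(round(scene_stops, 2), scene_stops <= FILM_RANGE_STOPS)
```

A scene whose ratio exceeds the film's range will clip in the highlights or shadows, which is exactly what the meter-guided matching described above tries to avoid.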
Besides being time-consuming, this is also a relatively expensive process. Moreover, the telecine transfer method typically does not offer a quantitative assessment of the film system's exposure information.
The difficulties and high costs associated with telecine transfers, along with the restrictions and limitations of light meters, highlight the need for a device that could provide inexpensive, instant, and accurate feedback concerning the cinematographer's predictions and adjustments for the lighting conditions of any scene. The current art provides “video taps”, usually associated with a film camera, that inadequately address this problem. A video tap is a device that uses one or more charge-coupled-device (CCD) sensors to capture a fraction of the light reflected from the scene and transforms that light into an electronic signal, or a number of signals, that define an image to be displayed on a monitor. After light reflected from the scene crosses the optical path of the tap, it encounters the CCD sensors, which can be described simplistically as an array of 380,000 or more opto-electrical sensing cells, and an electrical signal is generated. After passing through pre-amplifiers and amplifiers, the signals from the red, green, and blue channels are converted into digital signals by an analog-to-digital (A/D) converter and subsequently processed digitally. After processing, the signals are sent to a display device. The viewed image, however, does not necessarily reflect the exposures that would result on a film of interest, nor the appearance the scene would have after being recorded on film, processed, and subsequently telecine-transferred.
Typically, the problem is that an electronic image capture device, e.g., a CCD, does not “see” light in the same way that a conventional (film) system does. Moreover, in most cases, the spectral product curves of an electronic camera system's CCD sensor cannot be described as linear combinations of the film system's spectral product curves. This means that no color correction matrix (nor, for that matter, any combination of matrices and one-dimensional look-up tables (LUTs)), applied to one of the sets, can provide a perfect match between a set of CCD exposures and a set of film system exposures; only an approximation can be obtained. Therefore, in the case of an electronic camera system and a conventional motion picture system, a perfect match cannot be obtained by the current state of image processing, using techniques such as matrices and LUTs. Consequently, in general, video taps yield errors in both the color and tone reproduction of scenes.
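The matrix limitation described above can be demonstrated with a small numerical sketch. All exposure values below are synthetic illustrations, not measured CCD or film data: a best-fit 3x3 matrix is computed in the least-squares sense, and a residual remains because the synthetic film response contains a component outside the span of the CCD exposures, standing in for the non-matchable spectral differences.

```python
import numpy as np

# Sketch of why a 3x3 color-correction matrix yields only an approximation.
# We fit the least-squares matrix M mapping synthetic CCD RGB exposures to
# synthetic film-system exposures; the small nonlinear term below plays the
# role of the spectral mismatch no matrix can remove.
rng = np.random.default_rng(0)

ccd = rng.random((20, 3))                    # 20 invented CCD RGB exposures
mix = np.array([[0.90, 0.10, 0.00],
                [0.05, 0.85, 0.10],
                [0.00, 0.15, 0.85]])
film = ccd @ mix.T + 0.05 * ccd ** 2         # linear mixing + mismatch term

M, *_ = np.linalg.lstsq(ccd, film, rcond=None)   # best possible 3x3 matrix
residual = float(np.abs(ccd @ M - film).max())
print(residual > 0.0)  # True: the match is approximate, never exact
```

The residual never vanishes, which is the patent's point: matrices and 1-D LUTs approximate, but cannot equal, the film system's response.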
A variety of methods can be found in the current art that attempt to emulate the look of a scene captured on film and transferred to video format by a telecine. For instance, U.S. Pat. No. 5,374,954, “Video System for Producing Video Image Simulating the Appearance of Motion Picture or Other Photographic Film”, describes a method in which a look-up table digitally reassigns the color and tone-scale components of each pixel within an image originating on video. Assuming that a certain film stock is selected as the original image-storing medium, the goal of the method is to generate an approximation between the content of each pixel and its corresponding image on the broadcast display of the transferred, processed film. As part of the method, several diagnostic charts are shot under controlled, even illumination, using several different illuminants, on both film and video; the data measured from those tests allow the look-up tables to be constructed. One drawback of the method is that large look-up tables tend to require powerful host computers with boards capable of handling significant volumes of image processing, which implies high costs for the system. In addition, as explained above, this method provides only an approximation, not a match, between the modified video look and the telecine-transferred film.
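The general LUT mechanism can be sketched as follows. This is a toy illustration only: the gamma-like tone curve is invented, whereas in the patented method the tables are built from chart data measured on both film and video.

```python
import numpy as np

# Sketch of look-up-table pixel reassignment: a precomputed 1-D LUT remaps
# each pixel's code value toward a target "film look".  The tone curve here
# (a simple power function) is an invented stand-in for a measured table.
lut_in = np.linspace(0.0, 1.0, 17)     # 17 table sample points
lut_out = lut_in ** 0.85               # assumed, illustrative tone mapping

def apply_lut(image: np.ndarray) -> np.ndarray:
    """Remap every code value through the LUT by linear interpolation."""
    return np.interp(image, lut_in, lut_out)

frame = np.array([[0.00, 0.25],
                  [0.50, 1.00]])       # tiny normalized video frame
print(apply_lut(frame))
```

Per-pixel table look-ups like this are cheap for one table, but the patent's full-color correction requires much larger (multi-dimensional) tables, hence the hardware cost the passage mentions.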
A more unorthodox method of correction is presented in U.S. Pat. No. 5,475,425, “Apparatus and Method for Creating Video Outputs that Emulate the Look of Motion Picture Film”. The method can be summarized as follows: the scan rate of the CCD sensors in the video camera is increased so that the camera outputs non-interlaced video images, at a scan rate analogous to the capture frame rate of a conventional motion picture camera. The resulting image is then converted from the analog to the digital domain by an analog-to-digital (A/D) converter. A two-dimensional pattern of electronic artifacts is added to the signal with the objective of simulating film grain properties. Finally, the signal is converted back to the analog domain by a digital-to-analog (D/A) converter and displayed on an output device, such as a computer monitor. The apparatus does not offer a direct method of correction for the color and tone-scale reproduction of the video signal and, once again, a relatively costly approximation of the telecine-transferred film is the core of what is provided.
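The grain-emulation step can be sketched in a few lines. The Gaussian noise model and its strength below are assumptions for illustration; the patent does not specify the statistics of its artifact pattern.

```python
import numpy as np

# Sketch of grain emulation: a two-dimensional pattern of random artifacts
# is added to the digitized frame to mimic film grain.  Gaussian noise of
# fixed strength is an assumed model, not the patented artifact pattern.
rng = np.random.default_rng(42)

def add_grain(frame: np.ndarray, strength: float = 0.02) -> np.ndarray:
    """Overlay a 2-D noise pattern, keeping values in the legal [0, 1] range."""
    grain = rng.normal(loc=0.0, scale=strength, size=frame.shape)
    return np.clip(frame + grain, 0.0, 1.0)

frame = np.full((4, 4), 0.5)           # flat mid-gray test frame
out = add_grain(frame)
print(out.shape, float(out.min()) >= 0.0, float(out.max()) <= 1.0)
```

Note that, as the passage states, this only adds a film-like texture; it does nothing to correct the color or tone-scale mismatch between video and film.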
It is also of interest to use a video tap system for matte photography applications, i.e., green-screen and especially blue-screen work. When special effects are created with the aid of matte photography techniques, it becomes very important to illuminate the blue screen (or green screen) properly, in order to distinguish it from the main subject during post-production. For better results, individual illumination set-ups are created for the main subject and for the blue screen. A typical light meter cannot readily predict the blue exposure content of a blue screen. The reason is that the response curve of a typical light meter is similar to the V(λ) curve (spectral luminous efficiency), and it is not intended to meter high-chroma blue materials for blue exposure. Therefore, light meters are not efficient tools for determining differences between levels of blue exposure. Cinematographers are forced to rely on a collection of rules of thumb, or on laborious iterative procedures, in order to estimate the correctness of the exposure of a blue screen.
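The metering blind spot described above can be made concrete with a toy spectral example. All reflectances and the blue-record sensitivity below are invented; only the three V(λ) weights approximate real photopic values. Two materials are constructed to read (nearly) the same on a luminance meter while differing greatly in blue-channel exposure.

```python
import numpy as np

# Toy illustration: a V(lambda)-weighted meter cannot separate blue exposure
# levels.  A high-chroma blue screen and a neutral patch are tuned to equal
# luminance, yet their blue-record exposures differ by an order of magnitude.
# All spectra are invented; v_lambda roughly follows the photopic curve.
wavelengths = np.array([450, 550, 650])           # nm, very coarse sampling
v_lambda = np.array([0.038, 0.995, 0.107])        # approx. photopic weights
blue_sens = np.array([0.90, 0.05, 0.00])          # assumed blue-record curve

blue_screen = np.array([0.80, 0.05, 0.02])        # high-chroma blue material
gray_patch = np.array([0.0722, 0.0722, 0.0722])   # neutral, luminance-matched

def lum(refl: np.ndarray) -> float:
    return float(refl @ v_lambda)                 # what the meter reads

def blue(refl: np.ndarray) -> float:
    return float(refl @ blue_sens)                # blue-channel exposure

print(round(lum(blue_screen), 3), round(lum(gray_patch), 3))   # ~equal
print(blue(blue_screen) / blue(gray_patch))                    # ~10x apart
```

The meter reports the two surfaces as equivalent, while the blue record sees them an order of magnitude apart, which is why rules of thumb are needed in the current art.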
In summary, the current art demonstrates the difficulties involved in predicting how a scene will be reproduced after it is captured on film and subsequently telecine-transferred. It has also been demonstrated that “seeing” a scene in the same way that film does is not an easy task. In addition, none of the examples of the current art presented offers any information regarding the individual exposure content of each of the red, green, and blue channels. What is needed is a particular design of the system spectral curves that would allow a solid assessment of how a scene will be reproduced after it is captured on film and subsequently telecine-transferred, and additionally a solid assessment of the exposure content differences between the red, green, and blue channels. The latter would, for example, allow an assessment of the exposure content differences between blue objects in the foreground and the blue screen, enabling the creation of a correct illumination set-up for both the foreground subject and the blue screen.