It is well known in the field of video displays to generate pictures on a screen by combining multiple beams of light. For example, a typical rear projection color television set includes three cathode ray tubes (CRTs), each CRT processing one of the primary colors (red, green, or blue). By combining the three monochromatic beams, the set can produce full color television pictures. However, in order for the set to produce accurate pictures, proper alignment of the beams must be maintained. That is, the CRTs must be calibrated so that their beams are focused at the same point on the screen. Accordingly, the calibration of the CRTs is often referred to as a convergence procedure, and beam alignment is often referred to as convergence. For a more detailed discussion of convergence, reference is made to FIGS. 1 and 2.
FIG. 1 is a plan view of a model rear projection television set. The components of the set are housed within a cabinet 10, and they include: a CRT 12, a lens 14, a mirror 16, and a screen 18. The model set includes three CRTs and multiple lenses for each CRT, although for clarity, only a single CRT and a single lens are shown in the figure. The light from the CRT passes through the lens and illuminates the mirror which, in turn, reflects the light onto the screen for observation by the viewer.
FIG. 2 illustrates the relationship between the three CRTs of the model set. As can be seen from the figure, CRTs 12, 20 and 22 are matched respectively with lenses 14, 24 and 26, and the CRTs are aligned so that their beams converge. To maintain the alignment of the beams one or more photosensors are typically provided at the periphery of the screen. An example is shown in FIG. 3.
FIG. 3 includes an arrangement of four photosensors, 28, 30, 32 and 34. The sensors are located inside the cabinet and are not visible to the viewer. Also, the sensors are located behind a screen frame 36, which is not part of the display screen, and therefore the sensors do not interfere with images displayed on the screen. Nevertheless, the sensors are located within the area that can be scanned by the CRTs.
FIG. 4A shows the relationship between sensors 28-34, screen 18, and a CRT scannable area 38 as seen from the viewer's perspective. For clarity the screen frame is not shown. When performing the convergence procedure, test patterns are produced within the scannable area and detected by the sensors. More specifically, each CRT produces two test patterns, a wide pattern and a narrow pattern. Thus, to complete the convergence procedure the following patterns are produced: red-wide, red-narrow, blue-wide, blue-narrow, green-wide, and green-narrow. These patterns and their function are discussed in more detail in connection with FIGS. 4B-4E.
FIGS. 4B-4E show illustrative test patterns as generated by any one of the primary color CRTs. In the interest of brevity, FIGS. 4B-4E are discussed in the context of the red CRT only. However, it should be noted that the discussion is equally applicable to the other primary color CRTs.
FIGS. 4B and 4C show test patterns that are generated when the red CRT is properly aligned with the center of the screen. FIG. 4B shows a red-wide pattern 40 and its relative position to the scannable area, screen, and sensors. As can be seen from the figure, the red-wide pattern is made up of four illuminated areas that define a rectangle (indicated by the dotted line). Each illuminated area overlaps the entirety of one sensor. The center point of the scannable area is denoted by "o" and the center of the rectangle defined by the red-wide pattern is denoted by "x". Since the red CRT is properly aligned, the o and x coincide.
FIG. 4C shows a red-narrow pattern 42. As in the case of the wide pattern, since the CRT is properly aligned, the x and o coincide. However, in the case of the narrow pattern, only one half of each sensor is overlapped by the pattern. The relative sensor overlap in the wide pattern and narrow pattern cases is key to maintaining alignment of the CRT, and will be discussed in more detail below. First, FIGS. 4D and 4E are discussed in order to show the effect of misalignment on the test patterns.
FIG. 4D shows a red-wide pattern 44 that is generated when the red CRT is misaligned by an amount δ in the horizontal direction (left of center from the viewer's perspective). Since the pattern is sufficiently wide, it still overlaps the entirety of each of the sensors. FIG. 4E shows a red-narrow pattern 46 that is generated when the red CRT is misaligned by the same amount δ in the horizontal direction. In FIG. 4E, since the pattern is narrow, the sensor overlap is changed relative to the overlap shown in FIG. 4C. As will be described below, this change in overlap is used to determine the amount of misalignment, which is, in turn, used as an error signal for the purpose of correcting the misalignment.
The amount of beam misalignment at a position defined by a given sensor is determined by observing that sensor's readings when exposed to the wide and narrow patterns. The observed readings are used to form a ratio which is then compared to a desired ratio, the desired ratio being the ratio obtained for the sensor under no misalignment conditions. The difference between the measured ratio and the desired ratio indicates the amount of beam misalignment. Described below is an illustrative misalignment determination as performed by sensor 28.
FIGS. 5A-5E show the relationship between sensor 28 and various test patterns. FIG. 5A depicts the sensor in a no pattern condition. FIGS. 5B-5E show the sensor as illuminated by the patterns of FIGS. 4B-4E, respectively. To measure the misalignment, the light incident on sensor 28 is measured for each of the wide and narrow cases and a ratio of the two is computed. The value of the ratio in the no misalignment case is the desired ratio, and it is obtained in the following manner: the sensor reading under no pattern conditions (noise) is subtracted from the sensor reading under wide-pattern/no-misalignment conditions (FIG. 5B) to generate a first difference; the sensor reading under no pattern conditions is subtracted from the sensor reading under narrow-pattern/no-misalignment conditions (FIG. 5C) to generate a second difference; and the second difference is divided by the first difference. To obtain the value of the ratio for the depicted misalignment: the sensor reading under no pattern conditions (noise) is subtracted from the sensor reading under wide-pattern/δ-misalignment conditions (FIG. 5D) to generate a first difference; the sensor reading under no pattern conditions is subtracted from the sensor reading under narrow-pattern/δ-misalignment conditions (FIG. 5E) to generate a second difference; and the second difference is divided by the first difference. The difference between the two ratios thus obtained indicates the amount of misalignment. The red CRT is then adjusted until the ratios match. A similar procedure is executed for the other primary beams and in this way convergence is achieved.
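The two-ratio computation described above can be sketched as follows. The function name and the numeric sensor readings are hypothetical illustrations chosen for this sketch, not values taken from an actual subsystem.

```python
def alignment_ratio(wide_reading, narrow_reading, noise_reading):
    """Narrow/wide ratio with the no-pattern (noise) reading subtracted out."""
    return (narrow_reading - noise_reading) / (wide_reading - noise_reading)

# Desired ratio: readings taken with the CRT known to be aligned (FIGS. 5B, 5C).
desired = alignment_ratio(wide_reading=200.0, narrow_reading=110.0, noise_reading=20.0)

# Measured ratio: readings taken under the current, unknown alignment (FIGS. 5D, 5E).
measured = alignment_ratio(wide_reading=198.0, narrow_reading=80.0, noise_reading=20.0)

# The difference serves as the error signal; the CRT is adjusted until it is zero.
error = measured - desired
```

Because the wide pattern still overlaps the entire sensor under small misalignments, the denominator stays nearly constant, and the ratio tracks the narrow pattern's shifted overlap.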
In order to achieve precise convergence, the ratio calculations mentioned above must be performed with a high degree of accuracy. For this purpose the calculations are typically performed digitally. However, to perform the calculations digitally the sensor readings must first be passed through an A/D converter. Thus, the sensor subsystem generally includes one or more A/D converters, which are shared by the sensors. To illustrate, the portion of the sensor subsystem associated with sensors 30 and 32 is shown in FIG. 6.
FIG. 6 shows how prior convergence subsystems implement sharing of an A/D converter 49. As can be seen from the figure, sensors 30 and 32 occupy their positions at the edge of the screen 18 and are coupled to a switching circuit 50 via couplings 52 and 54, respectively. The switching circuit includes two single-pole/single-throw switches, 56 and 58, for the purpose of selectively coupling the sensors to the A/D converter. Thus, to couple sensor 30 to the A/D converter switch 56 closes, and to couple sensor 32 to the A/D converter switch 58 closes.
In prior video display systems, the switching circuit is used to simplify construction of the systems. In typical prior system configurations, the sensor couplings, e.g. 52 and 54, each have a first end connected to a sensor and a second end that runs to a common area within the set, e.g. a multi-pin connector 55, where the second end is connected to the switching circuit. In order for the switching circuit to intelligently select among the sensors, it must know which sensor each of the second ends is coupled to. One way to provide this information to the switching circuit is through careful observation during construction of the display. For example, upon construction care may be taken to assure that the second end of coupling 54 (sensor 32) is coupled to a first pin of a multi-pin connector; and that the second end of coupling 52 (sensor 30) is coupled to a second pin of a multi-pin connector. In this manner, each sensor is associated with a connector pin. Since the sensor pin assignments are predetermined at the time of construction, the assignments may be designed into the switching circuit and the switching circuit can then intelligently choose among the sensors when performing convergence testing.
However, there is a second prior method for determining the sensor/coupling associations. In the second method, it is not necessary to carefully observe the couplings during construction. In the case of the second ends being connected to a multi-pin connector, for example, it is not necessary to know which sensor is associated with each pin. Instead, a sensor/sensor coupling association routine is performed after construction.
In the sensor/sensor coupling association routine of prior systems a series of test patterns are generated, each pattern illuminating one sensor, and no two patterns illuminating the same sensor. For each pattern, the switches in the switching circuit are sequentially closed and the output of the A/D converter is monitored, the switches being closed one at a time, with no two switches being closed at the same time. When a switch is closed and it does not correspond to the coupling for the illuminated sensor, there is only a noise level output at the A/D converter. However, when a switch is closed and it does correspond to the coupling for the illuminated sensor the output rises significantly above the noise level. In this manner the switching circuit can determine coupling/sensor relationships and store a record of the relationships in a convergence subsystem memory (not shown). Of course, in order for this method to work the sensor positions must be predetermined so that the system knows where to generate the required test patterns. Furthermore, the switching circuit must include switching control circuitry (not shown) to perform such functions as recognizing which sensor is being illuminated by each test pattern and positioning switches 56 and 58 accordingly.
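The association routine described above can be sketched in outline as follows. The callables `illuminate` and `read_adc`, the noise threshold, and the simulated wiring are all hypothetical stand-ins for the subsystem's actual pattern generator, switching control circuitry, and converter.

```python
def associate_sensors(sensors, switches, illuminate, read_adc, noise_threshold):
    """Map each switch to the sensor wired to it: light one sensor at a time,
    close each unassigned switch in turn, and look for an A/D reading that
    rises well above the noise floor."""
    mapping = {}
    for sensor in sensors:
        illuminate(sensor)                    # test pattern over exactly one sensor
        for switch in switches:
            if switch in mapping:
                continue                      # switch already paired with a sensor
            if read_adc(switch) > noise_threshold:
                mapping[switch] = sensor      # signal, not noise: pairing found
                break
    return mapping

# Demonstration with simulated wiring (switch "S56" -> sensor 30, "S58" -> sensor 32).
wiring = {"S56": 30, "S58": 32}
lit = {"sensor": None}

def illuminate(sensor):
    lit["sensor"] = sensor

def read_adc(switch):
    return 100.0 if wiring[switch] == lit["sensor"] else 2.0  # ~2 counts of noise

print(associate_sensors([30, 32], ["S56", "S58"], illuminate, read_adc, 10.0))
# → {'S56': 30, 'S58': 32}
```

The routine recovers the wiring without any construction-time bookkeeping, which is the point of the second prior method.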
FIG. 7 illustrates how the sensor/sensor coupling association routine is performed in the convergence subsystem of FIG. 6. As can be seen from FIG. 7, a pattern for determining a sensor connection 60 is generated for the purpose of determining the coupling associated with sensor 32. Once the pattern is generated, switches 56 and 58 begin to sequentially close. When switch 56 closes, only noise is observed at the output of the A/D converter. In contrast, when switch 58 closes (as shown), the output of the A/D converter jumps above the noise level. Thus, by knowing which sensor has been illuminated and monitoring the output of the A/D converter, the system determines that the coupling connected to switch 58 is the coupling corresponding to sensor 32. Following determination of the sensor/sensor coupling pairings, the system may initiate convergence testing.
FIG. 8 illustrates how convergence testing is carried out in accordance with the prior convergence patterns and subsystems that have been described above. The figure includes an exemplary test pattern (namely wide pattern 44) for the purpose of discussing procedures applicable to all the prior convergence patterns. As can be seen from the figure, the test pattern illuminates both of sensors 30 and 32 at the same time. Thus, in order to obtain independent sensor readings at the output of the shared A/D converter 49, it is necessary to switch between the sensors. The necessary switching is provided through switching circuit 50. More specifically, to obtain a wide pattern reading for sensor 30, switch 56 is closed while switch 58 is open. Conversely, to obtain a wide pattern reading for sensor 32, switch 58 is closed while switch 56 is open. This process is repeated during exposure to a narrow pattern, such as pattern 46, and thus the system acquires the sensor data necessary to perform the convergence calculations.
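The acquisition sequence just described can be sketched as follows. The `show_pattern` and `read_adc` callables, the switch labels, and the simulated signal levels are hypothetical placeholders for the pattern generator and the switched A/D converter.

```python
def acquire_readings(switches, patterns, show_pattern, read_adc):
    """For each test pattern, close one switch at a time so the shared A/D
    converter reports each sensor's reading independently."""
    readings = {}
    for pattern in patterns:
        show_pattern(pattern)           # pattern illuminates both sensors at once
        for switch in switches:
            # Only this switch is closed, so the converter sees one sensor.
            readings[(switch, pattern)] = read_adc(switch)
    return readings

# Demonstration with simulated hardware responses.
current = {"pattern": None}

def show_pattern(pattern):
    current["pattern"] = pattern

def read_adc(switch):
    levels = {"wide": 200.0, "narrow": 110.0}  # illustrative levels, same per channel
    return levels[current["pattern"]]

data = acquire_readings(["S56", "S58"], ["wide", "narrow"], show_pattern, read_adc)
# Four independent (switch, pattern) readings for the convergence calculation.
```

Each (switch, pattern) pair yields one reading, so two sensors and two patterns give the four values needed for the ratio computation described earlier.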