In the 1920s, a series of experiments on human sight was conducted that led to the specification of what is called the CIE XYZ color space. This color space encompasses the full gamut of colors the human eye can perceive. Most computer monitors, televisions, and similar devices use an RGB (red/green/blue) color space model, which is a subset of the CIE XYZ color space because these devices cannot reproduce every humanly perceptible color. By combining different intensities of three primary colors (red, green, and blue), any color within the RGB color space gamut can be created. Notably, white is the combination of all three primary colors at full intensity, and black is the absence of all three.
Most electronic displays in use today represent color with 8 bits of precision; that is, the intensity of each color channel (red, green, or blue) can be represented as an 8-bit number (0-255 decimal, or 0x00-0xFF hex). A modern electronic display is thus capable of producing on the order of 16.7 million (256^3 = 16,777,216) distinct colors.
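The 8-bit-per-channel representation described above can be sketched briefly. The function names below are illustrative; the packing order (red in the high byte) is a common convention, not mandated by the text.

```python
def pack_rgb(r: int, g: int, b: int) -> int:
    """Pack three 8-bit channel intensities into one 24-bit color value."""
    for channel in (r, g, b):
        if not 0 <= channel <= 255:
            raise ValueError("each channel must be 0-255")
    return (r << 16) | (g << 8) | b

def unpack_rgb(color: int) -> tuple[int, int, int]:
    """Recover the individual 8-bit channels from a packed 24-bit value."""
    return (color >> 16) & 0xFF, (color >> 8) & 0xFF, color & 0xFF

# 256 levels per channel across three channels:
distinct_colors = 256 ** 3  # 16,777,216 -- the "16.7 million" figure
```

Packing into a single integer is convenient when a color doubles as a data symbol, since the 24-bit value can be compared and transmitted as one unit.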
To transmit data through a display device, a sequence of colors representing encoded data must be presented as a “video” color stream, or alternatively mimicked via dedicated software, at a frame rate that the given display device can reproduce reliably. The refresh rate of a given display device dictates the highest achievable video frame rate, with 60 Hz being a common baseline on desktop computer displays. Video at 15-30 frames per second (or more) can be reliably displayed on such devices, meaning that raw data transfer rates on the order of a few tens to a few hundred bits per second could be achieved, assuming a data encoding density of 3 to 8 bits per distinct color. By increasing the encoding density, the frame rate, or both, the data transfer rate can be increased accordingly.
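The raw throughput figures above follow from a simple product, assuming each displayed frame carries exactly one color symbol (a simplification; synchronization and error-correction overhead would reduce the usable rate):

```python
def raw_data_rate_bps(frames_per_second: int, bits_per_color: int) -> int:
    """Raw throughput when each displayed frame carries one color symbol."""
    return frames_per_second * bits_per_color

# The ranges discussed above:
low = raw_data_rate_bps(15, 3)   # 45 bits/s  -- "a few tens"
high = raw_data_rate_bps(30, 8)  # 240 bits/s -- "a few hundred"
```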
Many different electronic sensors are capable of detecting colors, and most work on the same principle: a photo-sensitive device behind one or more color filters. For example, an imaging sensor of the kind found in a digital camera consists of thousands (or millions) of pixels, each individual pixel sitting behind a red, green, or blue color filter. By counting the number of photons hitting the sensor over a given period of time (integration), a relative digital count for each red, green, and blue pixel can be ascertained—the combination of which yields a digital representation of the sensed color.
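The reduction of raw per-channel integration counts to a relative color can be sketched as follows. The function name and the notion of a single full-scale count are illustrative; a real sensor's full-scale value depends on its integration time and gain, as defined in its datasheet.

```python
def counts_to_rgb8(red: int, green: int, blue: int,
                   full_scale: int) -> tuple[int, int, int]:
    """Scale raw per-channel integration counts to 8-bit relative intensities.

    full_scale is the maximum possible count for the chosen integration
    period (illustrative; consult the actual device's datasheet).
    """
    def scale(count: int) -> int:
        return min(255, round(255 * count / full_scale))
    return scale(red), scale(green), scale(blue)
```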
Other than common multi-pixel imaging sensors, there also exists a class of devices that are essentially dedicated “single-pixel” color sensors; that is, sensors that are only able to detect a single color at a time. These sensors typically use an array of photo-sensitive elements, each with a corresponding red, green, or blue (and sometimes also clear) color filter—the output of such sensors is a digital count representing the overall illumination of each color channel. An example of such a sensor is the TCS3414 digital color sensor manufactured by Austria Micro Systems (AMS). Similar sensors are also manufactured by Hamamatsu and Avago Technologies, among others. They are generally available in very small packages (approximately 2 mm × 2 mm) and at very low price points (a few dollars each). These sensors are used in industry for a number of purposes, including monitor backlight color temperature monitoring/correction, industrial process control, instrumentation (colorimeters), consumer toys, etc.
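Sensors of this class typically report each channel's digital count as a pair of bytes read over a serial bus such as I²C. A minimal sketch of reassembling those counts is shown below; the low-byte-first ordering and the green/red/blue/clear channel sequence are assumptions for illustration and must be verified against the specific sensor's datasheet.

```python
def channel_from_bytes(low: int, high: int) -> int:
    """Combine a low/high byte pair into one 16-bit channel count.

    Low-byte-first ordering is assumed; verify against the datasheet.
    """
    return (high << 8) | low

def channels_from_buffer(buf: bytes) -> dict[str, int]:
    """Interpret an 8-byte read as four 16-bit channel counts.

    The green/red/blue/clear ordering here is an assumption.
    """
    names = ("green", "red", "blue", "clear")
    return {name: channel_from_bytes(buf[2 * i], buf[2 * i + 1])
            for i, name in enumerate(names)}
```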
Most electronic sensors described above do not respond equally to each primary (red, green, or blue) color. This unequal channel response, together with potentially inconsistent repeatability and varying overall sensitivity, creates challenges if such single-pixel sensors are to be used to sense and decode an encoded “video” color stream. Additionally, inconsistencies between display devices (display technology, spectral response, brightness, contrast, gamma response, etc.) further complicate matters. What is needed is a novel method that accounts for these challenges inherent in both the color sensor and the transmitting display, allowing the sensor to operate at relatively high frequencies of 15-30 frames per second (or more) to decode a single-color “video” stream, and effectively become a single-pixel “video camera.”
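One straightforward way to compensate for unequal channel response—offered here only as a sketch, not as the method of the invention—is to normalize raw counts against a measured white reference and then decode each reading to its nearest palette color:

```python
def calibrate(raw: tuple[int, int, int],
              white_ref: tuple[int, int, int]) -> tuple[float, ...]:
    """Normalize raw channel counts by a measured white reference,
    compensating for unequal per-channel sensor response."""
    return tuple(c / w for c, w in zip(raw, white_ref))

def decode_symbol(raw, white_ref, palette):
    """Return the index of the palette color nearest the calibrated
    reading (squared Euclidean distance in normalized channel space)."""
    reading = calibrate(raw, white_ref)
    def dist(p):
        return sum((a - b) ** 2 for a, b in zip(reading, p))
    return min(range(len(palette)), key=lambda i: dist(palette[i]))

# Hypothetical 2-bit palette: red, green, blue, white (normalized space).
PALETTE = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]
```

A practical decoder would also need to handle display-side variation (gamma, brightness) and frame synchronization, which this sketch deliberately omits.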
In recent years, much of the research and development in the communications industry has been concentrated in the area of digital signal transmission. As is well known in the art, digital signal transmission typically involves transmission of data with a carrier frequency. The carrier frequency is modulated by data so that a frequency bandwidth is occupied by the transmitted signal. The growing demand for access to data and communication services has placed a significant strain on the available bandwidth in traditional channels. Moreover, there is an ever-increasing demand for higher data communication rates for the purpose of decreasing data transmission time. An increase in the data rate typically results in an increased bandwidth requirement, placing a further strain upon the available bandwidth for transmission of signals. In the case of this invention, there is no true “carrier” to modulate on top of—the changing sequence of colors itself (the color stream) becomes an embedded “clock.”
In an effort to increase data rates without sacrificing the available bandwidth, a number of increasingly sophisticated coded modulation schemes have been developed. For example, quadrature amplitude modulation (QAM), traditionally implemented over RF or audio channels, employs both amplitude and phase modulation in order to encode more data within a given frequency bandwidth. Another modulation technique involves multiple phase shift keying (MPSK) to increase data capacity within a given bandwidth. These high-level modulation schemes are very sensitive to channel impairments; that is, the information encoded by means of such techniques is often lost during transmission due to noise, Rayleigh fading, and other factors introduced over the communication medium.
In order to compensate for the increased sensitivity of these high-level modulation schemes, various forward error correction coding techniques have been employed. One such error coding technique is trellis coded modulation (TCM). Trellis coded modulation is desirable since it combines both coding and modulation operations to provide effective error control without sacrificing power and bandwidth efficiency. Furthermore, it has been shown that trellis coded modulation schemes perform significantly better than their uncoded equivalents with the same power and bandwidth efficiency. Trellis codes have been developed for many of the high-level, high-rate modulation schemes, including well-known 8-PSK modulation and square 16-QAM modulation, among others.