It has become common to use a sensor to monitor some aspect of the physical world. For example, a video camera may be used to monitor a scene at a particular location. It is, furthermore, not uncommon for sensors to be remote from the locations where interested parties review their observations. Consequently, various mechanisms have been devised to enable remote sensors to report their observations. For example, some video cameras may transmit a stream of digitized video across a communication network to a storage location and/or to a digitized video viewer for immediate review. It is somewhat less common to use a cluster of remote sensors to monitor the same location simultaneously, and such multi-sensor monitoring presents particular difficulties.
Where multiple sensors are present at a location, it is generally desirable to synchronize their observations. However, at the present time, standards for such synchronization are typically inadequate and/or insufficiently tested. The result is a motley assortment of custom synchronization attempts, each with its own advantages and disadvantages.
One common shortcoming of such synchronization attempts is precisely that they are custom. Custom solutions tend to be expensive, and this expense is of practical significance, but perhaps more significantly, custom solutions begin their lifetimes untested. Since untested systems tend to be unreliable, for example, due to designer inexperience and/or what engineers commonly refer to as system "bugs," they are generally unsuitable for use in environments requiring high reliability. They may even be prohibited from environments where their potential unreliability puts life and limb at risk. Even where a system as a whole is untested, use of well-tested system components may enhance reliability.
A particular area of functionality where this issue may arise is that of data compression. Many modern sensors generate large quantities of "raw" data, so that, before a sensor's data is stored and/or transmitted across a communication network, it is desirable to compress the data. Again, the example of a video camera serves well. However, data compression is a relatively complex area of art, and thus particularly susceptible to the problems of custom solutions. In addition, data compression may introduce artifacts into data generated by a sensor. For example, many video compression schemes are lossy. Consequently, where data compression is in use, it is desirable that artifacts introduced by the compression be well characterized. At the very least, custom data compression schemes introduce uncertainty with respect to such artifacts.
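The artifact problem described above can be illustrated with a toy quantizer. This is not any real codec, merely a minimal sketch, assuming simple uniform quantization, of how a lossy scheme discards information and thereby introduces bounded but nonzero reconstruction error, which is what "well characterized artifacts" would mean in practice:

```python
import numpy as np

def lossy_compress(frame: np.ndarray, step: int = 16) -> np.ndarray:
    # Quantize 8-bit pixel values; information finer than `step` is discarded.
    return (frame // step).astype(np.uint8)

def decompress(quantized: np.ndarray, step: int = 16) -> np.ndarray:
    # Reconstruct pixel values; a residual quantization error of up to
    # `step - 1` per pixel remains, a well-characterized artifact bound.
    return (quantized.astype(np.uint16) * step).clip(0, 255).astype(np.uint8)

frame = np.arange(256, dtype=np.uint8).reshape(16, 16)
reconstructed = decompress(lossy_compress(frame))
max_error = int(np.max(np.abs(reconstructed.astype(int) - frame.astype(int))))
```

For this scheme the artifact is fully characterized: the per-pixel error never exceeds `step - 1`. A custom scheme without such analysis offers no comparable guarantee.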
However, using a well-tested and/or conventional data compression component also has its problems. Such components may not support data stream synchronization, and may be inflexible with respect to reconfiguration to support such functionality. Such inflexibility is not insignificant. Some solution attempts using conventional data compression components go so far as to corrupt sensor data with synchronization data, for example, by overwriting a portion of each frame in a digitized video data stream. Even more sophisticated techniques, such as watermarking, may introduce undesirable artifacts into sensor data, rendering the data unsuitable for some applications.
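The frame-overwriting practice criticized above can be sketched as follows. The function names and the 8-byte big-endian timestamp layout are illustrative assumptions, not any real protocol; the point is that the stamped pixels are irrecoverably corrupted:

```python
import struct
import numpy as np

def stamp_frame(frame: np.ndarray, timestamp_us: int) -> np.ndarray:
    """Overwrite the first 8 pixels of a grayscale frame with a big-endian
    64-bit microsecond timestamp, corrupting those pixels. Assumes a uint8
    frame with at least 8 elements."""
    stamped = frame.copy()
    stamped.flat[:8] = np.frombuffer(struct.pack(">Q", timestamp_us),
                                     dtype=np.uint8)
    return stamped

def read_stamp(frame: np.ndarray) -> int:
    # Recover the timestamp from the overwritten pixel region.
    return struct.unpack(">Q", bytes(frame.flat[:8]))[0]

original = np.full((4, 4), 200, dtype=np.uint8)
stamped = stamp_frame(original, 1_700_000_000_000_000)
```

Note that the original pixel values in the stamped region are simply lost, which is exactly the data corruption the passage objects to.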